From the Shop Floor to the Cloud: AI in Metal Manufacturing

E13

Luke: Welcome back to the Smart Metals Podcast.

Hi.

I'm Luke van Enkhuizen.

I'm together with my co-host here,

Denis: Denis Gontcharov

Luke: and we dive into designing
and implementing smart factories

for the metals industry, helping
you become more productive.

Today, Denis and I will dive into
the topic of taking your shop floor

data to the cloud and leveraging AI.

We explore how that works, what the
common misconceptions are, and how you

can succeed both in the enterprise and
particularly also as an SMB.

So Denis, perhaps you can start off, because
you are focusing on this as well in

your consulting right now, right?

Denis: That's correct.

Look, for the last two years, I've mainly
been focused on helping metals process

manufacturing companies to transfer
data from their shop floor all the way

to the cloud so that they can use AI.

Essentially, I always break it down
into having a solid data infrastructure

and having a solid data
strategy, or approach to data.

And both of these concepts can be
summarized with two terms respectively:

the unified namespace and data-centric AI.

Finally, I want to point out that the
reason why we want to go to the cloud

is to leverage not just the services
available from the cloud providers, but

also the entire marketplace that's
built around those cloud offerings.

For example, you have multiple startups
or other companies developing

specific solutions and then doing
the distribution through the public

clouds of GCP, Azure and AWS.

Luke: All right, sounds excellent.

Lots to uncover here.

And I think this is particularly
relevant because who hasn't

heard about AI these days?

And I think there's a lot of
misconceptions about what it is and

for whom and how that actually works.

So if somebody is in the metals
industry and let's say they are

an SMB, so a small or medium-sized company,
and they have their equipment running.

Let's say they are a process
manufacturer, so they have a continuous

metals process such as a smelter.

Maybe you could walk them
through a little bit.

What is the essence actually of AI?

Like, what is the essence of the things
that you can do with such a system?

Right?

Denis: Yes, indeed.

I think if we talk about a use case
that's hopefully recognizable in both

process manufacturing but also in
discrete, regardless of company size, I

would think of preventive maintenance.

At the end of the day, all of the
manufacturers I mentioned, they

all produce things using machines
and those machines break down.

And that's unfortunately very expensive.

So if you can avoid the breakdown
of a machine using a solution like

AI, that's definitely going to be
recognizable to all of those audiences.

Luke: Let's paint a picture here a little
bit for someone listening in that has

these machines standing on the shop floor.

They are reasonably modern machines,
they have PLCs, they have some

automation going on on them.

And so they generate a lot of data.

And so could you maybe walk through
the steps that someone normally

would take on a high level to
get into predictive maintenance?

Denis: Yeah, of course.

Before I do that, I just want to
highlight that this episode should also be

very relevant for the smaller manufacturers,

so really the SMBs of the sector.

Very often when we talk about
AI and especially the cloud,

their eyes glaze over and they think,
oh, that's something

that the enterprises are doing.
We are not capable of doing that.

And the goal today is really to
dispel this myth and tell you that

the cloud is really a lot more
accessible than you may imagine.

So regarding the use case, preventive
maintenance, indeed: essentially what

we are trying to do there is a bit
like what a doctor does, but instead

of healing after the fact, we try
to take corrective measures to

prevent the machine from getting sick.

And the way we do this is that we
constantly monitor, at a high frequency,

a number of signals that we have at our
disposal. Those could be things provided

by the machine, for instance temperatures,
speeds, forces, accelerations, or

energy consumption. And we can augment
those by adding other signals, for example

from external sensors for things like vibrations.

There's a lot of companies now who are
adding magnetic sensors to their equipment

that constantly measure things like
vibrations of a motor, for instance.

Because those signals carry a lot of
potential information that can allow

someone to predict when this machine
is behaving strangely and may
eventually break down.
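
(To make Denis's point concrete, here is a minimal Python sketch of condensing such a high-frequency signal stream into monitoring features. The file name, column names, and sampling setup are illustrative assumptions, not details from the episode.)

```python
import pandas as pd

# Hypothetical telemetry sampled every 20 ms; the column names
# (timestamp, temperature_c, vibration_g, energy_kw) are illustrative.
df = pd.read_csv("telemetry.csv", parse_dates=["timestamp"])
df = df.set_index("timestamp").sort_index()

# Condense the raw stream into 1-second windows: the mean captures slow
# drift, the standard deviation captures instability such as vibration.
features = df.resample("1s").agg(
    {"temperature_c": "mean", "vibration_g": "std", "energy_kw": "mean"}
)

# A sustained rise in vibration variability is a typical early-warning sign.
print(features.tail())
```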

Luke: Okay.

So, we have the data set then in
the machine, we see that we generate

lots of data points, high frequency.

I mean, are we talking about
milliseconds and seconds here?
Denis: That's correct.

It can be as fast as 20-millisecond data.

Luke: Wow.

So really short timeframes. But do
manufacturers do this already without

having the AI systems, or is this
usually done to leverage AI systems?

Denis: Preventive maintenance
itself has to rely on rules.

Those rules can be either
developed by a human.

For instance, you can say that if
a machine value reaches a certain

threshold, it's likely to break.

The advantage of AI solutions is that
they also come up with rules, but

those rules are way more complex.

They allow the model to approximate
non-linear relationships.

And at the end of the day, what this
allows is that the AI will be able to

make a prediction that's more accurate
than what's possible with our existing
rule-based approaches of the past.
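
(A small sketch of the contrast Denis describes: a hand-written threshold rule versus a learned model that combines all signals. The numbers and the choice of scikit-learn's IsolationForest are illustrative assumptions.)

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic stand-in for per-window features: temperature, vibration, energy.
rng = np.random.default_rng(0)
features = rng.normal([80.0, 0.5, 120.0], [2.0, 0.05, 5.0], size=(1000, 3))

# The rule-based approach of the past: one hand-picked threshold per signal.
rule_alarm = features[:, 0] > 90.0

# A learned alternative: an anomaly detector fit on mostly-normal history
# can capture non-linear interactions between all signals at once.
model = IsolationForest(contamination=0.01, random_state=0).fit(features)
ml_alarm = model.predict(features) == -1  # -1 marks anomalous windows

print(f"rule alarms: {rule_alarm.sum()}, model alarms: {ml_alarm.sum()}")
```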

Luke: The idea is that we have a very
expensive piece of equipment that has

lots of sensors, and we suspect that
there is a risk of a standstill or breakdown.

We don't want the machine to get sick.

And so you want to go to
the doctor preventively.

And in order to make the doctor find out
what's wrong with the machine, or when

it starts to get sick, you need to look
at the medical record of the machine.

Kind of like it needs to show data,
like monitoring the heartbeat, just as

you would for a heart issue.

If you have a heart problem, they put
you on a program to check your heart

regularly, monitor your heart rate, see
if it skips any beats, and then after a

month you come with all that data to the
doctor and see if something is wrong.

So that's kind of the idea of it.

Denis: Correct.

Luke: Right.

Denis: One thing I have to
correct in my presentation: we

mentioned preventive maintenance.

If you are leveraging AI to
predict when to do the maintenance,

we typically talk about
predictive maintenance.

Luke: Oh, yes.

Yeah.

So eat healthy, exercise, go to the
gym, sleep well; those are the

preventive things you do to keep
your body healthy, but they would

not predict that something is going wrong.

But if you want to predict
something, that's the data part.

Those are two
different things, right?

So this is actually a good
distinction between the two of them.

Denis: It's a perfect metaphor.

Exactly.

I couldn't have said it better.

You can compare predictive maintenance
to wearing a bracelet that allows

someone to predict if they're
gonna have an epileptic seizure.

It can warn you, like, five minutes
before: careful, I suspect that

this person will have an epileptic
seizure in the next five to ten minutes.

That would be like
predictive maintenance.

Luke: So we are mixing
up some definitions here.

That happens.

But I think it doesn't really
matter for those that are listening,

because we get the idea now that
you want to use data to predict when

something might break down or get sick.

For that, you have a lot of high
frequency data at your disposal.

And for that, you need to get
your data into a certain system

where you can analyze it.

So why would you take it to
the cloud in the first place

and not do this on premises?

Denis: Yes, exactly.

And that's really the gist
of this whole episode.

The cloud offers us a host of off
the shelf solutions for AI models.

Those can be literally the models
themselves that we can leverage;

they are pre-trained. Or we can leverage
the cloud's AI model training frameworks,

which are used by data scientists
to keep a neat organization of all their

work, all the models that they trained
and the predictions they have made.

Finally, the cloud also offers very
large databases, which are optimized

for storing large amounts of data
while at the same time giving a very

accessible way of getting to the data.

So: very good access control
that's also well managed for security,

which makes it way more
straightforward to operate and to

manage who gets to use the
data and who can't see it.

And finally, it's also more secure
in the sense that the data is often

replicated on multiple locations.

So it's not only on one hard
drive in your plant somewhere.

And the cloud just has
a lot more compute power than

your plant will likely ever have.

So if you need to do
calculations on very large

amounts of data,

we would do something like distributed
computing, with things like Spark, that allow

you to compute on very large data sets
that would just not fit on one computer.

So to answer your question why: at
the end of the day, the cloud

offers us tools and solutions that
we simply do not have on premises.

Luke: So let's say somebody is interested
in predictive maintenance, in any way, for

any asset that they have. What steps do
they have to take before they can see

that data popping up on the screen, turned
into information they can do things with

and take action upon? What are, at a
high level, the required steps in your

experience necessary to get this done?

Denis: Well, in my experience, the
biggest step in any AI project,

whether it's predictive maintenance or
scheduling optimization or something else,

is always getting the data into a
neat, organized format that can be

used to construct the right features.

It seems to me that, and that's just
a rule of thumb, about 80 percent

of the time is actually spent on
just getting and tweaking the data.

And if you look at this problem in a
manufacturing context, we are essentially

back at the topic of a unified namespace.

Essentially, we are trying to get all
our data neatly organized in one single

place in real time so that we can then
use it to, for instance, get it to

the cloud for predictive maintenance.

The second topic that I mentioned earlier
was making the data actually better.

And that's where we land on
the topic of data centric AI.

That's something I have to define here.

Essentially, if you look at data
science and machine learning, or

AI in general, we've always thought
about AI being the construction

of AI models that learn from data.

And the way to get a better
performance was to tweak your model,

to improve your model, to choose
better models, essentially what

we call hyperparameter tuning.

This approach is difficult, especially
for manufacturers, because you

need extensive knowledge of data
science and how these models work.

It's not something I would
see a manufacturer succeed at,

at least not with the current
skills they have in house.

And that's again where the cloud shines
because the cloud can essentially do most

of this work, or let's say 80 percent
of the efficiency of this work for you.

Which means that you can essentially
focus more on getting good data.

As we all know, garbage in leads
to garbage out, and there's a

lot of improvements that can
be made by focusing on the data

while keeping the model the same.

And in my view, focusing
on the data is something a

manufacturer can do really well.

There's no one else that knows the
data better than the process engineers

and the other domain experts who work
day in and day out with this data.

Luke: Okay, so step one is getting the
data ready to be leveraged by such a

system, meaning we need to organize it.

And I think we uncovered this topic in
previous episodes already a bit more, but

for those who do not know, the unified
namespace is indeed, would you say,

a concept, a philosophy in designing
your architecture and organizing your

data in a way that can be leveraged.

I really recommend previous
episodes that we did.

For example, I think that Abraham
explained it very well last time.

So I will say: I think you should watch
the last episode, on doing the work upfront

so that you can leverage the system.

So that's step one.

And I wholeheartedly agree with
that in all kinds of use cases,

not only for prediction, but also
actually for running the business.

The second part is
making your data better.

And you said data-centric AI.

Where the concept is really that, instead
of building your own model wrapped

around your own data sets, something
that might really work with what you've

created, which you then constantly tweak
and adapt, you basically turn it around

and say: what things can I leverage?

What do I have to do
up front to use that?

Is that kind of a summary of it?

Denis: Yes, maybe even a
simplified version is to say

that AI equals model plus data.

In the past we kept the data constant
and we worked on the model; that was

called model-centric AI. With data-centric
AI, it's also AI equals model

plus data, but here we keep the model
constant, so we don't change it.

Instead, we tweak the data and to make
it more concrete, let me give you an

example of how this is relevant for
data used for predictive maintenance.

In essence, when you think of
predictive maintenance, what

happens is that almost all the
time, your machines work flawlessly.

This is perfect.

This is actually a big challenge
because the model has a lot of

observations of when things are good.

So it learns a lot about when
things are working well, but that's

not what you really care about.

You want to know when things are wrong.

The problem is that you only
have very few data points that

show you what 'wrong' looks like.

So one potential avenue of improving
that is using a thing called data

augmentation in which you create synthetic
data, meaning essentially fake data that

looks a bit like those cases, just to
have more examples to show to the model

and teach it what wrong looks like.

And things like this, even though the
model stays exactly the same and the

data is just augmented with synthetic data,
miraculously still very often improve

the performance of the predictions.
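
(A minimal sketch of the data augmentation idea: jittering the few real failure examples with small noise to create synthetic look-alikes. The shapes and noise levels are illustrative assumptions.)

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical situation: thousands of healthy windows, a dozen failures.
healthy = rng.normal(0.0, 1.0, size=(5000, 8))
failures = rng.normal(3.0, 1.0, size=(12, 8))  # scarce "what wrong looks like"

# Jitter each real failure 20 times with small noise to create synthetic
# look-alikes, giving the model more examples of the failure region.
synthetic = np.repeat(failures, 20, axis=0)
synthetic += rng.normal(0.0, 0.1, size=synthetic.shape)

X = np.vstack([healthy, failures, synthetic])
y = np.concatenate([np.zeros(len(healthy)), np.ones(len(failures) + len(synthetic))])
print(f"{len(X)} training windows, {int(y.sum())} labeled as failure")
```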

This also highlights a second
avenue: the importance of data labeling.

If your labeling is inaccurate, for
example if you label a problem

but include, like, 10 seconds of
data from when things were going well,

that's going to confuse the model as well.

So the message of this new approach to AI is
that you should focus a lot on getting really

good data because that is going to
help your models learn a lot better.

Luke: So the major distinction here really
is that you don't reinvent the wheel.

And you are leveraging the inventions
of others, as you don't really change

the model, at least you try to avoid
changing the model as much as you can.

You take what you have as
your data set and then say,

oh, let's build something on this.

You're trying to leverage things that
people have done before you, for this use case.

And you're starting to think about
how can I get this thing to work?

And you mentioned even that you go as
far as to kind of simulate the issues,

to see if your predictions are working,
so that then if you're deploying it in

production, you already know that it will work.

Well, at least your chances that it
will are significantly increased.

It's interesting.

Because how much data can you have by
yourself, as one factory or one plant

or one use case, if you compare it
to somebody that has worked with, like,

hundreds of companies and
data sets, particularly for this one

problem, plus a lot of simulated data?

Specialization is
of course key here indeed.

Denis: Yeah, I mean,
let's make it concrete.

For example, imagine I'm a small to
medium sized manufacturer of sheet metal

and I have machines that break down.

So I want to leverage predictive
maintenance to warn me and

to hopefully prevent this.

I have essentially two avenues.

Either I hire a very
expensive data scientist

who's going to build me
this very amazing model.

It's going to take a couple of
weeks and I have to hire someone,

but I will have a good model.

That's one avenue.

And that was typically the approach
of the past, the model-centric AI.

What we are proposing here, is that
instead of hiring a very expensive

data scientist, you just make
sure your data is in the cloud.

In the cloud, you will find a
model developed by Microsoft

for predictive maintenance.

In general, agreed, the model will perhaps
be less elaborate and less intricate than

the work of the data scientist. But it
is our belief that if I, as the SMB

manufacturer, let my engineers focus on
getting the data into a very good state,

then the combination of very good data
with an off-the-shelf model, which

is less impressive than the data
scientist's one, will still outperform

the model of the data scientist
that is fed with mediocre data.

That's why what we are proposing, if you
want to get started with AI, is to do

what you're good at: improve your data
and then use off-the-shelf solutions

that you can access through the cloud.

That way you don't have the problem
of trying to get the very rare and

expensive skills of model building.

Luke: Yeah.

I totally recognize this.

I cannot really disclose the companies
that I did this for, but I had some

similar cases in my work, where we
would actually use data from operators

and machines to predict the cycle time.

By looking at the features of
a product. For example, in

sheet metal production, we were
looking at a sheet metal part.

Can you, by the thickness of the
sheet, the dimensions of the sheet,

and the number of bends in the sheet,
somehow predict how long the cycle

time will be if you do not know
anything else besides these features?

And so instead of quoting the way
you would normally, saying,

okay, each bend is this much time,
and making a calculation:

Can you predict it by just looking
at the features and then using the

data from the workplaces where you
register these changes and sending

it back to the model to see if
your calculations get better?

I've seen some initiatives, but
various of my clients actually

got stuck on exactly this part.

They were trying to create a model
based on a very limited data set.

So as I understood, you
need vast amounts of data.

You need significant amounts of data.

And that's usually not there.

So then you have to make assumptions,
like, we think it is gonna be this,

and that's gonna be a good result.

You cannot really check.

And so I totally recognize
what you're saying.

This is a different use case, but
I think that the idea is the same

because there are many algorithms that
you could leverage that can actually

do this, that are indeed available.

From what I've seen, you just need to
bring the data in a certain way.

You need to have enough of it.

Right.

And I think, if I look
back, that would be a

way better solution for that.
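
(Luke's cycle time example could look roughly like this with an off-the-shelf scikit-learn regressor; the file and feature names are illustrative assumptions, not the actual client setup.)

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical history of parts: features plus the measured cycle time.
parts = pd.read_csv("cycle_times.csv")
X = parts[["sheet_thickness_mm", "length_mm", "width_mm", "num_bends"]]
y = parts["cycle_time_s"]

# An off-the-shelf model learns the feature-to-cycle-time relationship
# instead of a hand-made "each bend costs X seconds" quoting rule.
model = GradientBoostingRegressor(random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error")
print(f"mean absolute error: {-scores.mean():.1f} s")

# Times measured at the workstations are appended to cycle_times.csv,
# so periodic retraining keeps improving the estimates.
model.fit(X, y)
```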

Denis: You make a very good
point about having enough data.

The issue we have nowadays is
not having enough data.

We have lots of data.

It's really having enough relevant data.

In the case of predictive
maintenance, you have a lot of

data when things go right, but that
doesn't really help you that much.

You really want to have more
data when things go wrong.

And that data is really scarce.

Hopefully.

Otherwise your plant is
breaking down all the time.

Luke: Because you kind of want to go
for what's the perfect one, but the

perfect one almost never happens.

So, how can you know
what's the perfect one?

It's really hard to get that right.

And so the opposite is also true, right?

Like I said, with predictive maintenance,

of course you will have a lot of
data when it goes right, and rarely

indeed when it goes wrong.

But in other business cases, it
could be the other way around.

You rarely have it when it goes right.

And mostly when there is an
exception, it's only logged, right?

So, I think a lot of manufacturers are
also focused on finding all their faults

or standstills and issues, which is good.

But how can you do more
of what you're good at?

Right?

So these are different use cases,
of course, that come into the equation.

Great.

So back to the topic of the day.

We talked about taking your shop
floor data to the cloud, and we

established that this is important.

We established that you
need to do work upfront.

To start, we mentioned the
benefits of using a new UNS

architecture or similar approaches.

What else should the manufacturer think
about before they go into the next step?

Denis: So a common question
is: how do you build the road,

or what we call the data pipeline,
from your shop floor to the cloud?

There's multiple ways to do this.

As we all know, in manufacturing,
you have many systems with data,

you have the automation pyramid.

One naive approach would be to connect
each of those systems to the cloud.

That has been done in
the past; especially if you have

large teams of developers it's
possible, but it's very unwieldy.

So ideally you would like to combine your
data first, and that's where the approach

of the unified namespace comes in.

Companies have been building
data pipelines to the cloud even

before the unified namespace.

So essentially, that always
consisted of connecting the

historians, the MES system, the ERP
system to the cloud individually.

You would have like three or
four pipelines depending on how

many of those systems you have.

At this point, I don't think I can
really say which one is better, but

if I narrow down the discussion to
an SMB manufacturer, who hopefully

is listening to this podcast,

I really believe that the unified
namespace should be the foundation

and the starting point, let's
say like the jump server from

which all data goes to the cloud.
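
(A minimal sketch of what feeding a unified namespace can look like with MQTT; the broker address and the ISA-95-style topic path are illustrative assumptions.)

```python
import json
import time
import paho.mqtt.client as mqtt

# Connect to the plant's MQTT broker hosting the unified namespace.
# paho-mqtt 1.x style; 2.x additionally requires a callback API version.
client = mqtt.Client()
client.connect("broker.plant.local", 1883)

# An ISA-95-style semantic hierarchy: enterprise/site/area/machine/signal.
topic = "acme/plant1/meltshop/furnace2/temperature"
payload = {"value": 1245.7, "unit": "degC", "timestamp": time.time()}
client.publish(topic, json.dumps(payload))

# A single cloud bridge can subscribe to "acme/#" and forward the whole
# namespace, instead of one point-to-point pipeline per source system.
client.disconnect()
```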

Luke: Yes, I think I
really agree with you here.

And you make a very important distinction.

And the questions that I get a lot
regarding the cloud are questions

like: what are the costs of this?

Isn't it going to be too expensive?

What is the recurring cost of this?

And what happens to our data?

Who controls it?

We don't have it in our own hands anymore.

Denis: That is a very valid
critique of the cloud.

It's also one of the main reasons why
so many migrations to the cloud have

gone wrong, and why so many projects
go over budget, why the monthly cloud

bill is way higher than estimated.

And essentially, what you described here
is what we would call a lift and shift.

A lift and shift is when you
literally make a copy of your local

on-premises infrastructure in the cloud.

Essentially, you are not leveraging
the available services in the cloud.

I don't want to get too technical
about this, but you have to think that

the cloud is essentially a new way
of organizing your infrastructure.

If you are just renting a big
computer in the cloud and copying

all your applications, you're not
using the cloud in the right way.

You're really supposed to break down
your monolithic applications into

smaller parts, into services, and
let them communicate to each other.

That way you benefit the most from the
cloud's pay-as-you-go pricing model.

What you described, the naive
approach of just

connecting all your on-premises systems
to their local copies in the cloud with

point-to-point integrations, is indeed
a very expensive and brittle process.

That's why when I mentioned the unified
namespace as a good starting point, it's

not just from a technical perspective
of having all your data in one place.

As we know, the second advantage
of a unified namespace is

the semantic hierarchy,

which already creates
data contextualization.

A big mistake I often encountered
with cloud migrations, at a

place I've worked at, is that we had
this massive cloud data warehouse

with lots of data in it, but no
one understood what the data was.

You may have just made a copy of the columns,
and someone who has never seen this data

has no idea what column ABC means and
still has to call the local engineer.

What we want is to have a
structure for our data in the

cloud, a data model that makes sense.

It can reflect the unified namespace,
it can be something else, but it's

very important to think about how we will
represent this data in the cloud,

under what name, in which location.

So as you mentioned, the data context.

Luke: Yes, yeah, this is very important.

I think we uncovered quite
some major distinctions here.

Because to make your data useful, we have
to first translate it into some structure.

And that is something you
do before you upload it.

It's indeed a major misconception.

And I think particularly now,
around the time where ChatGPT is

coming up, people are saying,

oh, the more data we have, the better.

Let's just shove everything in there
and the computer does some magic thing.

It waves its magic wand and then, hop,
everything will lead to some conclusions.

But I think that that is
not really how it works.

And then you get this massive,
as you said, lift and shift:

massive cloud bills, massive ongoing costs.

And nobody can actually still use it.

And then we haven't even talked
about the fact that if you would actually

take that approach, just
throwing everything into a data

warehouse or a data lake, it's not
accessible then to the people that

need it the most on the floor, right?

Like if you want to get that data also
on the factory floor next to the machine,

to see what actually is coming in and out.

Making a little Grafana dashboard of
what happened in the last 5-10 minutes.

You know, it's gonna be really hard
to get that system going, right?

Because you should
also think about what's happening in

real time. If you want to predict and
look into the future, you must also

know what happened in the past, and
particularly what's happening right

now, to see if the things that
you're sending to that system are correct.

Right.

How are you going to inspect them?

Denis: Yeah, there's two problems you
mentioned here that really stood out.

The first one is essentially you
have to think about your client

who is going to use the data.

It's going to be the engineers
and the data scientists.

You have to offer them a solution that's
better than what they're currently using.

I mean, if you just copy your existing
systems, why would they use them in

the cloud if they have them literally
at their fingertips in the plant?

That makes no sense.

So you have to provide
them with something better.

Something with more context,
with clearer data that's also

perhaps checked for errors.

You have to give them a compelling
reason to use your cloud services, right?

So that's the first point.

That's very important.

The second point you mentioned
was about making fast predictions.

Our intention today is not to say that
the cloud is going to do everything

for you and replace everything.

The edge will always have its place.

For instance, when you have
to make a prediction really

fast on near real time data.

In that case, it just doesn't make
sense to send the data all the way to

the cloud over the internet and back.

You will just lose too
much time with the latency.

So whenever the speed of your decision
making needs to be fast, like near real time, you

have to make this decision on the edge.

The cloud is more for making very
long term decisions on large amounts

of data for which you really need
the large compute resources, or

it can be a combination of both.

For instance, you are training an AI
model in the cloud on lots of data, but

then you are later deploying this same
model on an edge computer, right next

to the machine to make the predictions.

In that sense, the cloud and the
edge will both exist into the future.
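
(A sketch of the train-in-the-cloud, score-at-the-edge pattern Denis describes, assuming scikit-learn with the skl2onnx and onnxruntime packages; the data and model choice are illustrative.)

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from skl2onnx import to_onnx

# 1) In the cloud: train on the large historical data set.
X_train = np.random.rand(10000, 4).astype(np.float32)
y_train = (X_train[:, 0] > 0.9).astype(int)  # placeholder labels
model = RandomForestClassifier(n_estimators=50).fit(X_train, y_train)

# 2) Export to the portable ONNX format; the file is copied to the edge box.
onnx_model = to_onnx(model, X_train[:1])
with open("pdm_model.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())

# 3) On the edge computer: score locally, with no internet round trip.
import onnxruntime as ort

session = ort.InferenceSession("pdm_model.onnx")
prediction = session.run(None, {"X": X_train[:1]})[0]
print(prediction)
```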

Luke: Exactly.

Yeah, this is great.

For people that would like to explore
this further, and would like to know more

about whether predictive maintenance would work
for them, or any other use case that would

leverage systems to predict, I would say:

where would they start?

What words of advice would you give
them to start forming their idea,

to form their business case or to
initiate their internal talks about this?

Denis: Yeah, that's a great
question to summarize.

First of all, start with the real problem.

There's no one that knows your process
and your current challenges better than you.

If predictive maintenance is
not a solution to one of your

problems, then don't do it.

Look for something else, perhaps
scheduling optimization.

You know that better than we do.

But once you have a concrete use case
for which you want to leverage AI, the

second tip would be: don't be afraid.

The cloud is not only there for
enterprise manufacturers. Because

of its pay-as-you-go structure,
it's surprisingly affordable.

It's way more affordable than
hiring a data scientist or

building your own data center.

The cloud is actually
ideal for experimentation.

Just look at the startup world, they are
all leveraging the cloud from day one.

And thirdly, I would
say focus on your data.

You don't have to focus on the AI per se,
because the cloud will do that for you

by providing the off the shelf solutions.

So use them.

Focus on the things you are strong at,
making sure you have your data in order.

That means a proper
digital infrastructure.

Hopefully you're already
building a unified namespace.

And if you have one, you are
already halfway on your way

to the cloud, I would say.

And the second is to focus
on data-centric AI.

Optimize your data.

So only send the data that
you need to the cloud, and the data

that you do send to the cloud,

make sure it's clear, it has a proper
place, and it's highly usable, so that you

can offer your engineers a solution

that's better than the system
that they have on premises.

Luke: And so the really smaller
companies listening to this

might be concerned about the cost
of developing such a solution.

Not so much the cloud bills, which
would be one part of the equation, but

could you shed some light on
having the conversations with their

colleagues about potential costs
and investments for such a use case?

Because I can imagine that if they see
the value of such a prediction, they

might have to convince other stakeholders
in the business to start such a project

or to investigate such a project.

What would you say to those listening,
particularly the somewhat smaller ones

that are not on the enterprise level?

Denis: I think it all boils down to ROI.

Look, I'm not going to sugarcoat it.

The skills or the talents to
transfer data from your shop

floor to the cloud are expensive.

That's going to cost some money, which
is why it's so important to focus on

a use case that will pay itself back.

Very often that can be the case.

If a machine stands still, for example
a hot mill, that can already cost you

thousands of dollars for every
hour that it doesn't work.

Not to mention the impact
on the production schedule.

You will quickly see that if you have
a concrete use case, this use

case should within one year, or at most
within two years, pay back any costs

you may have incurred in the development
of the solution to this use case,

Luke: Right.

Denis: As a rule of thumb.
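
(A back-of-the-envelope version of that payback reasoning; every number below is a hypothetical placeholder to be replaced with your own figures.)

```python
# Hypothetical figures for a downtime-driven business case.
downtime_cost_per_hour = 10_000   # e.g. a hot mill standing still
breakdowns_per_year = 6
hours_per_breakdown = 8
prevented_fraction = 0.5          # assume predictions avoid half of them

annual_savings = (downtime_cost_per_hour * breakdowns_per_year
                  * hours_per_breakdown * prevented_fraction)

project_cost = 120_000            # development plus first-year cloud bill
payback_years = project_cost / annual_savings
print(f"annual savings: ${annual_savings:,.0f}, payback: {payback_years:.1f} years")
```

(With these placeholder numbers the savings are $240,000 a year and the project pays back in half a year, comfortably inside the one-to-two-year rule of thumb.)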

Luke: Right.

And regarding investments
in software, can we say

something about whether there is

some low-cost or even open-source option
suitable, or is this going to be mostly

proprietary software that
you have to buy off the shelf? To

conclude this discussion a bit, because
of course a lot of it is going to be labor.

I understand that expertise is valuable.

It's probably the most valuable
investment you can make.

It costs something, but
it will get you somewhere.

But let's talk about the
software stack required for such

a system, from your experience.

Denis: Yes.

And here I have very good news
for our dear manufacturers.

We are now essentially leaving the
realm of OT and are completely going

up the pyramid all the way into IT.

And it is now pretty clear that
IT has completely ditched the

model of proprietary software.

All the software that is being
used today for this kind of

work is completely open source.

So in essence, you would pay
just for the labor, or for a company

that manages this software
for you. To give an example:

if you want to leverage Kafka,
it is completely open source.

You don't have to pay for
it, but it's difficult.

So you're going to pay for the labor.

If you don't want to manage a Kafka
cluster yourself, you can go to

a company like Confluent and they
will do it for you, for a fee.

All the software that is used to build
the data pipelines from the shop floor

to the cloud is strictly open source.

Luke: Right.

Denis: Or built on top of
open source solutions.
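
(As one example of that open-source stack, here is a minimal producer using the kafka-python client; the broker address and topic name are illustrative assumptions.)

```python
import json
from kafka import KafkaProducer

# The open-source client talks to any Kafka cluster: self-managed for free,
# or one operated for a fee by a vendor such as Confluent.
producer = KafkaProducer(
    bootstrap_servers="kafka.plant.local:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Stream a shop-floor event into a topic on its way to the cloud.
producer.send("furnace2-telemetry", {"temperature_c": 1245.7})
producer.flush()
```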

Luke: Right.

And so if somebody comes to your factory
and says, we have the perfect solution

for you in software and it costs six
digits, then you should probably walk in
the other direction and talk to you first.

Denis: Yes, exactly.

Luke: Great.

This is a good note to conclude our
episode on, as we have established the

need to build a more strategic stack, to have
a more strategic approach behind this:

choose the right technologies, use
what already exists, use a model that's

already set up, and train it on your own data.

Is there anything I forgot
to ask you or forgot to talk

about with you today, Denis?

Denis: I think we covered
a pretty broad spectrum.

Don't be afraid of the cloud.

The main reason the bills were
high, the main critique, was mostly

because it was done the wrong way.

But you can learn from the
experience of the past and make

your journey to the cloud a success.

Luke: Yeah, because what happened
in the past doesn't guarantee

the future in good and bad ways.

In particular, the bad doesn't
have to repeat itself if you

do it differently, right?

Think differently, do it
in a way that is scalable.

Great.

So to conclude, Denis, you
do this as your profession.

I know this, but perhaps for
the listeners quickly, give them

a good introduction of what they
can expect if they work with you.

And what kind of things you
offer to them to get started.

Denis: Sure.

So I describe myself as a data engineer
who has a solid background working in OT.

Essentially, having worked in a plant
for many years, I understand SCADA

systems and the challenge that comes
from liberating data from the shop

floor through its many obstacles, be
it cultural problems or firewalls, and

getting this data connected to, for
example, a cloud like Microsoft Azure.

So I advise companies both with
strategy, but also with hands on

development of the data pipelines to
facilitate their journey to the cloud.

Luke: Excellent.

All right.

Thank you for that.

And your website will be, of course,
in the show notes, where as always you can

find everything about me, Luke, and Denis,
and of course other episodes, which I

highly recommend checking out, because
we explored a lot of topics here, about

the UNS, about digital transformation,
and particularly the metals industry,

as we also have a specialized episode
about the new ministry, for example.

I think those will give some more
context to what we explored now, and

also give you some more actionable
insights you can get started with.

First of all, thank you for listening in.

If you found this conversation
useful, you can always join us

next time for more smart metals.

And if you know somebody who would
be a great fit for the show,

or you want to participate yourself,
or have some questions or remarks,

you can always send us a message.

So for now, I conclude my part, and I
want to happily thank you, Denis, for

attending and doing your part this time.

Denis: Look, that's great.

Luke: Alright everyone, see you next time.

Denis: Bye.

Bye.