On-Demand – What Actually Works in MAS Upgrades – Cooperative Energy’s Real-World Playbook

Featured Speakers

Avatar photo
Cohesive Solutions
Cohesive Solutions is a provider of IBM Maximo Application Suite (MAS) services, helping organizations modernize and optimize asset management. They deliver implementation, integration, and support services to improve asset performance, reduce downtime, and drive operational efficiency.

 

0:03
Hello and good morning, Good afternoon.

0:06
My name is Shannon Bond and I am part of the Cohesive Marketing team and I will be assisting the team today with the webinar as moderator.

0:15
We’ll give it another 30 to 60 seconds to allow all attendees to join us on this call.

0:49
All right, I think we’ll get started.

0:52
Hello everyone, and welcome to our webinar on What Actually Works in MAS Upgrades: Cooperative Energy’s Real-World Playbook.

1:01
Before we start, I would just like to highlight some administrative tasks.

1:05
Note that this webinar will be recorded and distributed post webinar.

1:09
All the attendees are currently set to Listen Only mode, but feel free to post any questions or comments in the chat or question box.

1:17
We will address these at the end of the webinar.

1:22
Should you have any technical questions or problems, please log them with the GoToWebinar technical team via the question mark or help section on the panel itself.

1:35
I’ll be handing it over to the host in a bit, but let me do an introduction of our speakers first.

1:47
We’re joined today by Melissa George, Business Systems Supervisor at Cooperative Energy.

1:54
Melissa brings over 20 years of IT experience and has spent the last nine years as a Maximo Administrator.

2:01
She’s deeply committed to supporting teams across maintenance, operations, engineering and more, ensuring Maximo delivers real value to the cooperative’s mission of reliable energy for Mississippi communities.

2:17
Also with us is Bennett Tan, System Architect at Cohesive.

2:21
Bennett has over 13 years experience with IBM Maximo from building internal environments to leading client deployments.

2:28
For the past few years, he’s focused on Maximo Application Suite implementations, helping organizations modernize and scale their asset management strategies.

2:38
So without further ado, let’s dive in and hand over to our first speaker today, Melissa.

2:47
Good morning everybody.

2:48
My name is Melissa George with Cooperative Energy.

2:51
As Shannon pointed out, today we’re going to go over our agenda, which is: who we are, Maximo at Cooperative Energy, some key decisions that we had to consider in moving forward with this deployment, our planning process, customization and optimization, some lessons learned, and then what happened post-deployment.

3:16
So who is Cooperative Energy?

3:19
Well, Cooperative Energy is the only not-for-profit wholesale electric provider headquartered in Mississippi.

3:26
We generate and transmit electricity for 11 member distribution systems located all throughout the state of Mississippi.

3:34
And together with our eleven members, we provide power to more than 445,000 homes and businesses across the state.

3:50
So you can see on this graphic, this is kind of our service territory.

3:55
As a company,

3:56
We employ more than 500 people across the state.

3:59
We have a generating capacity of 2449 megawatts, which includes natural gas, some nuclear, some hydro and some solar.

4:09
And we have more than 1900 miles of high voltage transmission lines going to more than 269 delivery points and those numbers are constantly growing.

4:25
Good morning, everyone.

4:26
My name is Bennett Tan and I’d like to take the opportunity to just talk a little bit about Cohesive.

4:31
So our goal is to help organizations get the most out of their assets by working with IBM to connect data, improve reliability and make operations smarter.

4:44
With global experience and hundreds of successful Maximo projects, we bring the tools and know-how to make asset management work better.

4:57
So Cohesive offers a full range of services to support Maximo, including expert advisory implementation, secure hosting and data management.

5:10
And we are grateful for the opportunity to partner with Cooperative Energy in their MAS upgrade journey.

5:20
And I would just like to add to what Bennett said, we definitely consider Cohesive as a trusted advisor at Cooperative Energy.

5:28
So what does Maximo at Cooperative Energy look like?

5:32
Well, we’ve had the product for over 10 years.

5:36
We have approximately 250 users.

5:38
Again, that number is constantly growing.

5:41
We have Maximo for Utilities, Spatial, and the Oracle Connector installed, and some of our most commonly used apps are Work Order Tracking, Assets and Locations, the Utilities apps of course, and Scheduler.

5:54
We have a ton of ongoing projects.

5:57
We’re currently in a Maximo Mobile implementation.

6:00
We are doing barcode tagging on some of our assets at our plants.

6:05
We are working on integrating an engineering information management system.

6:09
It is a fancy term for a document management system that does drawings.

6:13
And then we are working on integrating odometer readings from our AVL system and our cars back to Maximo.

6:26
And so like many of you, there are some key decisions that go into deploying MAS and migrating from Windows to that OpenShift platform.

6:38
And one of the first things that you have to really think about is: are you going to stay on-prem, or are you going to go cloud, or some kind of hybrid of the two?

6:48
There’s IBM SaaS, Cohesive has cloud offerings, and some of the other business partners out there have cloud services.

6:54
So there were some factors that kind of went into making that decision.

7:03
So support and maintenance.

7:05
Do we have the skill set and expertise internally to support Red Hat Openshift?

7:11
This is a brand-new infrastructure we weren’t necessarily familiar with, and that was one of the key factors that went into our decision making.

7:23
Database access.

7:24
So in the cloud, typically you lose direct database access.

7:29
I’m a bit of a control freak, so that was kind of a show stopper for me.

7:36
And then Java customizations, I don’t know about all of you out there, but we still have a couple of Java customizations that we haven’t been able to get rid of.

7:45
And so that kind of changes the scenery a little bit.

7:49
It kind of puts you in your own little private cloud with your rogue code.

7:54
And so that also played into our decision making, and then ultimately cost — you know, it costs money to go to cloud.

8:04
So for Cooperative Energy, all of those factors played in, and we made the decision to stay on-prem.

8:14
So one of the first things we did during our deployment was start having design workshops.

8:19
I cannot stress how important this part of the deployment or migration process is: really take time to go through and do all of your planning.

8:29
So for Cooperative Energy, we went with a lift-and-shift approach.

8:33
So basically not adding any new functionality during this process, just taking what we had in the Windows environment and shifting it over to the Red Hat OpenShift environment.

8:46
So we took this opportunity to kind of review our existing business processes, making sure that they aligned with Maximo Application Suite, which everything did.

9:01
We also took some time to look at our add-ons, which are, as I’ve mentioned, Spatial and Maximo for Utilities.

9:12
And that was it as far as add-ons. Then integrations: our biggest integration is with Oracle EBS, which is our ERP.

9:25
But we have some other integration points that we’ve got to plan for: some Power BI dashboards that are based on Maximo data, and GIS uses some database views that we had to ensure still worked once we moved over.

9:44
And then those good old Java customizations, we had to take a look at those, make sure they were still going to work in the new environment.

9:53
Thankfully, everything is working.

9:58
Yeah.

10:00
We also looked at the authentication protocol.

10:02
So whether, you know, you’re currently locally authenticated or connected to your Active Directory, we had to identify that and then make sure that it gets carried over to the new system as well.

10:20
The next thing is security and compliance, because there is quite a big change in the overall infrastructure of how Maximo Application Suite runs.

10:32
We have to make sure that this new infrastructure is still in compliance with Cooperative Energy’s IT security policies.

10:45
And then looking at the infrastructure setup we had to do as part of the workshop, we had to determine how many servers do we need?

10:54
What is the network going to be like?

10:57
What are the DNS entries going to be needed?

10:59
What are the reserved IPs?

11:02
All of those things need to be taken into consideration before the setup begins.

11:13
And then lastly, the disaster recovery strategy.

11:16
Because of the change in the overall infrastructure, we had to review Cooperative Energy’s previous disaster recovery strategy and figure out if that still worked or if we had to come up with a new strategy, which was the case.

11:40
So I wanted to provide just a brief overview on the difference in the infrastructure and what we’re used to.

11:47
So on the right side here of the diagram is typically what a virtualized environment would look like, right?

11:55
So starting from the bottom, you’ve got your underlying infrastructure.

12:01
So typically that would be, you know, VMware vSphere, for example.

12:05
And then you’ve got a host operating system, so ESXi.

12:11
And then on top of that then you’ve got your guest operating systems, right, for your virtual machines.

12:16
So very common would be Windows Server.

12:20
And then in each of those machines you would have your Maximo instance running on WebSphere.

12:27
So you could have, you know, a dev, test, and production environment all running on their own individual VMs.

12:38
And then over on the left side, here is what the new system looks like.

12:44
Containerization, right?

12:45
So at the infrastructure level, you’ve got your Red Hat OpenShift container platform, and the host operating system runs on Red Hat CoreOS.

12:57
And within the OpenShift cluster, you’ve got your Maximo environments as pods, right?

13:06
So they no longer require their own guest operating systems.

13:11
They’re just running as pods.

13:13
And so each of these could be, you know, a dev instance, a test instance.

13:21
And then in your QA or production you could have it clustered, so that each of them is running their own — essentially what we knew as JVMs.

13:34
So you could have, you know, multiple UIs, multiple crons if you want to, multiple MIF JVMs — or in this case, pods.

13:48
And all of this can reside in the same open shift cluster if you like.
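As an illustration of the pod layout described above, listing the Manage pods in a cluster might look something like the following; the namespace and pod names here are hypothetical placeholders, not Cooperative Energy’s actual ones:

```shell
# Hypothetical example: each former JVM (UI, cron, MIF/integration) now runs
# as its own pod. Namespace name "mas-inst1-manage" is an assumption.
oc get pods -n mas-inst1-manage
# NAME                        READY   STATUS    RESTARTS   (illustrative output)
# inst1-masdev-ui-...         1/1     Running   0
# inst1-masdev-cron-...       1/1     Running   0
# inst1-masdev-mif-...        1/1     Running   0
```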

13:55
OK, so this is a diagram of Cooperative Energy’s dev and test cluster, right?

14:04
So this cluster, this Red Hat OpenShift cluster, houses both their dev and their test MAS environments, right?

14:15
So each of these nodes represents a server, and it could still be, you know, a VM server, but that’s what they are.

14:26
So the OpenShift cluster requires a minimum of three master nodes, and then the worker nodes are scalable.

14:34
So depending on the workload of MAS and how many concurrent users it needs to support, you can add as many worker nodes as you like. For Cooperative Energy’s dev and test environment,

14:56
Because there are minimal concurrent users, we calculated that we only needed two worker nodes, right?

15:05
And so the master nodes, server-spec-wise, typically require 4 CPU, 16 gigs of RAM, and 120 gigabytes of hard drive space each.

15:21
And for the worker nodes it’s a little bit more: you need 16 CPU, 64 gigs of RAM, and 300 GB of hard drive space.

15:34
So those are the server specs for the nodes themselves.

15:40
But apart from that, some additional things you want to have is some sort of a persistent storage.

15:49
So that’s required because we want the files to persist, versus the default ephemeral storage state in Red Hat OpenShift for containerization, right?

16:04
So a lot of times the files that go into persistent storage would be, you know, your attached document files.

16:15
It could be used to store images that are required for the OpenShift deployment, and JMS server queue messages — you want those to be persistent as well.

16:28
So that goes in there.

16:31
In this case, for Cooperative Energy, they’re using NetApp for their persistent storage.

16:39
And then on top of that, there is an admin machine that is used.

16:44
And this can just be a Red Hat Enterprise Linux machine.

16:50
And it’s used to interact with the OpenShift console, so that you can issue oc commands to the cluster.
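For context, a few basic oc commands run from that admin machine might look like the following; the API URL and credentials are placeholders, not the actual cluster values:

```shell
# Illustrative oc commands from the admin machine; URL and user are placeholders.
oc login https://api.ocp.example.com:6443 -u admin   # authenticate to the cluster
oc get nodes                                         # list master and worker nodes
oc get pods -A                                       # list pods across all namespaces
```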

17:03
And then you also have your database. For Cooperative Energy, it is running on Oracle 19c; that doesn’t change much from Maximo 7.6.

17:17
And then there’s also a separate Oracle 19c database for EBS.

17:26
OK.

17:26
And this is a diagram for the prod and DR cluster.

17:31
So the infrastructure is quite similar to what was just shown.

17:37
Looking at the left side, here is the prod cluster.

17:40
And in this case, we’ve got three master nodes and three worker nodes.

17:46
The specs are a little bit more for the master nodes.

17:50
It’s at 8 CPU, 32 gigs of RAM, and 200 gigabytes of storage.

17:56
The worker nodes are the same 16 CPU and 64 gigs of RAM, with 400 gigabytes of storage.

18:03
And then for the production cluster,

18:07
it’s got its persistent storage as well, housing, you know, all the attached documents, JMS queues, and required images.

18:16
And then it’s got its own admin server as well, again running on Red Hat Enterprise Linux.

18:23
And then it’s also got its own Manage database and EBS database.

18:31
And the databases are being backed up on a regular basis, both the Manage and the EBS database.

18:40
And then on the right side here, in a different data center, is another cluster for DR.

18:49

And the idea is for the DR infrastructure to essentially mirror the production infrastructure.

18:57
So you’ve still got your three master nodes and three worker nodes, all specced the same.

19:03
And then it’s got its own persistent storage admin machine and databases as well.

19:09
And the idea is to have this cluster basically on standby, ready in case of a disaster scenario, right?

19:24
The databases that are being backed up will get sent over and copied into the databases in the DR cluster.

19:37
OK, so what does the general deployment process look like when you’re deploying Maximo Application Suite, right?

19:47
So first thing we have to do is to deploy Red Hat Openshift.

19:51
And there are many different ways to deploy Red Hat Openshift.

19:55
For Cooperative Energy, we used the assisted installer.

19:59
It’s probably the simplest method to get it done, but it does require access to the organization’s Red Hat portal to perform this.

20:12
This was a good collaborative effort between Cohesive and Cooperative Energy IT, since they have access to their Red Hat portal.

20:23
So we worked together to ignite the OpenShift cluster, and once OpenShift is set up, we can start looking at installing MAS and Manage.

20:37
And before we install MAS, we have to make sure that the persistent storage is provided, right?

20:45
So in this case, again, it’s NetApp for Cooperative Energy.

20:52
And then when that’s set up, we can start to install MAS.

20:59
MAS has a couple of different ways to be installed.

21:04
In our case we used the Ansible scripts to install MAS, and once MAS is installed, we look at activating Manage itself.

21:15
As part of the activation, it will run the updatedb process to upgrade Maximo from 7.6 to MAS 8 — or 9 — in this case, 8.

21:29
And then once you’ve got Manage set up, you can log in and continue to set up your doclinks, your JMS queues, and your SSL certificates.

21:43
For Maximo 7.6, I know SSL was an optional thing, right?

21:49
Maximo Application Suite is set up to use SSL by default, and it will use IBM self-signed certificates by default.

22:03
But you typically get a, you know, a warning in the browser, right, saying that it’s not universally recognized.

22:10
And so we’ve had to generate some SSL certificates signed by Cooperative Energy and replace them instead so that it would be recognized by the organization’s browsers.
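Replacing the default certificates typically involves loading the organization-signed certificate and key into the cluster as a TLS secret and pointing the MAS configuration at it. A minimal sketch, where the file names, secret name, and namespace are all assumptions for illustration:

```shell
# Hypothetical example: import an internally signed certificate into the cluster.
# File names, namespace, and secret name are placeholders, not the real values.
oc create secret tls mas-custom-tls \
  --cert=cooperative-energy-signed.crt \
  --key=cooperative-energy-signed.key \
  -n mas-core
# The MAS configuration is then updated to reference this certificate so
# browsers in the organization trust the MAS endpoints instead of the
# IBM self-signed certificate.
```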

22:27
Yeah.

22:27
Before we move on from SSL certificates, I just want to mention that since the deployment, we actually went back and got externally signed SSL certificates for it to work with Mobile.

22:39
So just a little nugget there.

22:41
So as far as the integration goes, during the deployment process, once Manage is installed, it creates an Oracle API folder with all of the new delivered code for EBS.

22:58
And so my team on the Cooperative side, once we got that folder from them, was able to look at the new objects that were delivered, compare them to our existing database objects, and then marry up any changes.

23:13
And we did all of that installation on our side.

23:16
We also updated our GIS database view with a new database link and updated some links that were on our dashboards for Power BI.

23:28
So we did that on our side.

23:34
OK, so I’m sure a lot of folks that are considering a Maximo Application Suite upgrade have heard about AppPoints, right?

23:43
And AppPoints are what’s required for organizations to get access to Maximo Application Suite.

23:53
So typically organizations would get a pool of AppPoints, right, depending on how many users are logging into the system, and you can decide how you want those AppPoints to be distributed.

24:12
In Cooperative Energy’s case, we decided to split up the AppPoint pool.

24:17
So we’ve got a non-production pool.

24:22
I believe that was around 200 AppPoints, and then a production pool which is about 450 AppPoints.

24:30
And those AppPoints are registered against what’s called the Suite License Service.

24:36
The Suite License Service is typically deployed in the Red Hat OpenShift cluster itself.

24:45
And so for the dev and test cluster, it’s got its own SLS deployed in the dev OpenShift cluster, right?

24:59
And the non-production AppPoints are registered against that SLS instance.

25:06
And then the production OpenShift cluster has got its own instance of SLS, and it’s using the production AppPoints.

25:19
And then in the DR cluster, in a separate data center, those same production AppPoints are also registered in the SLS instance for the DR OpenShift.

25:36
And that’s because the assumption is that those AppPoints will never be utilized in the DR OpenShift unless production is down, right?

25:47
So there’s no conflict or double usage of AppPoints.

25:58
Excuse me.

25:59
So for the custom Java classes, the main thing that you want to remember during this phase is to make sure that your file path is identical to how it was in Windows.

26:13
So you create that file path, you drop those class files in there, and then you zip that up, go into MAS, create an archive, and point the file address to a web server.

26:27
You have to have a web server and we have ours running on our admin machine.

26:31
And so you just point it to that web server and that serves up your customizations.

26:41
So during this deployment we came to realize that LDAPS (secure LDAP) is required, and we had never used that before.

26:52
We had just used LDAP. So what that means is that you have to have a certificate.

26:58
So one morning after the deployment, I am sitting at the table at my house drinking coffee waiting to go to work and I get a phone call and it’s a user saying that they are not able to log in.

27:14
So I let the user know, hey, I’ll be in the office in 15 minutes or so, so I’ll take a look when I get there.

27:22
So from the time it took me to drive from my house to work and log in, I now have a whole host of emails from people saying they’re not able to log in.

27:33
And so of course, I try to log in and I get that standard message, unable to log in, See your System Administrator.

27:41
I’m like, well, that’s me.

27:42
So I start doing some digging.

27:45
You know what’s going on here.

27:46
I had a local account, so I tried to log in with my local account.

27:49
Worked just fine.

27:50
OK, well that lets me know it’s something to do with the LDAP authentication.

27:56
So I called my IT group and asked them what changed.

28:02
Yesterday my users were able to log into Maximo.

28:05
Today they’re not — so what changed? And they told me that they had done some updates on our domain controllers and rebooted them, but that should not have caused an issue.

28:15
And I don’t know if y’all can see the screenshot here, but the certificate at the bottom at the time it was still valid.

28:21
The date was still valid.

28:23
So I was very confused.

28:24
It looked like a valid certificate.

28:26
I couldn’t figure out why in the world the users couldn’t log in.

28:30
But what I did learn is when they installed those updates and restarted that domain controller, it was close enough to the expiration date that that domain controller reached out and got a new certificate.

28:41
Well, it did not match Maximo, so there was a conflict.

28:46
Now thankfully you can’t see it in this screenshot, but right below that is a little retrieve button.

28:51
You just click the Retrieve button, it goes out and grabs the latest certificate, and all is well in the world again.

28:57
But we were probably down for four hours or so trying to figure this out.

29:02
So fair warning on LDAPS, if that’s the authentication method that you use.
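One way to sanity-check which certificate a domain controller is currently presenting over LDAPS, so you can compare it against what Maximo has stored, is an openssl probe; the host name below is a placeholder for your DC:

```shell
# Probe the domain controller's LDAPS port (636) and print the certificate's
# validity dates and fingerprint. "dc01.example.com" is a placeholder host name.
echo | openssl s_client -connect dc01.example.com:636 -servername dc01.example.com 2>/dev/null \
  | openssl x509 -noout -dates -fingerprint
```

If the fingerprint differs from the one Maximo holds, you are in the mismatch scenario described here, and re-retrieving the certificate is the fix.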

29:10
Great story Melissa.

29:13
And then we wanted to highlight an OpenShift upgrade that had to take place as part of the project.

29:19
This is typically not the case in a project schedule, but it just so happened for us that when we started the first installation of OpenShift, it was at 4.12.

29:34
And then, you know, throughout the duration of the entire project.

29:38
Near the end, we found out that 4.12 was going to fall out of support right away.

29:47
And we didn’t want to let Cooperative Energy go live just to fall out of support right away for OpenShift.

29:56
So we had to perform an upgrade from 4.12 to 4.15 before the go-live. And then, back in MAS 8 as well:

30:12
There was a concept called dynamic catalogs, and by default the approval to upgrade MAS is set to automatic.

30:29
And with dynamic catalogs, as IBM releases new versions of MAS, because the upgrade approval is set to automatic, it would just automatically upgrade your MAS version, right?

30:42
Well, that poses a problem, because you don’t want to be working within your MAS instance just to have it shut down all of a sudden to perform an upgrade, with no scheduled outage window.

31:00
So that happened to one of the dev environments.

31:04
Thankfully it didn’t cause much disruption, but it did take that instance’s version out of alignment with the test and the to-be production environments.

31:19
So we realized that.

31:22
We then had to equalize the MAS versions throughout all three environments, performing an upgrade on the other ones, and ensure that we turned off that automatic approval.

31:36
So we made sure that it is set to manual going forward, and that all updates will be performed in a controlled manner during a scheduled outage.
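MAS components are delivered as operators managed by the Operator Lifecycle Manager, so one way to enforce this kind of manual approval is to flip the subscription’s installPlanApproval field; the subscription name and namespace below are placeholders, not the actual instance values:

```shell
# Hypothetical example: require manual approval of operator install plans
# so upgrades only happen during a scheduled outage window.
# List the subscriptions first to find the real names (namespace is assumed):
oc get subscriptions -n mas-core

# Then patch the relevant subscription (name "ibm-mas" is a placeholder):
oc patch subscription ibm-mas -n mas-core \
  --type merge -p '{"spec":{"installPlanApproval":"Manual"}}'
```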

31:54
OK.

31:54
So we talked a little bit about the disaster recovery strategy.

32:01
And I wanted to dive in a little bit more on how the actual persistent storage files are transferred from the production cluster over to the DR cluster, right?

32:15
So we’ve got NetApp as the persistent storage.

32:22
However, NetApp is attached to OpenShift, right?

32:29
And when OpenShift reads it, it’s got a certain IP that it is registered with.

32:37
And using the native NetApp backup, you can only back up the entire storage.

32:44
You can’t target specific files and folders that you want to send over to the backup, right?

32:53
So we had to find a different way to specifically target just the attached document files and the JMS queue messages, to send from one NetApp persistent storage in the prod data center to the other in the DR data center, right?

33:12
And also, if we used the native backup, it would take the same IP and try to register the NetApp with that same IP, right?

33:26
Which is different, right?

33:27
Hopefully everyone can see, but in this example the NetApp IP in the prod data center is 10.0.0.1, and in the DR data center it’s 10.0.1.2, right?

33:45
So if we were to use the native backup to overwrite what we have set up in the DR data center, it’s going to try to register the NetApp persistent storage with the original IP, 10.0.0.1, which the DR OpenShift will not recognize.

34:05
So that would cause another problem.

34:08
So in order to mitigate that, we’ve got the persistent storage in the production data center mounted to the admin machine, and it’s got a scheduled rsync cron task.

34:24
And with this rsync cron task, you can target specifically what you want to sync over to your destination, right?

34:38
So we specified to take the attached document files and the JMS queues, and that goes through the admin machine in the DR data center, which has access to the NetApp persistent storage as a mount.

34:59
So when the rsync cron runs, it will take the specific files from the NetApp in the prod data center, sync them through the admin machine, and insert them into the NetApp persistent storage in the DR data center.
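The sync described above could look something like the following on the DR admin machine; the mount paths, host name, and script name are illustrative assumptions, not Cooperative Energy’s actual configuration:

```shell
# Hypothetical sketch: pull only the attached documents and JMS queue data
# from the prod admin machine's mount into the DR NetApp mount.
# Host name and paths are placeholders.
rsync -az --delete prod-admin.example.com:/mnt/prod-netapp/doclinks/ /mnt/dr-netapp/doclinks/
rsync -az --delete prod-admin.example.com:/mnt/prod-netapp/jms/      /mnt/dr-netapp/jms/

# A crontab entry running such a sync script every 30 minutes might look like:
# */30 * * * * /usr/local/bin/sync-mas-storage.sh >> /var/log/mas-drsync.log 2>&1
```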

35:19
OK.

35:24
And then specifically for MongoDB.

35:27
So MongoDB is a NoSQL database that’s been added as part of Maximo Application Suite to take care of user information, right?

35:41
And the underlying data for MongoDB is also stored in persistent storage.

35:50
So in order to back up that user information, it’s similar to what was shown before, but there is an additional command running within the admin machine to schedule a mongodump.

36:19
And we’re targeting the specific core and catalog files that are in the MongoDB data, right?

36:27
So there is a cron that is running this mongodump.

36:34
It will grab the files and store them on the admin machine, and then there is the rsync cron that sends those files over to the admin machine in the DR data center.

36:50
And so this is an automatic process, right?

36:53
It’s syncing, I believe, every 30 minutes.

36:57
It’s overwriting the old files it has stored on the admin machine and getting the new files.

37:06
But it doesn’t automatically update the MongoDB files in the DR data center; that gets manually executed through a mongorestore command during the disaster recovery process itself.

37:27
All right, so let’s say something happens and the production data center is completely down.

37:35
We’ve already got the latest user files stored within the admin machine.

37:41
We just need to execute this mongorestore to apply those files into the MongoDB persistent storage in the DR data center before we start up Manage, and all users should then have access to the MAS environment in the DR data center.
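The dump-and-restore cycle described here might look roughly like this; the connection URIs, credentials, and database names (including the core and catalog database names) are placeholders, not the actual instance values:

```shell
# Hypothetical sketch of the backup side, run on the prod admin machine by cron:
# dump the MAS user-related databases to a local directory.
mongodump --uri "mongodb://admin:REDACTED@prod-mongo.example.com:27017" \
  --db mas_inst1_core    --out /backup/mongo
mongodump --uri "mongodb://admin:REDACTED@prod-mongo.example.com:27017" \
  --db mas_inst1_catalog --out /backup/mongo

# During an actual DR event, after the dump has been rsynced across, restore it
# into the DR MongoDB before starting Manage. --drop replaces existing collections.
mongorestore --uri "mongodb://admin:REDACTED@dr-mongo.example.com:27017" \
  --drop /backup/mongo
```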

38:06
OK, and one thing to note about that DR: we have tested it. Back in May we did a full disaster recovery drill, and we were able to fail over to our backup data center with no issues. And so, on to lessons learned.

38:25
So anytime you go through a big migration like this, a big project, you’re going to have lessons learned.

38:31
So one of the main things that I took away from this was that I’m not very good at scheduling.

38:36
So we scheduled our go-live for a particular date, you know, avoiding payroll, avoiding all of the things that you would think you should avoid — outages and things like that.

38:49
And I don’t know why I did not remember this, but midway through we figured out when you take down Maximo, to upgrade Maximo, you also need to take down Oracle EBS, which is our ERP.

39:01
Well, the date that we had chosen was the very last day of our benefits open enrollment.

39:07
So that wasn’t very good.

39:10
But we did get final approval from everyone, and we were able to do it very quickly.

39:15
Well as far as the EBS side very quickly.

39:19
So that’s one thing.

39:21
And then testing — I cannot stress how important it is to really test during UAT.

39:27
We found some things after the fact that I’m not sure if they just didn’t get tested well or what.

39:35
I don’t know what happened, but we found a scheduler issue which ended up being a bug that IBM has since fixed in the latest version.

39:45
And then training: we did some training classes at our plants.

39:52
We’ve had great user adoption.

39:53
We also put out some “here’s what’s changing” documentation between 7.6.1.2, which is what we were on, and MAS 8.

40:02
So we had great adoption.

40:05
After Go Live, we really only had maybe a handful of requests for creating desktop shortcuts.

40:12
We published the URL on our SharePoint site, so that was easy to do.

40:19
So yeah, it was pretty quiet.

40:26
And then the last thing that I want to say is Cohesive was awesome.

40:30
They gave us hypercare support, which we really didn’t end up needing because everything went so smoothly.

40:38
We’re continuously trying to improve.

40:41
You know, I talked earlier about our mobile implementation.

40:44
We actually have UAT on that next week.

40:47
So our users are really looking forward to the mobile app.

40:51
We continuously do training.

40:53
We have great support from Cohesive, and we’re looking to upgrade to 9.1 in Q1 of next year.

41:01
So always something.

41:08
OK, well this marks the end of our presentation.

41:12
If Cooperative Energy’s story today sparked some ideas for your own organization, don’t stop here.

41:19
But this is your chance to take the next step.

41:21
If you’re considering an upgrade to Maximo Application Suite, we’re offering a free evaluation to help you assess where you are and how we can support your journey.

41:33
Just scan the QR code to access the offer and start the conversation.

41:38
Whether you’re exploring options or ready to move forward, we’re here to help.

41:42
Thank you.

41:46
Wonderful job guys.

41:49
We have time for some questions.

41:51
I’d just like to remind you, if you have any questions, to please enter them into the question box located at the top right-hand corner of your screen.

42:02
But I’ll get started with a few that already came in. For organizations with a small number of concurrent users in Maximo,

42:10
is it necessary to deploy OpenShift in a cluster, which requires at least five servers?

42:19
Sorry.

42:21
So yeah, I can answer that.

42:22
So yeah, I know that in the presentation we had gone over Cooperative’s setup.

42:28
We deployed two full clusters, right?

42:33
And I get that for some organizations, let’s say small organizations, where in Maximo 7.6 they just had one VM running that served as their dev, another VM as test, and another VM as prod.

42:52
That’s only three servers, right?

42:55
And to move over to the cluster, it’s a considerable amount of additional servers, right?

43:06
What I will say is that there is actually another option called single node OpenShift that organizations not supporting a large number of concurrent users can take advantage of.

43:24
And single node OpenShift just runs on one server. The specs for that server, if I remember correctly, are 16 CPUs, 64 GB of RAM, and 400 GB of disk space, and that will run OpenShift.

43:45
You can install OpenShift, MAS, and Manage only.

43:51
Now, remember that Maximo Application Suite is a suite of applications, right?

43:57
So Manage, or what we knew as Maximo, is only one of the applications in the suite, right?

44:04
On top of that, you have Monitor, Assist, Health, Visual Inspection, and a couple of others as well, right?

44:13
But for single node OpenShift, it really is limited to supporting Manage only as part of that suite, because of the restriction on the amount of resources it has, right?

44:30
And that’s OK for some organizations who only use Manage and are not looking to jump onto the other applications right away.

44:39
So certainly that is an option.

44:43
And the other thing to keep in mind is that single node OpenShift will also only support up to 70 concurrent users maximum, right?

44:54
So if that falls within the needs of a smaller organization, then absolutely you have the option to deploy single node OpenShift and go that route.

45:12
Perfect.

45:14
Couple of questions came in that I’m going to answer really quickly.

45:17
The recording will be provided within the next day or so once the recording is rendered and we can get the e-mail out.

45:26
So stay tuned for that.

45:27
And then if you would like a copy of this presentation, in the top right-hand corner there’s a Materials tab, and the PowerPoint as a PDF is in there as well.

45:40
OK, back to you guys.

45:41
So, questions: can you explain the reason to split prod versus non-prod AppPoints again?

45:50
Sure.

45:51
Yeah, I can take that question as well.

45:54
So there are some pros and cons to splitting it up, right.

45:58
So we chose to split it up because we wanted to ensure that the AppPoints for production won’t get affected by non-production users, right?

46:13
So the other option is to keep it all within one single pool.

46:19
And yes, the pro is that you have full utilization of, or access to, your entire pool of AppPoints.

46:28
However, because dev, test, and production all share that single pool of AppPoints, at certain times, like when there’s heavy development or a lot of testing, the users logging into the non-production environments can start consuming more AppPoints than you had initially planned for, right?

46:57
So that could be a potential blocker for your actual production users to access the production environment, right?

47:06
So let’s say, during a high-development period, there are 10 more users logged into the non-production environments than initially anticipated. That could prevent 10 users from logging into production, right?

47:28
Whereas if it’s split, then you know for sure: OK, I’ve got a set amount for the non-production environment.

47:35
Non-production users will only consume the AppPoints allocated in that pool.

47:43
And then for my production users, we know for sure that they will have access to this other pool of AppPoints and that it won’t get interrupted in any way, right?

47:59
I guess the downside to that is, let’s say during typical times the non-production environments don’t have a lot of users logged in; then you potentially have some AppPoints just sitting there unutilized that could have been utilized for the production environment, right?

48:29
So there are definitely pros and cons to both ways of doing it. It really depends on what the organization chooses to go with and what risks are involved.
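The shared-versus-split tradeoff described above comes down to simple arithmetic. Here is a minimal sketch; the pool sizes and per-login AppPoint costs are hypothetical figures for illustration, not IBM’s actual licensing values:

```python
TOTAL = 100          # total AppPoints owned (hypothetical figure)
PROD_POOL = 70       # split scenario: reserved for production
NONPROD_POOL = 30    # split scenario: reserved for dev/test

def can_log_in(pool_size, points_in_use, login_cost):
    """A login succeeds only if its AppPoint cost still fits in the pool."""
    return points_in_use + login_cost <= pool_size

# Shared single pool: a dev/test spike of 45 points plus 50 in use by prod
# leaves too little room for the next production login costing 10 points.
print(can_log_in(TOTAL, 45 + 50, 10))   # False: the prod user is blocked

# Split pools: the dev/test spike is capped at its own 30-point pool,
# so the production pool still admits the same 10-point login.
print(can_log_in(PROD_POOL, 50, 10))    # True
```

The flip side, as noted above, is that in the split scenario any idle non-production points cannot be borrowed by production.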

48:51
Wonderful. If we have external add-ons such as TRM Rules Manager, how different is the configuration for MAS?

49:07
So if we’re talking about an integration, I guess I’m not super familiar with how it’s integrated.

49:19
But let’s say it’s going through REST APIs, for example; that would still carry forward to MAS.

49:28
In that case, the only difference is that MAS does not allow basic authentication anymore.

49:35
You would have to go through API keys.

49:38
And then if it’s touching certain object structures as well: by default, MAS does have object structure security turned on.

49:51
So you have to ensure that you enable authorization to those object structures in order for them to be used.

50:01
And then you have to generate the API key using a user that does have access to those object structures, and use that API key in your environment to make sure it can communicate with Maximo Application Suite.
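As a concrete illustration of the API-key approach, here is a minimal sketch of building an authenticated request against a MAS Manage REST endpoint. The host name and key below are placeholders, and the `/maximo/api/os/<object structure>` path follows the MAS Manage REST API convention; verify the exact endpoint and key for your own environment:

```python
import urllib.request

# Hypothetical MAS Manage host and API key (placeholders, not real values).
BASE_URL = "https://mas-manage.example.com/maximo/api"
API_KEY = "replace-with-generated-key"

def build_request(object_structure, query=""):
    """Build a GET request authenticated with the 'apikey' header,
    since MAS no longer accepts basic authentication."""
    url = f"{BASE_URL}/os/{object_structure}{query}"
    return urllib.request.Request(url, headers={"apikey": API_KEY,
                                                "Accept": "application/json"})

req = build_request("mxapiasset", "?lean=1&oslc.pageSize=10")
# urllib capitalizes stored header names, so the key is retrieved as 'Apikey'.
print(req.get_header("Apikey"))   # replace-with-generated-key
# urllib.request.urlopen(req) would execute the call against a live system.
```

The key must be generated for a user who is authorized to the object structures the add-on touches, as described above.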

50:23
Great, thank you.

50:25
What information is contained in the MongoDB?

50:34
Is that information being used for app points for licensing?

50:40
Yes.

50:40
So MongoDB will store your user information.

50:45
Specifically users, right? And tied to each user is the AppPoint allocation.

50:52
So it stores both of those pieces of information, and it is accessed on login, right?

51:00
Because login is handled at the MAS level, whereas in Maximo 7.6 it used to be handled by Maximo itself, right?

51:12
But this is kind of like: you log in, you first go into the MAS layer, and then from there, because Maximo Application Suite offers you a whole bunch of applications, right?

51:23
Not just Manage; you could potentially have Monitor, Visual Inspection, Assist.

51:31
That’s where, in MAS, you can pick what applications you want to access, right?

51:37
So from that point, you click into Manage and log into Manage, right?

51:42
But it’s because of that split that MongoDB is required.

51:46
And that’s where it houses your user information.

51:51
It’s got your user ID, address, primary email, telephone numbers, all of that.

52:00
And then again, the AppPoints allocated to that user.

52:07
That information is also stored in MongoDB.

52:11
And that’s why it’s important to sync that information in MongoDB.

52:19
Let’s say you’re doing a disaster recovery, right? Because let’s say in your production environment, you’ve been adding, you know, 10, 20, 30 users throughout the months that it’s being used.

52:31
And then in your DR environment, those users aren’t added there, right?

52:36
It’s not being used.

52:38
And so in the disaster recovery scenario, you’ve got to make sure that that user information gets sent over to the DR data center so those users can actually log in. Otherwise they won’t be able to log into the DR environment, because they’re not in the MongoDB for the DR environment, right?

52:59
Thank you.

53:00
This sounded very Manage-focused.

53:04
Was there any planning in this move to look towards Monitor, Health, Predict?

53:12
I guess I could take that.

53:14
This was just Manage.

53:16
This was just lift and shift.

53:18
It was taking what we already had in the 7.6.1.2 environment and moving it over onto Red Hat OpenShift in the new infrastructure.

53:26
We actually have had some demos with some of our internal teams on Health, Monitor, and Predict, and we have a lot of interest in it.

53:36
So that is something that we’ll probably be pursuing.

53:39
We have some data cleanup that we need to do, but yes, we’re very interested in that. It was not part of this particular migration, though.

53:48
And one thing I’d just like to add to that, from an infrastructure perspective, right?

53:54
Because I talked about single node OpenShift just now.

53:57
But let’s say you were planning to go ahead with the Maximo Application Suite upgrade and, looking ahead, your organization does want to take advantage of the other applications on top of Manage. That kind of decision would go into consideration when you’re deploying the infrastructure for your MAS, right?

54:21
Because in this case, if you wanted to take advantage of those other applications, you need to make sure that you have a cluster that will support that down the road, right?

54:30
That is scalable.

54:32
And so in that case, deploying a traditional OpenShift cluster would be the way to go, because you then have the option to scale up your worker nodes and increase the overall cluster resources as you need, as you’re deploying more of those suite applications.

54:55
Great.

54:57
What is the high-level process of converting, say, an inbound table interface to MAS?

55:07
So if we’re talking an interface (IFACE) table, really that doesn’t change from 7.6 to MAS, right?

55:16
Like, the database continues to be a standalone database.

55:21
I guess you could deploy a database within OpenShift if you really wanted to.

55:27
That’s not really the route we’ve seen most organizations go, because DBAs still want to have direct access to their databases, right?

55:38
They don’t want to learn OpenShift; they don’t want to have to access the DB through OpenShift, right?

55:44
So if we’re talking about an integration through an interface table, you’re basically still interacting with the standalone database. There’s no change to what is needed for that interaction.
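To make that concrete, here is a sketch of the classic inbound interface-table interaction, which stays the same under MAS because the external system still talks directly to the standalone database. sqlite3 stands in for the real database here, and the table and column definitions are simplified stand-ins for the Maximo-generated interface table and its MXIN_INTER_TRANS queue table:

```python
import sqlite3

# sqlite3 stands in for the real standalone Maximo database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE mxasset_iface (transid INTEGER, assetnum TEXT, siteid TEXT)")
db.execute("CREATE TABLE mxin_inter_trans (transid INTEGER, extsysname TEXT, ifacename TEXT)")

def queue_asset(transid, assetnum, siteid):
    """External system writes the data row plus the matching queue row in one
    transaction; Maximo's polling cron task then picks up the queue entry."""
    with db:
        db.execute("INSERT INTO mxasset_iface VALUES (?, ?, ?)",
                   (transid, assetnum, siteid))
        db.execute("INSERT INTO mxin_inter_trans VALUES (?, ?, ?)",
                   (transid, "EXTSYS1", "MXASSET_IFACE"))

queue_asset(1001, "PUMP-100", "BEDFORD")
print(db.execute("SELECT COUNT(*) FROM mxin_inter_trans").fetchone()[0])  # 1
```

Because this whole exchange happens in the database layer, moving Manage itself onto OpenShift doesn’t alter it.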

56:06
Thank you.

56:08
Once MAS Manage has been installed, are there any additional configuration steps that need to be executed for Maximo Mobile?

56:24
Melissa, is that something you want to take a stab at, since you guys are doing mobile, right?

56:29
So it’s available to you, as in it’s already included in Manage, but it’s not something you can just flip a switch and turn on.

56:39
There is configuration behind the scenes that needs to be done.

56:44
And we actually have a Cohesive team that’s helping us with that configuration.

56:49
You use something called the Maximo Application Framework, which has to be installed in order to be able to manipulate the screens and add fields, take away fields, those kinds of things.

57:02
It’s done through the Maximo Application Framework.

57:05
So there’s definitely configuration involved.

57:09
It’s not just downloaded from the App Store and you’re ready to go.

57:16
All right, I think we have time for maybe one more.

57:19
Let me just make sure.

57:24
Would having two nodes versus a single node provide more robust reliability versus failing over to DR?

57:37
I’m taking it as: you would have your production environment in one single node OpenShift and then your DR in another single node OpenShift.

57:56
Hopefully I read that right.

57:59
I would say that it’s the same, because instead of a traditional cluster, you’re just kind of shrinking that down into a single node OpenShift instance, right?

58:15
So yeah, you would still have your production that’s running and then your DR that is kind of on standby.

58:28
So that failover process, I don’t see it changing; it should be the same.

58:36
But yeah, if I didn’t hear or understand that correctly, please reach out to me separately and we can have a more in-depth discussion on that.

58:49
Yeah, Mark, I think you’re still on; if you need more clarification, just let us know.

58:58
Well, that is, let me just take one more pass.

59:01
A lot of questions are coming in.

59:02
I just want to make sure I don’t miss any.

59:08
All right.

59:10
Oh, I see, Mark.

59:11
Hands raised.

59:12
Mark, can you just type it into the chat?

59:14
I’m so sorry.

59:23
He said all good.

59:24
So we’re good.

59:25
So I think you answered it right, Bennett.

59:27
There we go.

59:27
Thanks, Mark.

59:29
OK.

59:30
So I think that answers all of our questions.

59:33
If you have any other questions, please feel free to reach out to us.

59:38
And again, we’ll be sending out an e-mail.

59:39
But with that, we conclude our webinar.

59:42
I want to thank our speakers for joining us today as well as your attendees for listening.

59:47
We have additional information available on our website in the form of blog posts, articles, case studies, and brochures.

59:54
If you want to discuss your strategic path forward, please reach out.

59:59
You can also reach out via our website, and that’s a wrap.

1:00:02
Thank you so much, Melissa and Bennett, and thank you again for everybody for joining.

1:00:06
Have a great day.

1:00:08
Thank you.

Your Journey Starts Here

Discover what Cohesive can do for you

Partner with the market-leading IBM Maximo service provider and systems integrator.
