RHONDOS + 2 STEPS LIVE WEBINAR
All right everyone. Thank you all for joining. We're going to go ahead and get started. A quick note on forward-looking statements: over the course of this presentation, we may make some claims about products and technology that are forthcoming. This is a bit of a disclaimer at the front of the presentation that what you hear today may change over time.
So, thanks a lot for taking some time to spend with us this afternoon. My name is Matt Colabrese, I'm the Director of Customer Success at Rhondos. Many of the folks on the line probably know us through our exclusive distributorship as it relates to the SAP PowerConnect for Splunk solution. And let me tell you, we are super excited to bring another product into the fold here: 2 Steps. In just a moment I'll be introducing Simon Trilsbach and Pierz Newton-John, who'll take you through the technology over the course of the presentation today.
This is focusing on synthetic monitoring, not only for SAP and bespoke interfaces like SAP GUI and Fiori that a lot of you might be familiar with, but a variety of other technologies as well. In the synthetic monitoring space, a lot of the time you're limited to just browser-based, Selenium-oriented synthetic monitors. 2 Steps has a native Splunk solution that will allow you to design synthetic monitoring tests for just about any technology, from SAP, to systems that you would access through Citrix, to systems that require multi-factor authentication, to even mobile apps and devices. So, we are just enormously excited to be introducing this technology to you all today. Buckle up. I'm going to turn it over to Simon now, to go through some introductory content before we demonstrate the solution.
Thanks very much, Matt. And first of all, thank you to Matt, Jennifer, Bran and the team at Rhondos for the opportunity. We are super excited about the partnership. And thank you everybody for spending some time with us this afternoon.
My name's Simon Trilsbach, I'm the CEO and co-founder of 2 Steps. And I'm joined by my colleague Pierz Newton-John, who is the Head of Front-End Development and will be running through the demo of the product. We're super excited to share 2 Steps with you. The agenda, really, is to give you some background as to why we built the product, what makes the product unique, and then we'll have a look at some case studies. But we want to spend a bit of time actually demoing the product, so you can have a look at what it does, why it's different, and hopefully why it's relevant to solving some of the business challenges that you may be facing in your organizations today.
Let me start with some of the macro challenges that modern organizations face. First of all, we've got remote workers, and a lot of organizations have remote workers in different office locations; it's something that we see quite often. But this has been accelerated exponentially because of COVID. So now we've got a lot of employees that used to be in the office that are now working at home, trying to access those mission critical applications that they used to access in the office, from their desktop or laptop at home, which is causing some issues.
There is a link between productivity and revenue. And there is also a link between the speed of applications and customer experience. And customer experience is really becoming a board level concern now, because the modern consumer absolutely expects that applications work and are fast and reliable. They call that the Uber-ization of what's happened over the last five to 10 years. On top of that, there's a lot of digital transformation that's going on within organizations. So whether that's rolling out mobile applications, or moving to web based applications, or shifting infrastructure to the cloud. So there's a lot of movement that's happening. And there's still, in most organizations, most large enterprises, there's still legacy applications that do very, very important things. Think of banks, financial services, telcos, you'll still see mainframe systems there.
And IT teams are really struggling to understand performance when they're making these changes or rolling out applications, unless they have a real world view of application performance. And that is where 2 Steps comes in. The costs are very significant when it comes to outages. These numbers to me are staggering. The most recent data I could find was from 2016, where we're looking at almost $9,000 per minute for an outage. And in terms of the effects, end user productivity, lost revenue, and business disruption are the top three.
Basically what this is saying is: the quicker you know that there is a problem, the quicker you can remediate the problem, the more cost you're going to save, and the less revenue leakage there will be within your organization. And of course, the Holy Grail is predicting that there's going to be an issue before it actually happens. So the way IT operations teams are trying to circumvent this is through a range of different monitoring options. We have real user monitoring, or RUM, which is really the forte of all of the application performance monitoring vendors you've probably heard of: AppDynamics, Dynatrace, New Relic, Datadog. A lot of their marquee products are on the real user monitoring, or RUM, side of things.
Now, this is where you're embedding an agent or putting a script into the application, which is sometimes not possible. But even if it is, a limitation with real user monitoring is that it needs real traffic on the network for you to actually get the performance data. So that can be a challenge, because it's almost too late: if you get the performance data and realize there's an issue, customers are already experiencing the issue, and then you're scrambling to catch up.
The other piece is the synthetic side. And as Matt mentioned, the majority of synthetic monitoring solutions out there are based on Selenium. Selenium is designed to automate browsers, and that's about it. So if you're looking at things like client server applications, an SAP application, a mobile application, Citrix virtualized applications, or even complex workflows like two factor authentication with one-time pins, then Selenium is just not fit for purpose. Good for browsers, but it doesn't really work outside of that. And then finally, you've got network monitoring, but of course the limitation there is there's no visibility of end user experience.
A great quote from somebody who's become a great contact for us, Wiley Vasquez, who is one of the senior market specialists for IT operations at Splunk: "Synthetic monitoring is one of the most powerful leading indicators of IT service health." So think of this, coming back to what I said earlier: the quicker you know that there is something happening that's an anomaly or erroneous, and the quicker you can start to investigate that, the quicker you can start to remediate. So that's the business that we're in.
What we've built with 2 Steps is a codeless synthetic monitoring capability. There's no agents involved. There's no scripting involved, the framework that we use to automate the workflows, is visual recognition. So we're looking at a screen, and we're telling 2 Steps to look for unique elements on that screen. And then to perform a series of actions. We can emulate how a user would typically use that application. And we call those user journeys. They're essentially workflows, but we call them user journeys.
Now it's important to know that it's not based on x, y coordinates, so the images can move around the screen and 2 Steps is going to find them. There's a lot of flexibility in terms of the match weighting that you can use. And what this does is give us incredible flexibility in terms of the types of applications and platforms that we can monitor: everything from mobile to mainframe, Windows to web, Citrix to client server, there's a lot that we can do. As long as there's a user interface, and we can control it remotely, there is a fantastic chance that we can set up the automation and build the user journeys. Once we build the user journeys, we can start to time them in a controlled environment to let you know that everything is performing as it should be.
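To make the position-independence idea concrete, here is a toy Python sketch. This is not 2 Steps' actual engine, which works on real screen captures with configurable weighting; the "screen" and "button" below are tiny made-up grayscale grids, and a naive exact-match search stands in for real template matching.

```python
# Illustrative only: locate a UI element by its pixel pattern
# rather than by fixed x,y coordinates.

def find_template(screen, template):
    """Return the (row, col) of the top-left corner where `template`
    appears inside `screen`, or None if it is not on screen."""
    th, tw = len(template), len(template[0])
    sh, sw = len(screen), len(screen[0])
    for r in range(sh - th + 1):
        for c in range(sw - tw + 1):
            if all(screen[r + i][c + j] == template[i][j]
                   for i in range(th) for j in range(tw)):
                return (r, c)
    return None  # element not visible yet

# The same "button" pattern is found wherever it has moved to:
button = [[1, 1],
          [1, 0]]
screen_a = [[0, 0, 0, 0],
            [0, 1, 1, 0],
            [0, 1, 0, 0]]
screen_b = [[1, 1, 0, 0],
            [1, 0, 0, 0],
            [0, 0, 0, 0]]
print(find_template(screen_a, button))  # (1, 1)
print(find_template(screen_b, button))  # (0, 0)
```

Because the search scans the whole screen, the click target can drift around between runs and the automation still finds it, which is the property Simon is describing.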
Because there's no agents or no scripting, you can spin up these tests in minutes, not hours, using non-technical resources. So what this enables organizations to do is push the ownership of the synthetic monitoring tests back to the business stakeholders. So if there's a team that looks after an application, all of a sudden, they can start to own this piece, and kind of be masters of their own destiny, which helps the IT operations team, which helps the monitoring team move on to more high payoff activities.
And then the last thing is, this is purpose built for Splunk. All of the queries, all of the performance data, when you're building the tests, when you're looking at video replays of the synthetic monitoring tests, everything is in Splunk. So from go to whoa, you're never outside of Splunk. And again, we'll have a look at how that looks when Pierz runs through the demo.
What I thought we would do is have a look at some of the customer challenges that we've been able to address, and give you a quick high level overview of what the customer problem was. Then I'll hand over to Pierz and he can run through the product and relate the demo to the customer problem. Allianz in the States came to us because they were looking for an application performance monitoring tool, and there had been an increased need from business stakeholders, which was overwhelming the IT operations team.
The other thing that they had with their incumbent solution was a really weak integration into Splunk, which required reformatting and resizing of data, which led to additional work. So you had a team with a tool that was difficult to use, that required technical skills, and that then required additional work to actually push the data into Splunk. And you had more and more requirements coming in from the business. They were looking for a tool that was very easy to use, that they could push back to the business, and that integrated very nicely into Splunk. So Pierz, that's your cue and I will hand it over to you.
I'm going to demonstrate how we go about recording, editing and scheduling a test. What I'm going to do is just start out with something very quick: spinning up a test in Chrome. And I'm going to be automating a chess website. You can see that I've entered the URL and 2 Steps has immediately navigated to that here on the left. And over here on the right is where we have the commands that I'm creating. So I'm going to just...
2 Steps is entering my username and a password. And you'll see that as I'm clicking on these elements, it's actually taking the text for me automatically to label those elements. I'm just going to enter a wait here, because I happen to know that this site takes a moment to become responsive… Coming through to the opening explorer here. So this is optical character recognition. It works about 90% of the time; occasionally, you have to enter the labels manually.
There you go, I could keep going with that. But that's enough. I'll now reset that. And I will play that through. And so I don't know how long that took to do, but maybe a couple of minutes. I'm going to save that. And once that's saved, I can then go and schedule a test. I go here to the scheduler, click new schedule, the default is five minutes, I can save that. And that's it.
So that is literally how easy it is to create a test. You're just basically interacting with this test screen here in a similar way to how you would interact with the website directly. I'm now going to create another one for you. And I'm going to dive a little bit deeper into some of the functionality. In this case, I'm going to be looking at the Bureau of Meteorology website in Australia.
Okay, so the first thing I want to do is hover over a menu. I'm just going to click up here. This is a hover menu. So here are my different commands: I've got click, text input, wait for image, there's an if command, and this is the mouse over command that I want here. I'll give a label to this image. Okay, so now the menu pops up. Now I want to click on this first item, and what I'm going to do is Shift and drag over the text there, and you'll see how it's extracted the text from that.
Now I've got advanced menu options here. So I've got right clicks, middle clicks, double clicks, various other options. I can crop the region of the screen that I want to look at, add varying delays after the command, and so forth. I'm just going to leave the options as they are. I'm going to go to the Melbourne radar here. Okay, to the 256 kilometer radar.
This is a live view. As you can see, this is updating, a little bit of cloud there. It's pretty quick, the actual refresh rate. Now you can see that the website has scrolled down a little, and I want to actually enter some text up here, so I'm going to need to scroll to the top. Now, if you need to scroll on a website, I can either use the send special key command.
Basically, I can construct any type of key command sequence: Ctrl C, Ctrl V, Ctrl Alt Delete. I can use page up here to go to the top of the page if I wanted to do that, or I can simply drag, like so. And I can adjust the image that 2 Steps has selected here. I'll give this a label. So it's a drag command, and it's dragged me up to the top of the screen. Now I'm going to enter a search term here. So I select here, and then I choose click text input, and I'll type in "Melbourne."
And then I'll go select Search. And here I've come through to the search screen. And the last thing that I would probably want to do in this test is select something on this screen to verify that I've arrived where I want to. So... Oh, it hasn't extracted that text. So I'll just label that… And I'll say wait for image rather, which means that it will wait until this image appears on the screen. And I can specify a wait period; I'm just going to leave that, which means it will wait until the image either appears or times out. Okay, so I can save that.
Now, often, you want to divide a test up into various functional areas. And the way that we do that in 2 Steps is through what we call checkpoints. So this end point here is a checkpoint; every test has to have at least one checkpoint, which by default is called end, but you can rename that. Now in this case, I've basically done two things: I've looked at the radar, and then I've entered a search term. So I might want to separate those out into two checkpoints. I'll put a checkpoint here and call that radar, and then another one here, which I'll call search, which covers the search functionality.
And I can actually move this around if I misplaced it slightly, so you can just drag those around. And now I'll show you what happens when I play. You'll see that there is a stopwatch here, and there's a stopwatch here. So as the test is playing through, I'm getting the running time both of the test as a whole and of each of these checkpoints. And this helps me to establish benchmarks for the thresholds that I'll set for a warning value and a timeout. For example, this radar checkpoint took 10 seconds, while the default warning and timeout values for the checkpoint are 30 seconds and 60 seconds. So I can double click that, and it sets some suggested thresholds based on the runtime for that checkpoint, or I can simply enter my own values here.
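As a rough illustration of that double-click-to-suggest behavior, the sketch below derives warning and timeout thresholds by padding an observed runtime with headroom. The 1.5x and 3x factors are my assumption for illustration, not 2 Steps' documented defaults.

```python
# Illustrative only: suggest (warning, timeout) thresholds from an
# observed checkpoint runtime. Factors are assumptions, not 2 Steps'.

def suggest_thresholds(runtime_secs, warn_factor=1.5, timeout_factor=3.0):
    """Return (warning, timeout) thresholds in whole seconds,
    guaranteeing timeout > warning >= 1."""
    warning = max(1, round(runtime_secs * warn_factor))
    timeout = max(warning + 1, round(runtime_secs * timeout_factor))
    return warning, timeout

print(suggest_thresholds(10))  # (15, 30)
```

The point is simply that a threshold suggested from a real measured baseline is far more meaningful than a one-size-fits-all default.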
And I can do the same thing for the test as a whole: I double click that to set some suggested timing values for the test as a whole. So I can save that. And now I probably want to go ahead and schedule this, so let's do that. I click new schedule. And here I have... The default is five minutes; I can set it to whatever I want here. Now I have some advanced scheduling options. I can schedule by the month, by the day of the week, or by the day of the month. I can schedule for different hours of the day, and so forth.
For example, if I just want to do weekdays... Sorry, weekends, I can do that. And you'll see the description changes to say every 15 minutes on weekends, and I can modify that, though usually the automatic description is fine. And then, if you have different locations, you can select those. So if you're running from different offices, you can select where you want to run the test. And then there are some advanced options, which I won't go into now. Okay, so I'll save that.
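For a sense of how a scheduler might render that automatic description, here is a small Python sketch. The function name and day labels are illustrative, not part of the 2 Steps API.

```python
# Illustrative only: render a human-readable schedule description
# like the "every 15 minutes on weekends" string seen in the demo.

def describe_schedule(interval_minutes, days=None):
    """Return a description such as 'every 15 minutes on weekends'."""
    weekend = {"Sat", "Sun"}
    weekdays = {"Mon", "Tue", "Wed", "Thu", "Fri"}
    base = f"every {interval_minutes} minutes"
    if not days or set(days) == weekend | weekdays:
        return base  # runs every day, no qualifier needed
    if set(days) == weekend:
        return base + " on weekends"
    if set(days) == weekdays:
        return base + " on weekdays"
    return base + " on " + ", ".join(days)

print(describe_schedule(15, ["Sat", "Sun"]))  # every 15 minutes on weekends
```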
So now that we've got that scheduled, you'll be getting results inside Splunk. So we'll go to the results page, which is just a standard Splunk dashboard with some pre-configured visualizations. So these are the application service levels. You can see I've got four tests running. This one is running at 100% over the last four hours. This particular one's running at 99%. And the other ones are at 100%.
And this one, current status, shows us the most recent run status of each test. So this one here, for example, ran 10 minutes ago, took 47 seconds, and ran over the warning threshold. That's why it's coming up orange. And then you get the individual test runs. I can click on one of these. And depending on the test, you may get a video. I did configure one of these to show... There you go. So if you configure it to, you can save a video of the run. And that can be conditional, based on whether the run failed, for example, because it does take a bit of data storage to keep the video. The video isn't stored within Splunk, so you're not paying for that storage.
But if the test fails, you can get a video, which will help you to diagnose what went wrong. You get individual checkpoint performance over time, the total performance of the test divided into the individual checkpoints, and the checkpoint waterfall for each individual run of the test. And you get network timings if you choose to, which are the individual resources for the website; that obviously only applies to web tests. And you also get the raw event data in JSON format, so if you've got technical people, they can turn that into new Splunk visualizations for you. Okay, so that's basically a run through of how it works. I'll pass it back to Simon.
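Because the raw events are JSON, a technical user can also reduce them outside of the pre-built dashboards. Below is a hedged sketch assuming a hypothetical event shape; the field names `test`, `checkpoint`, `duration`, and `status` are my invention for illustration, not the actual 2 Steps schema.

```python
import json
from statistics import mean

# Hypothetical raw events; field names are illustrative assumptions.
raw_events = [
    '{"test": "bom", "checkpoint": "radar",  "duration": 10.2, "status": "ok"}',
    '{"test": "bom", "checkpoint": "search", "duration": 4.8,  "status": "ok"}',
    '{"test": "bom", "checkpoint": "radar",  "duration": 31.0, "status": "warn"}',
]

def checkpoint_averages(events):
    """Group raw JSON events by checkpoint and average the durations."""
    buckets = {}
    for line in events:
        evt = json.loads(line)
        buckets.setdefault(evt["checkpoint"], []).append(evt["duration"])
    return {cp: round(mean(ds), 1) for cp, ds in buckets.items()}

print(checkpoint_averages(raw_events))  # {'radar': 20.6, 'search': 4.8}
```

In practice this kind of aggregation would be done in SPL inside Splunk itself; the Python version just shows how little structure is needed to go from raw events to a per-checkpoint view.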
Thanks, Pierz. So hopefully, with the first demo, what you've been able to see is how quickly you can set up a test. I think the first one that Pierz ran through, with the chess website, was 10 different steps. And that was knocked over in under two minutes. Again, with no scripting, just point and click, with some simple instructions telling 2 Steps what to do when it finds the images. So very, very simple to use. Very quick to get the tests spun up.
Okay, so let's have a look at a challenger bank now in Australia. ME Bank is one of the fastest growing banks outside of the Big Four here in Australia; they're based in Melbourne. And they came to us with a challenge which really presented itself because of the pandemic. And to one of the points I made earlier, this isn't unique; this is happening everywhere.
Most of the bank staff were forced to work from home. And there were several mission critical applications that they needed to access to do their jobs, which required them to log into the platforms via Citrix, using Okta. There was absolutely no way that they could monitor this with the current solutions that they had. And the challenge was twofold.
One, finding a solution that could automate a process that handles a one-time pin for security reasons. And secondly, finding a solution that could navigate through multiple technology platforms. In this instance, it was Chrome to access the Citrix storefront, then Citrix itself, and then a Windows application. And the reason that they needed to look at the performance was that a number of their employees were complaining about the performance of the login process. So that's something that we took on and were able to implement successfully at ME Bank. And Pierz is just going to show you how you can work with two factor authentication in user journeys.
Okay, so what I've got here is a test that I've recorded earlier, which logs into a Gmail account that has been set up to require two factor authentication. Now, there's some back end configuration with this, which is one-off stuff. It's basically pretty simple: what you need is an SMS provider that gives you a phone number, and then you configure your two factor accounts to receive those codes at that number. There's a little bit more configuration after that, but it's pretty straightforward; I won't show that part of the installation process here. But I'll run through this test now, which I've recorded up to the point where it's requesting the two factor code, and I'll show you the steps we go through to enter that code.
We're through here to the two factor step. Now I'm going to right click and select Enter 2FA SMS code. And now I'm going to choose the SMS provider account that I've already set up. Then I choose the phone number, which is configured to extract the code from the message, and I click OK. Now I just have to basically wait for the code to come through. And there it is. So, 2 Steps enters that for me. Click Next.
Okay, and then I'll go to my Gmail. There you go. And then the last thing I'll probably do here is just do a wait for image to confirm that I've arrived. That's it, it's really dead simple. So long as you've got your SMS provider configured and you've got your phone number configured to extract the code, that's all there is to it: you just enter the two factor SMS command here and it does everything else for you. Back to you Simon.
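Under the hood, pulling a one-time code out of an SMS body is typically a small pattern-matching job. The sketch below shows one common approach with a regular expression; it is an illustration of the technique, not 2 Steps' actual implementation.

```python
import re

# Illustrative only: extract the first standalone numeric code of a
# given length from an SMS body, e.g. "G-483920 is your code."

def extract_otp(sms_text, length=6):
    """Return the one-time code as a string, or None if not found.
    Lookarounds prevent matching inside a longer digit run."""
    match = re.search(rf"(?<!\d)(\d{{{length}}})(?!\d)", sms_text)
    return match.group(1) if match else None

print(extract_otp("G-483920 is your verification code."))  # 483920
```

Real providers vary their message formats, so a production extractor usually supports a per-provider pattern; the lookbehind/lookahead guards here are just a simple way to avoid grabbing six digits out of a longer number.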
Thanks Pierz. So incredibly simple to set up, but incredibly difficult to build in the background. And we think that this piece of functionality is a bit of a game changer, because of the lack of solutions in the marketplace that can handle 2FA and one-time pins, concurrent with the rise of 2FA requirements. Even things like logging into Salesforce, or logging into SAP, at times require two factor authentication workflows. So if that's something that you have at your organization, then please let us know. We'd be happy to discuss further what we can do.
The final case study that we're going to talk about is a government organization, a state government organization in Australia: the Queensland Government. We were introduced to Queensland Health through our friends at Splunk. And the problem that Queensland Health had was that they had 17 different hospitals whose staff were all accessing a patient administration system called Cerner via a Citrix storefront.
And again, there were lots of complaints around the performance of that application. The challenge for Queensland Health was that they were really quite blind: when a complaint came in, was it everybody being affected? Was it just the individual? Was it the hospital that the individual was at? Was it a cluster of hospitals? They really didn't know. And that created a whole set of challenges when they started to categorize the incident and prioritize the incident. Because clearly, if everybody's being impacted, it becomes a P1. If it's just one individual, then clearly it's not going to be such a priority.
But they were shooting in the dark. So we spoke to them, we worked with Queensland Health, and we implemented a monitoring node at each of the 17 hospitals. We were able to demonstrate that we could navigate through the Citrix storefront, log in to Cerner, the patient administration system, through a dummy user account, search for a dummy patient record, do a couple of other bits and pieces, and time all of that, then push the performance data back into one central Splunk instance sitting at head office.
So now you had 17 hospitals with the 2 Steps robots performing this workflow in a controlled environment and pushing the performance data from those 17 hospitals back to the central instance. So when there was a slowdown, or an anomaly, or something happening that was erroneous, they were very quickly able to tell whether it was affecting all of the sites, a cluster of the sites, or just one. And that allowed them to prioritize, categorize, get on top of it, and start to investigate the right pieces of IT infrastructure that might be causing the problem. Pierz, over to you.
Okay, so I'm just going to basically show how we can run a Citrix application. I've already set it up to log in. And I'm just going to automate the calculator app, just to show you basically how simple it is and how it uses the same type of interface that we saw before. Okay, there we go.
While you're doing that Pierz, one thing I'll just call out, on the screen, you can see the multiple spinning wheels, when Pierz is doing some of the actions. That is a Zoom broadcast issue. You don't actually see that. I mean, Pierz is not seeing those.
It looks a little strange. But that's to do with the Zoom broadcast rather than the product.
The Citrix use case has been extremely useful for us as we've built the business out; it's a requirement that comes up time and time again. So again, if you have virtualized applications, via Citrix or any of the other providers, and you're looking to get coverage around what performance looks like, then please let us know. And we can work with you; we're more than happy to look at a proof of concept, so you can make sure it does what it says on the tin.
What we're looking at here is synthetic monitoring done in a different way. So really pre-2 Steps synthetic monitoring has lacked innovation. There haven’t really been any major changes to synthetic monitoring since 2009, when WebDriver and Selenium merged to create Selenium 2.0. What we're doing is really kind of shaking that up. Because we want to be able to provide IT teams with the quickest signals that something is performing incorrectly across more platforms than just web browsers.
IT operations teams want to know that there's an issue before the phone rings. They want to be able to fix problems before the users arrive in the morning, or before they launch a new application or a new website or a new mobile app. And they want to know whether infrastructure issues are actually impacting user experience. Because if you think about it, just from a Splunk perspective, if you're monitoring all of your infrastructure assets, whether that's a network, a server, CPU, whatever that component is, and things are going into the red, then that's good to know. But what you really want to know is, "Okay, well is that causing a problem for my users?" And if it is, then I need to escalate this, I need to prioritize this. And if it's not, then maybe it needs to be a lower priority.
Some of the common issues that we solve, and we've spoken a lot about these, so I'll whiz through them: remote workers; slow performance in Citrix or over VPN; Windows or mainframe applications where there's no coverage at the moment; workflows that incorporate some kind of multi factor authentication; organizations that want to build tests quickly and pass ownership back to the business stakeholders, or whose current monitoring tools only have basic integrations with Splunk.
And what I would say, as a bit of a sidebar here, is that every single one of the customers we work with has an incumbent: they have a New Relic, a Dynatrace, an AppDynamics, a Datadog position. And our positioning is not to displace those capabilities, because they're very good at what they do on the real user monitoring side. And if it's just synthetic monitoring for browsers, then that's fine as well. Our play is to come in and help you get application assurance around the applications or the platforms where you don't have coverage, to give you a solution that has been purpose built for Splunk, and to allow you to build those tests really, really quickly.
So talking about how we integrate with Splunk, I've just got a few slides to talk about how this now starts to flow into an ITSI world, and an AI-Ops type use case. So here we've got a very, very basic slide where on the top, you can see the end user experience, which is 2 Steps. And on the bottom, you can see memory utilization, which is data that's being captured by Splunk. And very quickly, you can see a correlation between memory or CPU leakage, which is impacting the checkpoint performance on the top.
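That kind of correlation can be sketched numerically. Below is a minimal Pearson correlation over two made-up series, host memory utilization versus a 2 Steps checkpoint duration; the data and the threshold of "strong correlation" are illustrative only.

```python
from statistics import mean

# Illustrative only: quantify the correlation between an infrastructure
# metric and a synthetic checkpoint timing.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

memory_pct   = [40, 45, 55, 70, 85, 95]  # host memory utilization (%)
checkpoint_s = [8, 9, 11, 16, 24, 33]    # checkpoint duration (seconds)

r = pearson(memory_pct, checkpoint_s)
print(round(r, 2))  # strongly positive, close to 1.0
```

A coefficient near 1.0 is exactly the visual story Simon describes: as memory utilization climbs, the end user checkpoint slows in step with it.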
If we look at ITSI for a moment, this is how synthetic monitoring can start to provide a really powerful KPI when it comes to trying to predict when things may fall over. And when you're thinking about prediction, to train models, you need historical data. You need historical data to train those models to then say there is a high propensity that something is going to happen because I've seen it happen previously. So what we're looking at here is on the top, we're looking at storefront response time, which is the 2 Steps data that we're pushing into ITSI as a KPI. And then on the bottom, we can see that the Active Directory is one of the components that Splunk is monitoring, that's having a problem. And you can see that on the tree on the left hand side as well.
So when we start to dig into this further, we can see okay, the storefront login time started to have a problem at 6:00 PM. Let's take the time back further and see what components started to be impacted before that. We can see the active directory, which is the authentication response. We've already understood that active directory is linked to the storefront because these are the assets that we've got in ITSI. They are intrinsically linked. So we can see the Active Directory started having a problem around 2:00. And then we can see even before that the disk I/O read ops was having the problem around 1:00.
What we can see here is that that was the component that started to act in an abnormal way, which led to the storefront login issues that happened six hours later. So here is an example of how you can start to get ahead of this. Building this into a model, it would start to alert you and say, "I've seen this problem happen over and over again: when disk I/O read ops starts to have an issue, then six hours later there's a problem with the storefront login." So you can get on to it before it actually impacts the users.
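A simple leading-indicator rule along those lines might look like this sketch. The threshold, the six-hour lag, and the sample data are all illustrative assumptions, not values from Queensland Health or from ITSI's predictive models.

```python
from datetime import datetime, timedelta

# Illustrative only: if disk read ops spike past a threshold, flag a
# predicted storefront login problem roughly six hours later.
DISK_READ_OPS_THRESHOLD = 5000  # assumed anomaly level
LEAD_TIME = timedelta(hours=6)  # assumed lag, per the example above

def predict_storefront_issues(disk_samples):
    """Given (timestamp, read_ops) samples, return predicted times at
    which storefront login may degrade."""
    return [ts + LEAD_TIME for ts, ops in disk_samples
            if ops > DISK_READ_OPS_THRESHOLD]

samples = [
    (datetime(2021, 3, 1, 1, 0), 7200),  # anomaly at 1:00
    (datetime(2021, 3, 1, 2, 0), 3100),  # normal
]
print(predict_storefront_issues(samples))  # predicts trouble around 7:00
```

ITSI's actual approach trains models on historical KPI data rather than using a fixed rule like this, but the payoff is the same: an alert hours before users feel the impact.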
Some of the key takeaways. Rapid time to value: installation in under two hours, user journeys built in minutes. Immediate benefits: you can start to baseline and benchmark performance, start to get your SLAs, and then get that regular heartbeat on application performance. As I've just mentioned, it's a really valuable KPI for ITSI if you're using that component of Splunk. And it's flexible and agile, because we're not using Selenium. We completely disrupted that, using visual recognition as the backbone for the automation, which allows us to work from mobile to mainframe, Windows to web, using non-technical resources.
Thank you very much for your attention. Open up for questions. Many thanks to Pierz for running through the demo. And again, to the Rhondos team for allowing us this opportunity to show you 2 Steps. I hope it's been useful.
That was excellent. Thank you so much, Simon and Pierz for that demonstration and overview of the product. We'll hold on here for just a few minutes and see if any of the attendees today have some questions on the functionality, the product itself or how they might be able to get started evaluating this technology. Let's hang on for just a few minutes and see if we have some questions come in.
There was one question that was emailed over by Paul. And Paul is asking, "Can 2 Steps also check the SSL expiry date of a website?"
It's not really designed for that. It's designed for visual recognition, so it can't really probe into the causes of why specific steps are failing. You'd need to dig into that more deeply yourself. So not out of the box, no.
And, Paul, maybe some additional context around your question would be beneficial to the team as well. Are you looking to get a preliminary alert when the website is not reachable, or experiences some kind of connection issue? Potentially identifying those things in advance? Is this related to an SAP context or something else? Let us know if you have any additional detail that you'd like to qualify that question with.
The context is that, historically, we've done the standard, sort of last-gen approach: look up a website, look for a piece of text, see if the text is there, that kind of thing. One of the significant things we've turned up is that a lot of the time, the problem is the SSL certificate's expiry date, and there are very few monitoring capabilities out there that pick that up. So given that 2 Steps has that capability of visiting the website, the certificate expiry would be something potentially visible along the way, which I'm guessing would be possible through the image recognition.
Yeah, fantastic answer. And as you say, I could probably even do it now, just based on the recognition capability that when a certificate has a problem, the icon does change to say that the certificate's expired. So I could do it, not necessarily the nicest of solutions, but I could do it. But yeah, I'm happy with [crosstalk 00:50:47]-
So if you have some text on the page, or if you have to add some text or an image that you can recognize, then yes you could identify that.
Yeah, because I can specify the specific browser can't I?
Yep. In which case I can get an image that will tell me that problem, so yes, I can do it. That's not my ideal use case, though. It was just one of those... sort of cherry-on-top-of-the-cake kind of capabilities.
And I think to your point, Paul, there is a common image, depending on the web browser, that will show up in the case of an expired certificate. In Chrome, for example, I know there's a little exclamation point in a triangle that says "Not secure" whenever the SSL certificate expires. Theoretically, you could build a test that looks for that image: a failure means it didn't find the expired-certificate warning, and a successful completion means it did, or vice versa. So there are a few options to potentially do that with the image-based recognition functionality that comes out of the box with the product.
And Pierz, would using if statements here be relevant? Because when you're building those workflows, those user journeys, one of the things that we didn't show is the ability to introduce if statements. So if you did see the warning sign, then perform one set of actions, and if you don't, then perform a secondary set of actions.
Yes. Actually, with that issue that happened with Chrome, where we saw the dialog come up saying "You've got updates," for example: if I'd had time, the way to handle that would have been to create an if statement at that point and say, "If this image appears, then click the button that says OK," and then proceed with the rest of the test. The same type of functionality could be applied to this particular use case with SSL certificates, to say, "If this image appears, then proceed in this particular way."
And one of the things that we're adding in with version 5 is the ability to issue a particular type of error if you see a particular image. So that would be a perfect example: you could say, if you see this SSL-type problem, then fail with this particular error code, so you immediately know what the problem was without even having to replay the test video in the Splunk results.
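As an aside for readers: outside of any synthetic-monitoring tool, certificate expiry can also be checked directly. Here is a minimal Python sketch using only the standard library; it is an illustrative alternative to the image-recognition approach discussed above, not a 2 Steps feature, and the host name in the example is a placeholder.

```python
# Illustrative alternative, not a 2 Steps feature: check a site's TLS
# certificate expiry directly using Python's standard library.
import socket
import ssl
from datetime import datetime, timezone

def parse_not_after(not_after):
    """Parse the 'notAfter' field of a peer cert, e.g. 'Jun  1 12:00:00 2026 GMT'."""
    dt = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    return dt.replace(tzinfo=timezone.utc)

def days_until_expiry(host, port=443, timeout=10):
    """Connect to `host` over TLS and return the days until its certificate expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    remaining = parse_not_after(cert["notAfter"]) - datetime.now(timezone.utc)
    return remaining.days

# Example (placeholder host): days_until_expiry("example.com")
```

A scheduled check like this could raise an alert when the remaining days drop below a threshold, complementing the visual-recognition tests described in the discussion.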
Any other questions?
Yeah, someone's raising their hand, there. Paul.
Yeah. I've used Dynatrace and seen what it can do, but now that I've seen 2 Steps I'm a lot more excited. From a location perspective, in order to determine whether I've got an application issue or potentially a network issue, I might want to run from around Ballarat or Horsham or Warrnambool, or a different state. Is there any kind of capability such that, if I've got a PC or something in each location...
Yep. Yes, we support the ability to run individually scheduled tests from different locations. As long as your back end is set up at those locations, we can run the tests from those specific locations. You also have the ability to use variables: for example, if different logins are required at different locations, you can set variables on each of the individual scheduled runs to input different credentials at the login stage. So yes, in short, yeah.
Yeah, the way that it's implemented is, there are two components to 2 Steps. The first is the front end, which is an application that lives on Splunkbase and is certified for Splunk Cloud and Splunk Enterprise, so whether you're on-prem or in the cloud, we can work with either. The front end is basically what you saw Pierz driving; that's essentially the user interface. The back-end component sits on a Linux virtual machine. It's a set of microservices that communicates with the front end through a message bus, and it's super lightweight: it can sit on a Linux VM at a physical location, or we can host it in AWS. Going back to the Queensland Health use case, that was a Linux virtual machine installed at each of the different hospital sites, in different geographical locations.
So absolutely. Bread and butter in terms of what we do.
Absolutely fantastic. That's all I can ask for. Thank you.
No problem. In terms of the way that we typically engage with our customers: as I mentioned, we find that the first step of the process, after a discovery session to understand the requirements, is to work in partnership with the organization to set up a proof of concept, which typically runs between four and eight weeks. That's why we're so excited about the partnership we have with Rhondos in North America, because they're going to be our experts on the ground, with 2 Steps supporting. So if you've seen anything that's of interest, or you feel there's a pain point within your organization that we may be able to address, then we're very happy to work in partnership with you to run a proof of concept to demonstrate the capability and make sure it's fit for purpose.
All right, well, we are right at the top of our time. So thank you all again for spending your afternoon or morning with us today. We appreciate it very much. If you have any questions or concerns, you can definitely reach out to Simon at 2Steps.io or myself at info at rhondos.com. We would be more than happy to answer any questions you have or provide you some additional demonstrations of the software. Thanks again and have a great rest of your day.
Thank you, everybody. Thanks for your time and your attention. Have a great day.