Concurrent Users at a Time

There are two points businesses need to realize before deploying a bot:

1. A bot is expected to improve your customer experience by delivering fast service, so response time is one of the key metrics for measuring its effectiveness and quality.

2. Once you release it publicly, you'll find a massive number of users arriving at your bot at once.

These two points are connected: the scope of your audience affects the time your bot takes to answer. This is what we call concurrent users.

However, audience size is not the only variable that affects response time. The other comes from this question: will your bot be capable of accepting many questions at once and answering them in time? The answer is yes, IF you are willing to prepare the capacity needed. A 3Dolphins-based chatbot can answer a question in less than a second, but note that owning a chatbot comes with the consequence of it being used by masses of users at once, and that requires stable infrastructure to facilitate. The right fit of infrastructure mediates the relationship between audience size and response time.

Indeed, you will never know when enough is enough until you do capacity planning to measure the actual need. So, back to square one: decide how big and deep your bot's role will be, and then reflect that in your infrastructure, adjusting it if necessary. Previously, we blogged about supervised learning, a process that helps you parse your company's tangled data into public-worthy information. Supervised learning can also help you measure the capacity you need – here, it is like fulfilling a multi-department vision in building the ideal bot for your company.
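As a rough illustration of how audience size and response time translate into capacity, the sketch below uses Little's Law (concurrent requests = arrival rate × response time). All figures and the per-server capacity are hypothetical examples, not 3Dolphins product numbers:

```python
import math

# Capacity-planning sketch using Little's Law:
#   concurrent requests in flight = arrival rate * average response time
# All numbers below are hypothetical, for illustration only.

def concurrent_users(arrival_rate_per_sec: float, response_time_sec: float) -> float:
    """Average number of requests in flight at any moment (Little's Law)."""
    return arrival_rate_per_sec * response_time_sec

def servers_needed(in_flight: float, capacity_per_server: float) -> int:
    """Round up: each server is assumed to handle a fixed number of in-flight requests."""
    return math.ceil(in_flight / capacity_per_server)

# Example: 200 questions/second, each answered in ~0.8 s on average.
in_flight = concurrent_users(200, 0.8)
print(in_flight)                      # 160.0 concurrent requests
print(servers_needed(in_flight, 50))  # 4 servers if each handles 50 in flight
```

A faster bot (smaller response time) directly lowers the number of requests in flight, which is why response time and infrastructure sizing are two sides of the same planning exercise.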

Capacity planning lets you sink into the main goals, audience, and role of the bot-to-be. As a result, capacity planning produces two prominent points regarding concurrent users at a time:

1. Knowledge

A bigger knowledge base demands sufficient server memory, whereas a small amount of knowledge might be covered by the server memory you already have. The consideration depends on how big the scope and the audience approaching the bot are.

In supervised learning, remember the principle of parsing tangled information, and make sure you organize the knowledge accordingly. A messy knowledge base leaves the bot processing information longer than expected, as it struggles to pick the right answer from all over the place.

2. Infrastructure (server memory, channels, network)

In the previous point, we touched briefly on how knowledge affects server memory. Channels, meanwhile, relate to your commitment to covering the customer's touchpoints; the question is whether you need a bot on every social media channel you have. The more channels carrying the bot, the bigger the infrastructure capacity you may need to prepare.

To complete your coverage of customer touchpoints, a 3Dolphins-based bot provides 17 channels to connect with:

Facebook (Messenger & Comment)
3Dolphins Live Chat
Ecentrix
Twitter (Mention & DM)
Twilio SMS
Skype
Line
YouTube (Comment)
Instagram (Comment)
Telegram
WhatsApp
MS Teams
Email
Cisco Finesse
Google Play Store
Apple Store
Generic Adaptor

About the network… you know how much this matters, right? Make sure you have the right bandwidth to handle your needs.
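To make the bandwidth point concrete, here is a back-of-the-envelope sketch. The message size and overhead factor are hypothetical assumptions chosen for illustration, not measured values:

```python
# Back-of-the-envelope bandwidth estimate for a chat bot.
# All inputs are hypothetical examples.

def bandwidth_mbps(messages_per_sec: float,
                   avg_message_bytes: int,
                   overhead: float = 2.0) -> float:
    """Estimated bandwidth in megabits per second.

    overhead is a fudge factor covering protocol headers, TLS,
    and the reply traffic going back to the user.
    """
    bits_per_sec = messages_per_sec * avg_message_bytes * 8 * overhead
    return bits_per_sec / 1_000_000

# Example: 500 messages/s, ~1 KB each, 2x overhead.
print(round(bandwidth_mbps(500, 1024), 2))  # ~8.19 Mbps
```

Even a rough estimate like this tells you whether your current link is in the right order of magnitude before peak traffic proves it the hard way.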

Deploying a bot is not mere procurement; it is building a living machine with real responsibilities. Make sure the bot's existence fulfils the need to improve customer experience in whatever scope you decide, stays consistently aware of possible concurrent-user scenarios, and aligns with the infrastructure you have or might develop in future endeavours.

To know more about concurrent users at a time, you can reach our sales team or drop any inquiries at [email protected]
