Generative AI, a subcategory of AI, has recently reached a state of maturity that clearly demonstrates its potential to augment and supplant tasks previously performed by humans. The public release of ChatGPT, a Generative AI Large Language Model (LLM), in November 2022 spurred another phase of rapid progression. Millions of human interactions are continuously tuning the models, and task performance is improving daily. Generative AI alone is driving a revolution that will shape the way organizations deliver services to customers and manage operations.
Machine Learning (ML) and Artificial Intelligence (AI) capabilities have advanced substantially over the past five years. ML and AI features are being integrated into cloud services, and organizations are regularly deploying ML/AI capabilities internally. Organizations will need to host, operate, maintain, train, and continuously tune their own models. However, the current reality is that the vast majority of AI model development projects never make it into production.
The question many organizations face today is how to get started building Generative AI capabilities that are specifically tailored to their organization and business objectives. Now more than ever, a strategic approach is required to ensure AI investments are cost-efficient, have a clear value proposition, and will yield a measurable Return on Investment (ROI). The good news is that CyberWorx has you covered every step of the way! We have deep expertise and extensive experience helping organizations embrace step-change technology transitions. To demonstrate our commitment to customer service, we have created a strategic whitepaper that delivers critical insights to help our customers navigate the Generative AI revolution.
In addition to our strategic whitepaper, the CyberWorx R&D team has been evaluating a number of open source projects as we build the capability and infrastructure to help our customers embrace the AI revolution. One of our favorite projects is AnythingLLM: an open-source, multi-user, ChatGPT-style application that works with multiple LLMs, embedders, and vector databases. AnythingLLM supports unlimited documents, messages, and users in one privacy-focused app. Our experience working with the AnythingLLM product and project has provided substantial value to our efforts to digest the rapidly evolving Generative AI ecosystem. Internally, we have created an automated build process to deploy new versions of the open source project in a standalone EC2 instance, which hosts ChromaDB and AnythingLLM as integrated containers.
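To illustrate the container topology described above, the following Docker Compose sketch runs ChromaDB and AnythingLLM side by side on a single host. The image names, ports, and volume paths shown here are illustrative assumptions based on the projects' public container images, not the exact configuration of our build:

```yaml
# Sketch only: image tags, ports, and storage paths are assumptions,
# not the exact configuration used in the CyberWorx AMI build.
services:
  chromadb:
    image: chromadb/chroma            # ChromaDB vector database
    ports:
      - "8000:8000"                   # Chroma's default HTTP port
    volumes:
      - chroma-data:/chroma/chroma    # persist vector data across restarts

  anythingllm:
    image: mintplexlabs/anythingllm   # AnythingLLM application
    ports:
      - "3001:3001"                   # AnythingLLM's default web UI port
    volumes:
      - anythingllm-data:/app/server/storage
    depends_on:
      - chromadb                      # bring the vector DB up first

volumes:
  chroma-data:
  anythingllm-data:
```

In a deployment like this, AnythingLLM would be pointed at the ChromaDB service through its vector database settings, and only the web UI port would be exposed beyond the host (via a reverse proxy, as described below).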
We have also augmented the build to incorporate cyber security enhancements, including:

- building from the latest version of Amazon Linux
- automated patching configured with anacron
- SELinux configured to allow the NGINX proxy to serve HTTP/HTTPS connections
- firewalld enabled and configured for HTTP/HTTPS
- NGINX proxy integration with an HTTPS redirect configuration
- script-automated SSL certificate generation and installation
- a randomly generated one-time password (OTP) used for initial login
- chrony time synchronization configured against Amazon internal time servers
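The NGINX proxy and HTTPS redirect pieces of the hardening above can be sketched as a server configuration along these lines. Certificate paths and the upstream port are illustrative assumptions (AnythingLLM's web UI commonly listens on 3001), not the exact values from our build:

```nginx
# Sketch: redirect all plain-HTTP traffic to HTTPS.
server {
    listen 80;
    server_name _;
    return 301 https://$host$request_uri;
}

# Sketch: terminate TLS and proxy to the AnythingLLM container.
# Certificate paths and the upstream port are assumptions.
server {
    listen 443 ssl;
    server_name _;

    ssl_certificate     /etc/nginx/ssl/server.crt;
    ssl_certificate_key /etc/nginx/ssl/server.key;

    location / {
        proxy_pass http://127.0.0.1:3001;
        proxy_http_version 1.1;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
        # Allow WebSocket upgrades for the chat UI
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

On an SELinux-enforcing host such as Amazon Linux, a proxy configuration like this typically also requires allowing the web server to make outbound network connections (for example, via the `httpd_can_network_connect` boolean), which is the kind of SELinux adjustment referenced above.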
In the spirit of open source, we host a free AMI version of our build on the AWS Marketplace, and we update it periodically. Check out the latest version of the AnythingLLM Private Deployment, and please reach out if you have any questions about our AMI or if you would like a copy of our strategic whitepaper.