Replit Applications

Browse applications built on Replit technology. Explore PoC and MVP applications created by our community and discover innovative use cases for Replit technology.

WebML Assist

Elevate the realm of machine learning with "WebML Assist." This innovative project integrates the power of WebGPU with the capabilities of the BabyAGI framework to offer a seamless, high-speed experience for machine learning tasks. "WebML Assist" empowers users to build, train, and deploy AI models effortlessly, leveraging the parallel processing of GPUs for accelerated training. The platform intuitively guides users through data preprocessing, model architecture selection, and hyperparameter tuning, all while harnessing the performance boost of WebGPU. Experience the future of efficient and rapid machine learning with "WebML Assist."

Technologies Used:
- WebGPU
- OpenAI APIs (GPT-3.5, GPT-4)
- BabyAGI
- Pinecone API (for task management)
- FineTuner.ai (for no-code AI components)
- Python (for the backend)
- Redis (for data caching)
- Qdrant (for efficient vector similarity search)
- Generative Agents (for simulating human behavior)
- AWS SageMaker (to quickly build, train, and deploy machine learning models)
- Reinforcement learning (an area of machine learning concerned with how intelligent agents learn by maximizing reward)

Categories: Machine Learning, AI-Assisted Task Management

Benefits: "WebML Assist" brings together the capabilities of WebGPU and AI frameworks like BabyAGI to provide an all-encompassing solution for ML enthusiasts. Users can seamlessly transition from data preprocessing to model deployment while harnessing the GPU's power for faster training. The incorporation of AI agents ensures intelligent suggestions and efficient task management. By integrating AI, GPU acceleration, and user-friendly interfaces, "WebML Assist" empowers both novice and experienced ML practitioners to unlock the true potential of their projects, transforming the way AI models are built, trained, and deployed.
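The description names LLM-guided hyperparameter tuning as one of the platform's steps. A minimal sketch of what such a step could look like, asking an OpenAI chat model to propose hyperparameters from a dataset summary, is shown below; the prompt, model choice, and dataset summary are illustrative assumptions (using the pre-1.0 `openai` Python client), not the team's actual implementation.

```python
# Hypothetical sketch: ask GPT-3.5 to suggest hyperparameters for a training run.
# Assumes the pre-1.0 `openai` Python package and an OPENAI_API_KEY in the environment.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

dataset_summary = "10,000 labeled images, 32x32 RGB, 10 classes"  # illustrative input

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "You suggest sensible training hyperparameters."},
        {"role": "user", "content": f"Dataset: {dataset_summary}. "
                                    "Suggest a learning rate, batch size, and number of "
                                    "epochs for a small CNN, as JSON."},
    ],
    temperature=0.2,
)

print(response.choices[0].message["content"])
```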

GPU Titans
Technologies: WebGPU, OpenAI, BabyAGI, FineTuner.ai, GPT-3.5, Redis, Qdrant, Generative Agents, AWS SageMaker, Reinforcement Learning

The Voich

"The Voich" is a cutting-edge technology aiming at making book-reading and story telling easier . Now , you can hear a book while you work , play or just relax on your couch. With the power of Eleven Labs API , its now tremendously easy to listen to a book , ensuring that the speech is not robotic. This technology can be a favorite tool for audience of all age groups as you just have to upload a book that's all! The programming language used to build this project is Python and Streamlit library in particular.One of the main advantages of Streamlit is its ease of use. It provides a simple API that enables users to create intuitive and interactive applications with just a few lines of code. This makes it an ideal tool for small data apps or for prototyping larger apps. Streamlit also comes with a range of pre-built components, such as charts and widgets, that can be easily customized to suit your needs. This makes it easy to add functionality to your app without having to write complex code from scratch. I like how straightforward it is to not only build a basic data app for your own analyses but also the streamlined (pun intended) deployment process for getting it in the view of your team or a wider audience. There is also an expanding library of additional third-party components which allows for further extending the features of Streamlit. For example, the “Annotated Text” component is a great addition to an NLP app, whilst being able to use Folium is ideal if you are looking to do geospatial analysis. Eleven Labs API is a cutting-edge solution that enables the generation of high-quality voice overs through artificial intelligence. By leveraging powerful machine learning models, the API can convert text into natural-sounding speech. The technology behind Eleven Labs API ensures that the generated voice overs are clear, expressive, and suitable for a wide range of applications.

The Codestars
Technologies: ElevenLabs

Captionize

Captionize is a cutting-edge AI solution that automates the generation of video descriptions, empowering content creators on YouTube to enhance their productivity, expand their reach, and unlock new revenue opportunities. By harnessing the power of artificial intelligence, Captionize streamlines the creation of video descriptions, saving creators valuable time and providing them with a competitive edge in the digital landscape.

YouTube content creators often struggle with crafting engaging video descriptions, limiting their ability to focus on quality content and channel growth. Manual creation is time-consuming and can result in inconsistent or subpar descriptions that hinder outreach efforts and reduce audience discovery.

Leveraging advanced AI algorithms, Captionize automatically generates compelling video descriptions. By analyzing the transcript of the video, Captionize creates informative and engaging descriptions tailored to maximize SEO performance, ensuring higher search rankings, increased organic traffic, and improved visibility on YouTube.

Captionize presents a compelling business opportunity for both the product and its users. By saving time and offering unique benefits, Captionize is poised to capture a significant market share, providing substantial profits and success to content creators in the growing industry. In conclusion, Captionize revolutionizes video descriptions for YouTube content creators, offering a time-saving, AI-driven solution that optimizes SEO, expands reach, and unlocks new revenue opportunities. With its unique features and benefits, Captionize is well-positioned to thrive in the content creation market, delivering significant profits and success for both the product and its users.
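One plausible shape for the transcript-to-description step, sketched against the PaLM text API listed in the project's technology tags, is shown below. The use of `youtube_transcript_api` for fetching the transcript, the prompt wording, and the length cap are assumptions for illustration, not the team's actual pipeline.

```python
# Hypothetical sketch: turn a YouTube transcript into an SEO-friendly description with PaLM.
# Assumes the `google-generativeai` and `youtube-transcript-api` packages and a PALM_API_KEY.
import os
import google.generativeai as palm
from youtube_transcript_api import YouTubeTranscriptApi

palm.configure(api_key=os.environ["PALM_API_KEY"])

def describe_video(video_id: str) -> str:
    # Fetch and flatten the video's transcript.
    segments = YouTubeTranscriptApi.get_transcript(video_id)
    transcript = " ".join(seg["text"] for seg in segments)[:8000]  # rough length cap

    prompt = (
        "Write an engaging, SEO-optimized YouTube description (120-200 words) "
        "with 3-5 hashtags for a video with this transcript:\n\n" + transcript
    )
    completion = palm.generate_text(
        model="models/text-bison-001",
        prompt=prompt,
        temperature=0.7,
        max_output_tokens=400,
    )
    return completion.result

print(describe_video("dQw4w9WgXcQ"))  # any public video ID with captions
```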

The Vertex Titans
Technologies: Generative Agents, PaLM, Model Garden, Text Generation Web UI

Prompt Consultant

In the rapidly progressing world of large language models (LLMs) like GPT-4, the art of crafting effective prompts is crucial for harnessing their potential. However, the dynamic nature of LLM research presents a challenge: keeping up with the continuous influx of new prompting techniques. This is where our project, the "Prompt Consultant", steps in.

The Prompt Consultant aims to guide users in generating more effective prompts, not by having them chase the latest research, but by leveraging the power of the LLMs themselves. We exploit the LLM's capacity to assimilate the best prompting resources and provide insights for improving prompts. The challenge is that LLMs, due to their static training data, are not aware of the latest prompting tricks. Our solution is to use in-context learning, incorporating the most recent prompting resources directly into the prompt.

Anthropic's long-context API is a critical component of this endeavor. It's impractical to train a new model every time a new prompting method emerges, and the versatility of user queries makes common vector search methods insufficient. The long-context API allows us to include extensive relevant prompting information directly in the context. Our proof-of-concept demo uses the latest resources from learnprompting.org, embedding them in the model's context. Users can then consult our bot, implemented on Anthropic's Claude-v1.3-100k model, to enhance their prompts. Our preliminary results show promise, indicating LLMs' potential to stay in step with rapid advancements in their field.

In essence, the Prompt Consultant bridges the gap between the rapid progression of LLM research and practical, effective usage of these models. By leveraging the LLMs themselves, we aim to make these technologies more accessible, democratizing the benefits of AI research. Our project foresees a future where anyone, regardless of their expertise, can generate high-quality outputs from these models through optimized prompting.
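A minimal sketch of the in-context approach described above, using Anthropic's 2023-era completions client and the Claude-v1.3-100k model named in the text, might look like this. The local file of prompting resources and the instruction wording are illustrative assumptions, not the team's actual code.

```python
# Sketch: place recent prompting guides in Claude's 100k context and ask it to improve a prompt.
# Assumes the 2023-era `anthropic` Python client and an ANTHROPIC_API_KEY in the environment.
import os
import anthropic

client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

# Hypothetical local snapshot of prompting resources (e.g. gathered from learnprompting.org).
with open("prompting_resources.txt", encoding="utf-8") as f:
    resources = f.read()

user_prompt = "Summarize this legal contract."  # the prompt the user wants to improve

completion = client.completions.create(
    model="claude-v1.3-100k",
    max_tokens_to_sample=1000,
    prompt=(
        f"{anthropic.HUMAN_PROMPT} Here are up-to-date prompting guides:\n\n{resources}\n\n"
        f"Using these techniques, rewrite and improve the following prompt, then explain "
        f"the changes you made:\n\n{user_prompt}{anthropic.AI_PROMPT}"
    ),
)
print(completion.completion)
```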

long long int
Technologies: Anthropic Claude, GPT-4

Maverick AI

Maverick REACT offers artificial intelligence integration for emergency situations. Our service uses AI together with the necessary event information provided by government officials and acts as an assistant that delivers key protocols and information to citizens. The AI service is accessed via SMS or a web portal, offering a solution that works without internet access.

How does our service work? When an emergency situation occurs, such as a flood, fire, or earthquake, our service sends an SMS message or makes a voice call to numbers registered in a database, or the citizen can contact a number provided by the authorities. The message or call contains information about the type and severity of the emergency, preventive measures that should be taken, and resources available in the area. The user can respond to the message or call with specific questions about their personal situation or request additional help. Our service uses AI algorithms to process responses and offer personalized, up-to-date advice.

REACT has several advantages over traditional emergency alert and response systems. First, it does not depend on the internet, which means it can function even when there are power outages or problems with mobile networks. Second, the REACT service is interactive and adaptable to the individual needs of each user. Third, it uses reliable, verified sources of information provided by the government or other authorized organizations. Finally, REACT is fast and efficient at sending and receiving messages or calls at scale.

Our goal is to contribute to creating a safer and more resilient world in the face of emergency situations through the innovative and intelligent use of technology. We believe that our service can save lives and reduce the suffering caused by disasters. If you want to know more about our service or how to register for it, contact us. We are Maverick AI.
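The SMS-facing side of a service like this could be sketched as a webhook that hands the citizen's question, plus the official event information, to Cohere's generate endpoint (Cohere appears in the project's technology tags). Twilio is used here purely as an example SMS gateway; it, the route name, and the prompt are assumptions, not details from the team.

```python
# Hypothetical sketch: answer emergency questions over SMS with Cohere behind a Flask webhook.
# Assumes the `cohere`, `flask`, and `twilio` packages; Twilio is an illustrative SMS gateway.
import os
import cohere
from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)
co = cohere.Client(os.environ["COHERE_API_KEY"])

# Illustrative event bulletin supplied by the authorities.
EVENT_INFO = "Flood warning for the riverside district; shelters open at the civic center."

@app.route("/sms", methods=["POST"])
def sms_reply():
    question = request.form.get("Body", "")
    answer = co.generate(
        model="command",
        prompt=(
            f"Official emergency bulletin: {EVENT_INFO}\n"
            f"Citizen question: {question}\n"
            "Reply with short, calm, actionable safety guidance:"
        ),
        max_tokens=120,
        temperature=0.3,
    ).generations[0].text.strip()

    twiml = MessagingResponse()
    twiml.message(answer)
    return str(twiml)

if __name__ == "__main__":
    app.run(port=5000)
```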

MaverickAI
Technologies: Cohere, Qdrant

Aicademy

Aicademy is a cutting-edge, personalized online learning platform that uses AI to create immersive courses with stunning images. With quizzes, summaries, and an interactive chat box, Aicademy delivers an exceptional, engaging learning experience.

Aicademy is a revolutionary AI model that creates a fully personalized course for each user based on their unique question of interest. Not only does Aicademy generate high-quality text-based content, it also uses DALL-E-2 to create stunning and immersive images that enrich the learning experience.

But that's not all! Aicademy also includes a quiz feature that enables users to test their knowledge and track their progress. And for those who want a quick summary of the course, Aicademy provides an easily digestible synopsis that ensures maximum comprehension.

The interactive chat box feature is where Aicademy really shines. Users can ask questions and get immediate answers, allowing for a truly interactive learning experience. Whether you're struggling with a concept or looking for additional information, Aicademy's chat box is always available to help.

In short, Aicademy is the ultimate tool for those who are serious about online learning. With its personalized course creation, immersive images, quiz feature, and chat box, Aicademy makes learning more engaging, interactive, and effective than ever before.
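A rough sketch of the generation step, pairing a ChatGPT call for the course text with a DALL-E-2 call for an illustrative image (shown with the pre-1.0 `openai` client), is given below; the prompts, image size, and function shape are assumptions rather than the project's actual code.

```python
# Hypothetical sketch: generate a short personalized course plus an illustration.
# Assumes the pre-1.0 `openai` Python package and an OPENAI_API_KEY.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def build_course(question: str) -> dict:
    # 1) Course text, quiz, and summary from ChatGPT.
    chat = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You write concise, well-structured mini courses."},
            {"role": "user", "content": f"Create a short course, a 3-question quiz, and a "
                                        f"summary about: {question}"},
        ],
    )
    course_text = chat.choices[0].message["content"]

    # 2) A DALL-E-2 image to accompany the course.
    image = openai.Image.create(prompt=f"Illustration for a lesson about {question}",
                                n=1, size="512x512")
    return {"course": course_text, "image_url": image["data"][0]["url"]}

result = build_course("How do volcanoes form?")
print(result["image_url"])
print(result["course"][:500])
```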

The Cyber Savvy Ninjas
Technologies: ChatGPT, DALL-E-2

scripttwolife

Scrip2life is a new tool that utilises Co:here's large language model to save time by breaking down movie scripts and generating summaries, character traits, and backstories for actors to use as inspiration while preparing for their upcoming auditions. Our goal is to improve their odds of finding and landing suitable and inspiring roles to portray stories for the viewers. We achieve this by streamlining script comprehension with AI tools, so that actors can focus on standing out with the depth of understanding and world-building they portray within the limited preparation time before an audition or role.

Our solution is validated by DeepMind's discussions with industry professionals evaluating their co-writing system, Dramatron. The writers expressed that they would rather use the system for "world building," for exploring alternative stories by changing characters or plot elements, and for creative idea generation than to write a full play. We focused on building Scrip2life based on this market validation, with the additional advantage that it is accessible to everyone, not just theatre professionals.

Our MVP targets budding and working actors who are looking for ways to save time while applying to hundreds of casting calls throughout the year. We additionally provide inspiration for them to immerse themselves in their characters and scripts for upcoming auditions and roles.

Scrip2life was written collaboratively in the Replit IDE. The frontend is written in HTML, CSS, and JavaScript. We used Flask to bring the code to life, enabling the calls to the Cohere API. We prioritised the Co:here Generate API due to the creative nature of Scrip2life.
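Since the description mentions Flask relaying calls to the Co:here Generate API, a stripped-down version of that path might look like the sketch below; the route, prompt wording, and model name are illustrative assumptions rather than the project's actual code.

```python
# Hypothetical sketch: a Flask endpoint that asks Cohere to break down an uploaded script.
# Assumes the `cohere` and `flask` packages and a COHERE_API_KEY in the environment.
import os
import cohere
from flask import Flask, request, jsonify

app = Flask(__name__)
co = cohere.Client(os.environ["COHERE_API_KEY"])

@app.route("/breakdown", methods=["POST"])
def breakdown():
    script = request.form.get("script", "")[:6000]  # rough cap for the demo
    character = request.form.get("character", "the lead role")

    response = co.generate(
        model="command",
        prompt=(
            f"Movie script excerpt:\n{script}\n\n"
            f"Write a plot summary, the key personality traits of {character}, "
            "and a short backstory an actor could use to prepare for an audition."
        ),
        max_tokens=400,
        temperature=0.8,
    )
    return jsonify({"breakdown": response.generations[0].text})

if __name__ == "__main__":
    app.run(port=5000)
```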

we-r-artiste

Project Eval

Eval aims to address the problem of subjectively evaluating test answers. Traditionally, this task has been carried out manually by human graders, which can be time-consuming and prone to bias. To address this issue, the project utilizes Cohere-powered APIs to automate the evaluation process. The use of Cohere APIs allows for the integration of advanced natural language processing techniques, enabling the system to accurately understand and analyze the content of test answers. The custom model built upon these APIs then scores the answers based on suitable metrics, which can be tailored to the specific requirements of the test or assessment.

One potential application of this technology is in the field of education, where it could be used to grade assignments or exams in a more efficient and unbiased manner. It could also be utilized in professional settings for evaluating job applications or performance reviews. In addition to increasing efficiency and reducing bias, automated evaluation has the potential to provide more consistent and reliable scoring. This can help ensure that test-takers receive fair and accurate assessments of their knowledge and skills.

The model evaluates answers based on four major metrics:

- Semantic Search: the primary scoring strategy of Eval. It is used to semantically understand the given answer and evaluate it based on content rather than simply scoring on textual similarity. Cohere Embed was used to generate embeddings for five suggested answers to the question and for the answer being checked. We then find the distance from the answer to its nearest neighbor among the five suggestions, and this distance is used to grade the answer.
- Duplication Check: partially correct answers that duplicate text tended to get higher similarity scores than answers without duplication. To stop students from using this exploit to gain extra marks, a duplication checker was implemented based on the Jaccard similarity between sentences within the answer.
- Grammar Check: this strategy checks the grammar of the answer and assigns a score based on the number of grammatical errors. We used the Cohere Generate endpoint to produce a grammatically correct version of the answer, then check the cosine similarity of the generated version with the original to determine whether the original was grammatically correct.
- Toxicity Check: this detects toxic content in the answer and penalizes the answer if it is toxic. We trained a custom classification model on Cohere using the Social Media Toxicity Dataset by SurgeAI, which achieved 98% precision on the test split.

We also implemented Custom Checks, which allows users to give different weights to each of the three metrics based on how important they are for evaluating the answer. This allows for a more personalized evaluation. We built our custom model into a Flask-based REST API server deployed on Replit to streamline usage and allow people to access the full functionality of the model. We also built a highly interactive UI that lets users easily interact with the API, evaluate their answers, and submit questions.
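Two of the metrics above, the embedding-based semantic score and the Jaccard duplication check, can be sketched as follows. The Cohere embedding model name, the distance measure, and the sentence splitting are illustrative assumptions; the source only states that Cohere Embed and nearest-neighbor distance are used.

```python
# Sketch of two Eval-style metrics: nearest-neighbor semantic distance and Jaccard duplication.
# Assumes the `cohere` and `numpy` packages; the model name and scoring scale are illustrative.
import os
import numpy as np
import cohere

co = cohere.Client(os.environ["COHERE_API_KEY"])

def semantic_score(answer: str, suggested_answers: list[str]) -> float:
    """Distance from the answer to its nearest suggested answer (lower is better)."""
    embeddings = co.embed(texts=suggested_answers + [answer],
                          model="embed-english-v2.0").embeddings
    refs, ans = np.array(embeddings[:-1]), np.array(embeddings[-1])
    distances = np.linalg.norm(refs - ans, axis=1)
    return float(distances.min())

def duplication_penalty(answer: str) -> float:
    """Max Jaccard similarity between any two sentences in the answer (higher means padding)."""
    sentences = [s.strip() for s in answer.split(".") if s.strip()]
    best = 0.0
    for i in range(len(sentences)):
        for j in range(i + 1, len(sentences)):
            a, b = set(sentences[i].lower().split()), set(sentences[j].lower().split())
            if a and b:
                best = max(best, len(a & b) / len(a | b))
    return best

suggestions = ["Photosynthesis converts light energy into chemical energy stored in glucose.",
               "Plants use sunlight, water, and carbon dioxide to produce glucose and oxygen."]
print(semantic_score("Plants turn sunlight into sugar via photosynthesis.", suggestions))
print(duplication_penalty("Plants make sugar. Plants make sugar. They use sunlight."))
```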

chAI
Technologies: Cohere

Phoenix Whisper

According to research by J. Birulés-Muntané and S. Soto-Faraco (10.1371/journal.pone.0158409), watching movies with subtitles can help us learn a new language more effectively. However, the traditional way of showing subtitles on YouTube or Netflix does not give us a good way to check the meaning of new vocabulary or to understand complex slang and abbreviations. We found that displaying dual subtitles (the original subtitle of the video and the translated one) immediately improves the learning curve. In research conducted in Japan, the authors concluded that the participants who viewed an episode with dual subtitles did significantly better (http://callej.org/journal/22-3/Dizon-Thanyawatpokin2021.pdf).

After understanding both the problem and the solution, we decided to create a platform for learning new languages with dual active transcripts. When you enter a YouTube URL or upload an MP4 file in our web application, the app produces a web page where you can view the video with a transcript running next to it in two different languages. We have accomplished this goal and successfully integrated OpenAI Whisper, GPT, and Facebook's language model into the backend of the app. At first we used Streamlit for the app, but it does not provide a transcript that automatically moves with the audio timeline, and it does not give us enough control over the user interface design, so we created our own full-stack application using Bootstrap, Flask, HTML, CSS, and JavaScript.

Our business model is subscription-based and/or a one-time purchase based on usage. Our app isn't just for language learners: it can also be used by writers, singers, YouTubers, or anyone who would like to reach more people by adding different languages to their videos and audio. Due to the limitations of the free hosting plan, we could not deploy the app to the cloud for now, but we have a simple website where you can take a quick look at what we are creating (https://phoenixwhisper.onrender.com/success/BzKtI9OfEpk/en).
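A condensed sketch of the transcription-plus-translation backend described above, using the 2023-era `openai` client's Whisper and chat endpoints, is shown below. Chunking, timestamps, and Facebook's translation model are omitted, and the function shape and prompt wording are assumptions, not the team's actual implementation.

```python
# Sketch: transcribe an uploaded MP4 with Whisper, then translate the transcript with GPT.
# Assumes the pre-1.0 `openai` Python package and an OPENAI_API_KEY.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def dual_transcript(path: str, target_language: str = "Spanish") -> tuple[str, str]:
    # 1) Original-language transcript from Whisper.
    with open(path, "rb") as audio_file:
        original = openai.Audio.transcribe("whisper-1", audio_file)["text"]

    # 2) Translated transcript from a chat model.
    translated = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user",
                   "content": f"Translate this transcript into {target_language}, "
                              f"keeping sentence boundaries:\n\n{original}"}],
        temperature=0,
    ).choices[0].message["content"]

    return original, translated

source, target = dual_transcript("lecture.mp4")
print(source[:200])
print(target[:200])
```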

Phoenix
Technologies: GPT-3, Codex, Whisper