
Don Duval @evolvedcivilian622



Profile rank: No-rank

Next rank: Apprentice

Events attended: 6

Submissions made: 2

Hacker // Hustler

United States

3 years of experience

About me

Resumes and biographies? Pfft, who needs 'em? In this AI aeon, they're an OPSEC fail and should be considered a red flag of institutional co-dependency. Anyways, I started out as a NY graffiti writer and then socially engineered my way into Columbia University through a Barnard research backdoor, without a high school diploma. I've developed a range of skills, from pen testing to manufacturing and sourcing, all with the goal of training my people in the use of agents while, on the side, developing serious games built on "persuasive technology" concepts to heal my community. Basically, I'm the American Black guy from Terminator 2, but way more human and charismatic. I think the problem with my developers is that their lives have revolved around just the technology, not life. That's why I'm better suited to be an active builder of AI products than a traditional developer: I've lived a lot of life in the most diverse city in the world, New York City.

Socials

🤓 Submissions


    Tactical AI

    Our project ran a low-cost series of simultaneous agents that interact with the same environmental conditions and collaborate on the same output documents. We initially had the ambition to run 10 Upper Level Suite agents (long-term themes / short-term goals: 3 years down to 3 months) and 30 supporting agents (2-week check-ups; daily repeat functions), but we were unable to get enough domain knowledge sets for this particular project.

    So we ran with the domain knowledge we had and eventually settled on testing a "Human-Machine Teaming" model, designed to help humans trust the power of the technology without it seeming threatening: we identify the sources of our domain knowledge sets, map each agent's agenda, store that agenda in Pinecone, and synthesize it with domain knowledge specific to the role. SuperAGI also has a tool for document modification, which let the agents work on the same document from multiple perspectives.

    The end result was actionable data with very few errors. The time spent setting up the agents and mapping the project and process goals for each agent was nothing compared to the work we got back: a single run produced roughly 40 hours of labor for 4 people, and you can see the outputs on GitHub. Anyways, thank you for hosting the space. Be well.
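    For illustration, here is a minimal Python sketch of the storage-and-retrieval pattern described above: agenda items go into Pinecone, and each agent pulls role-relevant context before appending its perspective to a shared output document. This is a sketch under assumptions, not the project's actual code: it assumes the v3+ Pinecone Python client, the index name and agenda records are hypothetical, and the toy hash embedding merely stands in for a real embedding model.

    import hashlib
    from pinecone import Pinecone  # assumption: v3+ Pinecone Python client

    pc = Pinecone(api_key="YOUR_API_KEY")
    index = pc.Index("agent-agenda")  # hypothetical index, created with dimension=256

    def embed(text: str) -> list[float]:
        # Toy stand-in for a real embedding model (OpenAI, SBERT, etc.),
        # used only to keep the sketch self-contained.
        digest = hashlib.sha256(text.encode()).digest()  # 32 bytes
        return [b / 255 for b in digest] * 8             # 256-dim vector

    # Store each agent's agenda so other agents can retrieve it by similarity.
    agenda = [
        {"id": "exec-1", "text": "3-year theme: community AI literacy", "role": "upper-suite"},
        {"id": "ops-7", "text": "2-week check-up: verify source citations", "role": "support"},
    ]
    index.upsert(vectors=[
        {"id": a["id"], "values": embed(a["text"]),
         "metadata": {"role": a["role"], "text": a["text"]}}
        for a in agenda
    ])

    # Each agent pulls role-relevant agenda items before writing, then
    # appends its contribution to the shared output document.
    def agent_turn(role: str, task: str, shared_doc: list[str]) -> None:
        res = index.query(vector=embed(task), top_k=3,
                          filter={"role": role}, include_metadata=True)
        context = "\n".join(m["metadata"]["text"] for m in res["matches"])
        shared_doc.append(f"[{role}] {task}\ncontext:\n{context}")

    doc: list[str] = []
    agent_turn("support", "daily repeat: summarize yesterday's outputs", doc)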


    ELIZA EVOL INSTRUCT - Fine-Tuning

    We attempted to instill the deterministic, rule-based reasoning found in ELIZA into a more advanced, probabilistic model like an LLM. This serves a dual purpose: to introduce a controlled variable, in the form of ELIZA's deterministic logic, into "fuzzy" neural network-based systems, and to create a synthetic dataset usable for various Natural Language Processing (NLP) tasks beyond fine-tuning the LLM.

    [ https://huggingface.co/datasets/MIND-INTERFACES/ELIZA-EVOL-INSTRUCT ]
    [ https://www.kaggle.com/code/wjburns/pippa-filter/ ]

    ELIZA Implementation: We implemented the script, meticulously retaining its original transformational grammar and keyword-matching techniques.

    Synthetic Data Generation: ELIZA then generated dialogues from a seed dataset. These dialogues simulated both sides of a conversation and were structured to include the reasoning steps ELIZA took to arrive at its responses.

    Fine-tuning: The synthetic dataset was then used to fine-tune the LLM, which learned not just the structure of human-like responses but also the deterministic logic that went into crafting them.

    Validation: We subjected the fine-tuned LLM to a series of tests to ensure it had successfully integrated ELIZA's deterministic logic while retaining its ability to generate human-like text.

    Challenges: Certain ELIZA responses occurred far more frequently in the synthetic dataset, risking undue bias; we managed this through rigorous data preprocessing. Handling two very different types of language models, rule-based and neural network-based, also posed its own set of challenges.

    Significance: This project offers insight into how the strengths of classic models like ELIZA can be combined with modern neural network-based systems to produce a model that is both logically rigorous and contextually aware.
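    To make the pipeline concrete, here is a small Python sketch of ELIZA-style keyword matching driving synthetic data generation, with the deterministic reasoning step recorded alongside each response. The rule table, field names, and seed utterances are illustrative placeholders, not the project's actual ELIZA script or dataset schema; classic ELIZA also applies pronoun reflection, which this sketch omits.

    import random
    import re

    # Illustrative rule table: (keyword pattern, response templates).
    RULES = [
        (r"\bI need ([^.?!]+)", ["Why do you need {0}?", "Would getting {0} really help you?"]),
        (r"\bI am ([^.?!]+)", ["How long have you been {0}?", "Why do you say you are {0}?"]),
        (r"\bmy (\w+)", ["Tell me more about your {0}.", "Why does your {0} matter to you?"]),
    ]
    FALLBACKS = ["Please go on.", "How does that make you feel?"]

    def eliza_respond(utterance: str) -> tuple[str, str]:
        """Return (response, reasoning trace) for one user utterance."""
        for pattern, templates in RULES:
            m = re.search(pattern, utterance, re.IGNORECASE)
            if m:
                response = random.choice(templates).format(*m.groups())
                reasoning = f"matched {pattern!r}; reused captured group(s) {m.groups()}"
                return response, reasoning
        return random.choice(FALLBACKS), "no keyword matched; used a neutral fallback"

    # One synthetic fine-tuning record per seed utterance, keeping the
    # deterministic reasoning step alongside the response.
    seeds = ["I need a break from the city.", "I am worried about my project."]
    dataset = [
        {"prompt": s, "response": resp, "reasoning": why}
        for s in seeds
        for resp, why in [eliza_respond(s)]
    ]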

👌 Attended Hackathons


    Autonomous Agents Hackathon

    ๐Ÿ—๏ธ Build projects with Autonomous Agents, using cutting-edge frameworks like SuperAGI, AutoGPT, BabyAGI, Langchain, and more! ๐Ÿ† Register now and stand a chance to win up to $10,000 and a place on the SuperAGI team. ๐Ÿ 3-days to complete your solution!


    Llama 2 Hackathon with Clarifai

    ⌚ 3-day AI Hackathon 🚀 Compete to build an AI app powered by Llama 2 💡 Learn how to extend your capabilities with Clarifai! 🎓 Mentors are available to support you during your creative journey


    AutoGPT Arena Hacks

    🤖 You have 3 weeks to build. Join in at any point! 🌍 Connect with a global tech community. 🎉 Win from a $30,000 cash prize pool. 💾 The winning agent will become the AutoGPT in the 150,000-star repository! 🚀 You may continue your startup journey after the Hackathon!

๐Ÿ“ Certificates