most of these were full end-to-end apps that were deployed, had a signup flow, and a database. some were just prototypes or for fun.

🚧 blocksmith ai technical writeup (WIP) 🚧
this is the “real” ai product i mentioned in my home page intro. by real, i mean it’s more than just a gpt wrapper. it’s a webapp that lets you create 3d block models (think minecraft) from a prompt. it can also texture them using the classic pixel art look that minecraft is known for.
you can actually visit the site and use the tool for free at blocksmithai.com. i created it to address major friction in the development of games on the hytopia game platform (think minecraft-meets-roblox), and currently have a few paying customers (including a team account using it to create models for their games). i’ve also had probably close to, if not more than, 100 free users over the last several months without putting any real effort into marketing because i was focused on building. i believe marketing gets easier once you know your product is amazing, and it’s almost there.
i’ll provide a technical writeup sharing some of the details of the application, experiments i ran, dead ends i hit, etc. going into this, i had zero experience in 3d and zero experience with minecraft (aside from knowing that it existed, and that everything is blocky and pixelated).
but no other ai model, platform, or tool replicates what blocksmith can do. a big part of the reason is that the entire 3d ai space is focused on high-poly, watertight mesh generation. nothing out there can create a model made of multiple, perfect cuboidal parts, and then generate a pixelated texture for it. my model generation pipeline uses no outside dependencies like blockbench or blender to build the model. it uses mv-adapter (hosted on modal) for multi-view consistent diffusion, then a custom backprojection pipeline to paint the mesh’s original atlas and give it that nice pixelated look and style.
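the pixelated look at the end of the pipeline boils down to downsampling with block averaging (nearest-neighbor upscaling of the result then keeps the hard edges). a toy stand-in in pure python, not the actual pipeline code:

```python
# toy pixelation: average NxN blocks of an RGB image (nested lists of
# (r, g, b) tuples), producing one flat color per block -- nearest-
# neighbor upscaling of the result gives the hard-edged, minecraft-
# style look.
def pixelate(img, block):
    h, w = len(img), len(img[0])
    out = []
    for by in range(0, h, block):
        row = []
        for bx in range(0, w, block):
            # average all pixels in this block, channel by channel
            pixels = [img[y][x]
                      for y in range(by, min(by + block, h))
                      for x in range(bx, min(bx + block, w))]
            n = len(pixels)
            row.append(tuple(sum(p[c] for p in pixels) // n for c in range(3)))
        out.append(row)
    return out

# a 2x2 image collapses to a single averaged pixel
tiny = [[(255, 0, 0), (0, 0, 255)],
        [(255, 0, 0), (0, 0, 255)]]
print(pixelate(tiny, 2))  # [[(127, 0, 127)]]
```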
i have the ability to animate these models using ai as well, but haven’t integrated it into the site yet because the ux needs some changes before it makes sense to add that in.
latest texturing engine showcase
this was a webapp with a very simple interface. it was focused on helping people learn more about ai in byte-sized chunks (yes, pun intended), but there’s no reason it couldn’t have been opened up to learning anything about anything given the way it was developed.
you would sign up, go through a short onboarding covering your motivation, preferred learning style, and tutor personality, and that’s it. then you’d enter a topic you wanted to learn about, and the app would use ai to quickly research the topic and come up with high-level modules plus a description of why each was useful.
the user could click on a module, and ai would then dive in and quickly research that module using the contextual description to create 5-10 screen-sized small lessons. lessons were just text, but included citations and sources.
at the end of the module, there were questions based on the lessons to test your knowledge. ai would dynamically generate these questions, which could be simple fill in the blank, multiple choice, or short answer. ai would grade your response, and if you got it wrong, give you feedback to help you improve.
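i didn't describe the grading internals above, but for the simpler question types you don't strictly need an llm. as a hedged sketch of a possible deterministic path (`normalize` and `grade` are hypothetical helpers, not the app's actual code):

```python
# hypothetical deterministic grading path: normalized exact matching
# for fill-in-the-blank and multiple choice; short answers would be
# sent to an llm for grading instead (not shown here).
def normalize(s):
    # lowercase, trim, and collapse internal whitespace
    return " ".join(s.lower().strip().split())

def grade(question_type, answer, expected):
    if question_type in ("fill_in_blank", "multiple_choice"):
        return normalize(answer) == normalize(expected)
    raise ValueError("short answers need llm grading")

print(grade("multiple_choice", "  B ", "b"))  # True
```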
this was a prototype that i never launched publicly or posted about, but it led to the ai tutor app above. it came pre-loaded with ai research papers, and you could open them up onto an infinite canvas. you could move and re-arrange the pages however you wanted.
on any given pdf page, you could click and drag to create a highlight box, then right click to either create a sticky note for that region, or start a chat with ai about that region. it would clip the image behind the highlight box, and send it to an llm along with context about what paper you’re reading and what page you’re on.
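the clipping step is mostly coordinate math: the highlight box lives in page-relative coordinates and has to be converted to integer pixel bounds on the rendered page image before cropping. a minimal sketch (`crop_box` is my hypothetical helper, not the app's actual code):

```python
# convert a highlight box in page-relative coordinates (0..1) into
# integer pixel bounds for cropping the rendered page image, clamped
# to the page so a box dragged past the edge still yields a valid crop.
def crop_box(rect, page_w, page_h):
    x0, y0, x1, y1 = rect
    left   = max(0, int(x0 * page_w))
    top    = max(0, int(y0 * page_h))
    right  = min(page_w, int(x1 * page_w))
    bottom = min(page_h, int(y1 * page_h))
    return left, top, right, bottom

print(crop_box((0.25, 0.1, 0.75, 0.3), 800, 1000))  # (200, 100, 600, 300)
```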
each paper that was “pre-loaded” was processed using unstructured.io’s open source chunking code, with weaviate as a rag database to store the chunks. i used litellm on the server as the tool-using assistant/agent, and it could:
i planned to continue developing it, but thought the ai tutor path would make a better app. most people who read research papers may already use something like notebooklm, or not really feel the need to take notes or ask ai questions. folks who are newer to ai may not want to wade through research papers. so i made the shift to the ai tutor app.
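unstructured.io's chunking is layout-aware and much smarter than this, but as a rough illustration of what ends up stored in the rag database, here's a naive fixed-size chunker with overlap (my stand-in, not their code):

```python
# naive sliding-window chunking: each chunk shares `overlap` characters
# with the previous one so context isn't cut off mid-thought before the
# chunks get embedded and stored.
def chunk_text(text, size=200, overlap=50):
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

chunks = chunk_text("a" * 500, size=200, overlap=50)
print(len(chunks), len(chunks[0]))  # 3 200
```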
(a prototype) created a separate tool that could process a video (capture frames, align them with audio transcription from AssemblyAI, and batch process them), understand it, and then answer questions.
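the alignment step can be sketched as bucketing transcript words into frame intervals. assemblyai reports per-word start times in milliseconds; the exact field names here are assumptions, not the tool's actual code:

```python
# bucket transcript words into frame intervals: if frames are captured
# every `frame_interval_ms`, each word lands in the frame whose time
# window contains the word's start time.
def align(words, frame_interval_ms):
    buckets = {}
    for w in words:
        idx = w["start"] // frame_interval_ms  # which frame window
        buckets.setdefault(idx, []).append(w["text"])
    return buckets

words = [{"text": "hello", "start": 500},
         {"text": "world", "start": 2500}]
print(align(words, 2000))  # {0: ['hello'], 1: ['world']}
```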
(full web app) developed an app that uses ai to analyze the cohesiveness of a youtube video’s thumbnail, title, and first 30 seconds of audio. it used the youtube api to pull the transcript, title, and thumbnail, and litellm to have an llm generate scores and curate feedback.
(full web app) built a landing page analysis tool that could dynamically load, screenshot, and parse page text to give users actionable feedback to improve conversions and reduce bounce rate.
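the real tool drove a headless browser to load and screenshot the page, but the text-parsing half can be sketched with just the stdlib: strip script/style content and keep the visible text.

```python
from html.parser import HTMLParser

# minimal visible-text extractor for the parse step: skips anything
# inside <script> or <style>, collects the rest as whitespace-trimmed
# text fragments.
class TextExtractor(HTMLParser):
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self.parts, self._skip = [], 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP:
            self._skip = max(0, self._skip - 1)

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.parts.append(data.strip())

p = TextExtractor()
p.feed("<h1>Ship faster</h1><script>var x=1;</script><p>Try it free</p>")
print(" ".join(p.parts))  # Ship faster Try it free
```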
in march/april of 2020, covid hit and locked everything down. i was living in arlington, va at the time, working at raytheon doing computer security research. my job was in-person, so it was a really weird time, and my (now) wife and i wanted to get out of the city and the dc area and move to raleigh. i decided to leave my job at raytheon to build an nft art marketplace, where my cousin (who was an artist) would be the business guy and use his network in the art world to help, and i would be the engineer building the app.
i had no experience at the time with aws, production apps and workflows, blockchain; nothing. i thought it was either going to be the dumbest thing i ever did, or one of the best. turns out it was somewhere in the middle, but leaning more towards the “good” side.
i quickly learned about ethereum smart contracts and how to build apps, picked django and python as our backend, aws as our infra for the app, and infura as our infra for blockchain. i learned pretty early on that nfts were basically pointless and couldn’t for the life of me wrap my head around why people paid so much for them. they were paying for a piece of json data on a blockchain that referenced an image off-chain that could go offline at any time. and not only that, anybody could snag the jpeg you bought.
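for context on “a piece of json data that references an image off-chain”: an nft’s tokenURI typically resolves to a metadata json shaped like this (per the erc-721 metadata schema). the example values are made up:

```python
import json

# roughly what an erc-721 tokenURI resolves to -- the "art" itself is
# just a pointer to an off-chain url that could go dark at any time.
metadata = {
    "name": "example artwork #1",
    "description": "a one-of-one digital piece",
    "image": "https://example.com/artwork.jpg",  # off-chain image url
}
print(json.dumps(metadata, indent=2))
```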
so my cousin and i decided to make our app different, starting with ownership. blockchain really limits how big of a file you can store on chain; crypto punks (i think) managed it because their images were small and pixelated. but my cousin knew that digital artists’ original files were >= ~100MB so they could be printed very large if needed. so here was our pitch and what we thought differentiated us:
while this was technically fun and interesting to work on given the constraints, it was not a great business move.
cool technical things i built to enable all of this:
this was honestly a really great experience, and it was all done over the course of ~one year. but neither my cousin nor i realized just how difficult it would be to build a two-sided marketplace. he had half of the marketplace solved (artists), but we had trouble finding buyers when platforms like Foundation, OpenSea, Makersplace, Nifty Gateway, and so on were crushing it.
the biggest lesson? we could have made more money together if we had worked to get his artwork onto as many platforms as possible and maximize his sales. we could have even consulted with other artists to help them do the same and taken a small cut.
i found our old kickstarter video which is kind of funny to watch. you can tell we both have a lot of experience speaking in front of a camera 😂 despite how serious it looks, we actually laughed a lot at ourselves that day trying to get better and better takes.