Atlas AI is a sophisticated AI assistant platform powered by Thor 1.2 (latest) with Thor 1.0 available in stable mode. The platform provides a unified chat interface, continuous learning capabilities, and advanced features for knowledge management and conversation handling.
Thor 1.1 Expansion (historical): Thor 1.1 was expanded from 800M to 1.5B parameters (+700M parameters) with enhanced architecture for improved reasoning and longer context.
Latest Updates (Version 2.5.0 - 2026-01-29):
- ✅ Thor 1.2 as default model (server-side default; see `apps/chatbot/app.py`)
- ✅ Thor 1.2 architecture docs (see `docs/thor-1.2-brain-architecture.md`)
- ✅ R-series weight compaction tooling (see `scripts/compact_r_weights.py`)
- ✅ Repo cleanup: avoids committed build artifacts (e.g. `node_modules/`, Python build outputs)
- ✅ Version alignment across apps + SDK metadata for this release
Previous Updates (Version 1.4.5 - 2024-12-24):
- ✅ Thor 1.1 Major Enhancement - Multi-step autoregressive text generation (up to 512 tokens)
- ✅ Advanced decoding strategies - nucleus sampling, top-k filtering, temperature control
- ✅ Repetition penalty system - reduces repetitive outputs for better quality
- ✅ Enhanced prompt engineering - better context understanding and query type detection
- ✅ Expanded Model Architecture - Major capacity increase (2048 hidden size, 28 layers, 32 attention heads, 1.5B parameters)
- ✅ Extended Context Window - Further expanded to 2816 tokens for comprehensive long-form analysis
- ✅ Better knowledge integration - knowledge items added directly to generation context
- ✅ Enhanced reasoning capabilities - multi-step thinking for complex queries
- ✅ Improved response quality - post-processing for coherence and completeness
- ✅ Better conversation context - uses up to 8 previous messages for context
- ✅ Cross-chat memory system - Atlas remembers your preferences and information across all chats
- ✅ Enhanced command system - Added /help, /clear, /remember, /forget, /info, /think, /tone commands
- ✅ Improved "How to use Atlas" pop-up - Beautifully redesigned with comprehensive sections
- ✅ Fixed gems localStorage saving - Gems now properly cached for faster loading
- ✅ Major multilingual enhancements - Advanced text processing, better voice selection, optimized speech
- ✅ Enhanced common sense prioritization - Better reasoning before web searches for natural conversation
- ✅ Thor 1.1 released with enhanced model architecture and improved inference
- ✅ Poseidon voice assistant with comprehensive multi-language support
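The decoding strategies listed above (nucleus sampling, top-k filtering, temperature, repetition penalty) follow the standard text-generation recipe. A minimal NumPy sketch of one decoding step, independent of Thor's actual implementation:

```python
import numpy as np

def sample_next_token(logits, generated, temperature=0.8, top_k=50,
                      top_p=0.9, repetition_penalty=1.2, rng=None):
    """One decoding step: repetition penalty -> temperature -> top-k -> top-p."""
    rng = rng or np.random.default_rng()
    logits = logits.astype(np.float64).copy()

    # Repetition penalty: push down tokens that were already generated
    for tok in set(generated):
        logits[tok] = (logits[tok] / repetition_penalty if logits[tok] > 0
                       else logits[tok] * repetition_penalty)

    # Temperature: <1 sharpens the distribution, >1 flattens it
    logits /= temperature

    # Top-k: keep only the k highest-scoring tokens
    if 0 < top_k < len(logits):
        kth = np.sort(logits)[-top_k]
        logits[logits < kth] = -np.inf

    # Nucleus (top-p): keep the smallest set whose cumulative mass >= top_p
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    order = np.argsort(probs)[::-1]
    cutoff = np.searchsorted(np.cumsum(probs[order]), top_p) + 1
    mask = np.zeros_like(probs)
    mask[order[:cutoff]] = probs[order[:cutoff]]
    mask /= mask.sum()

    return int(rng.choice(len(probs), p=mask))
```

This is a sketch of the general technique, not the code in `apps/chatbot/app.py`; the real generator applies the same filters over the model's vocabulary at each of the up-to-512 generation steps.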
```
atlas-ai/
├── apps/                  # All application code
│   ├── chatbot/           # Main Flask app (UI + API)
│   ├── app/               # Desktop application
│   ├── cli/               # Command-line interface
│   ├── api-packages/      # API client packages
│   ├── tools/             # Development tools and scripts
│   └── assets/            # Static assets and images
├── models/                # Model directories
│   ├── thor-1.0/          # Thor 1.0 model (stable mode)
│   ├── thor-1.1/          # Thor 1.1 model (legacy)
│   ├── thor/thor-1.2/     # Thor 1.2 model (latest, default)
│   ├── thor-lite-1.1/     # Thor Lite 1.1 model (400M parameters)
│   └── r-series/          # R-series models (see `scripts/compact_r_weights.py`)
├── data/                  # Data directories
│   ├── brain/             # Knowledge store
│   ├── training_data/     # Training datasets
│   ├── conversations/     # Conversation history
│   ├── processed_images/  # Processed image cache
│   ├── logs/              # Runtime log files
│   └── metrics/           # Metrics data
├── docs/                  # Documentation
├── config/                # Configuration files
├── LICENSE
├── setup.py
└── README.md
```
- Python 3.8+ (Python 3.14 recommended)
- pip (Python package manager)
- Virtual environment support (venv)

1. Clone or navigate to the project directory:

   ```bash
   cd /Users/arulhania/Coding/atlas-ai
   ```

2. Create and activate a virtual environment:

   ```bash
   python3 -m venv .venv
   source .venv/bin/activate   # On macOS/Linux
   # .venv\Scripts\activate    # On Windows
   ```

3. Upgrade pip and install dependencies:

   ```bash
   pip install --upgrade pip setuptools wheel
   pip install -r requirements.txt
   ```

4. Verify the installation:

   ```bash
   python3 -c "import torch; import flask; print('Dependencies installed successfully')"
   ```
The platform consists of two main servers:
```bash
cd /Users/arulhania/Coding/atlas-ai/chatbot
../.venv/bin/python3 app.py
```

Access: http://localhost:5000
Features:
- Unified chat interface for Thor 1.0
- Conversation history management
- Project management
- Image processing
- Continuous learning integration
```bash
cd /Users/arulhania/Coding/atlas-ai/chatbot
../.venv/bin/python3 thor_result_setter_server.py
```

Access: http://localhost:5004
Features:
- Manual Q&A pair entry
- TrainX compilation support
- Edit, delete, and search curated responses
- Data stored in `chatbot/thor_result_setter.json`
Create a startup script (start_all_servers.sh):
```bash
#!/bin/bash
cd /Users/arulhania/Coding/atlas-ai

# Activate virtual environment
source .venv/bin/activate

# Start Chatbot Server
cd chatbot
python3 app.py > ../logs/chatbot.log 2>&1 &
CHATBOT_PID=$!
echo "Chatbot started (PID: $CHATBOT_PID)"

# Start Thor Result Setter
python3 thor_result_setter_server.py > ../logs/thor_result_setter.log 2>&1 &
THOR_PID=$!
echo "Thor Result Setter started (PID: $THOR_PID)"

echo ""
echo "All servers started!"
echo "Chatbot: http://localhost:5000"
echo "Thor Result Setter: http://localhost:5004"
echo ""
echo "To stop all servers: kill $CHATBOT_PID $THOR_PID"
```

Make it executable:

```bash
chmod +x start_all_servers.sh
```

To talk to Thor 1.2 (or Thor 1.0 / Thor 1.1) right away:
1. Start the chatbot on port 5002 (avoids conflict with port 5000, e.g. AirPlay on macOS):

   ```bash
   ./start_chatbot_thor11.sh
   ```

   Then open http://localhost:5002 in your browser for the full chat UI.

2. Or use the CLI (requires the chatbot to be running):

   ```bash
   # One-shot message
   python ask_model.py "What is 2+2?"

   # Interactive mode
   python ask_model.py

   # Use Thor 1.0 if needed
   python ask_model.py --model thor-1.0 "Hello"
   ```
- Thor 1.2: Latest model (default) — see `docs/thor-1.2-brain-architecture.md`
- Thor 1.1: Legacy model (still supported)
- Thor 1.0: Stable model with proven reliability (used in stable mode)
- Gems: Custom sub-models that you can create, customize, and use for specialized tasks
- Try Before Create: Test gem configurations without saving
- Custom Instructions: Define how each gem should behave
- Tone Control: Set tone (Normal, Friendly, Calm, Formal, Critical) for consistent style with enhanced impact on responses
- Source Integration: Add links and files as knowledge sources that are prioritized over web search
- One-Line Management: View and manage gems in the sidebar with metallic-colored gem names (based on tone) and edit/delete actions on the same line
- Auto-Trainer: Automatically trains on conversations every 30 minutes
- Brain System: Organized knowledge storage by letter/keyword
- Research Engine: Web search integration for unknown topics
- Learning Tracker: Monitors and records learning progress
A domain-specific language for defining Q&A pairs with advanced features:
Q: What is Python?
A: Python is a high-level programming language known for its simplicity and readability.
Q: {"What is Python?" / "Tell me about Python" / "Python info"}?
A: Python is a high-level programming language known for its simplicity and readability.
This generates three Q&A pairs:
- Q: "What is Python?" → A: [answer]
- Q: "Tell me about Python" → A: [answer]
- Q: "Python info" → A: [answer]
The first alias is treated as canonical for internal reference.
Q (Image): Thor
A: https://upload.wikimedia.org/wikipedia/en/3/3c/Chris_Hemsworth_as_Thor.jpg
- The question is stored as `Create an image of Thor` for clarity.
- The pair is tagged as `type: image`, and the Result Setter renders an iframe + still preview from the URL.
- Aliases work too: `Q (Image): {"puppy" / "dog"}` will generate image pairs for each alias.
- Authoritative Answers: Pre-set responses for specific questions
- Fuzzy Matching: Handles variations in question phrasing
- TrainX Integration: Bulk import via TrainX compilation
- Manual Management: Web interface for editing Q&A pairs
- Chat History: All conversations saved in `chatbot/chats/`
- Conversation Archive: Backup copies in `apps/chatbot/conversations/`
- Project Organization: Group related chats into projects
- History Tracking: Comprehensive history system
Gems allow you to create specialized AI assistants tailored to specific tasks or domains:
- Create Gems: Define custom instructions, tone, and knowledge sources
- Try Before Save: Test gem configurations without committing
- Source Integration:
- Add links (up to 5 URLs) → automatically fetched and parsed for content
- Add files (up to 10 text files) → uploaded content used as context
- Gem sources are always prioritized over web search results
- Tone Control: Choose from Normal, Friendly, Calm, Formal, or Critical tones
- Model Dropdown: Select gems from the model selector alongside Thor 1.0
- Sidebar Management: View all gems with name, tone badge, and quick edit/delete actions
Example Use Cases:
- Study Buddy: Explains concepts step-by-step, then quizzes you
- Product Manager: Turns ideas into PRDs, risks, and roadmaps
- Design Critic: Provides direct UI/UX critique with actionable fixes
- Custom Domain Expert: Add specialized knowledge via sources for domain-specific assistance
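Gems like the ones above can also be created programmatically through `POST /api/gems`. The request schema isn't documented here, so the field names below (`name`, `instructions`, `tone`, `sources`) are illustrative assumptions — check `apps/chatbot/app.py` for the real one:

```python
import json
from urllib import request

def build_gem_payload(name, instructions, tone="Normal", sources=None):
    """Assemble a gem definition. Field names are assumptions, not the
    documented schema."""
    allowed_tones = {"Normal", "Friendly", "Calm", "Formal", "Critical"}
    if tone not in allowed_tones:
        raise ValueError(f"tone must be one of {sorted(allowed_tones)}")
    sources = sources or []
    if len(sources) > 5:
        raise ValueError("gems accept at most 5 source URLs")
    return {"name": name, "instructions": instructions,
            "tone": tone, "sources": sources}

def create_gem(payload, base_url="http://localhost:5000"):
    """POST the gem to a running chatbot server."""
    req = request.Request(
        f"{base_url}/api/gems",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    gem = build_gem_payload(
        "Study Buddy",
        "Explain concepts step-by-step, then quiz the user.",
        tone="Friendly",
    )
    print(create_gem(gem))  # requires the chatbot running on port 5000
```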
For faster access to common features, use these command shortcuts:
- `/office` - Opens the Office Suite interface
- `/arcade` - Opens the Game Suite interface
- `/image {description}` - Generates an image based on description (e.g., `/image beautiful sunset`)
These commands work alongside natural language requests (e.g., "Load Office Suite" still works).
Poseidon is a comprehensive voice assistant feature that provides live, conversational interactions similar to Gemini Live:
- Live Voice Interaction: Real-time speech recognition and text-to-speech responses
- Full-Screen Interface: Immersive overlay with visual feedback
- Animated waveform visualizer that responds to listening/speaking states
- Status indicators (Ready/Listening/Speaking/Processing)
- Live transcript display for both user input and assistant responses
- Voice Customization: Configure in Settings modal
- Accents: US English, UK English, Australian English, Indian English
- Gender: Male or Female voices
- Settings persist across sessions
- Session Controls:
- Hold/Pause: Temporarily pause listening and speaking
- End: Close Poseidon and return to text chat
- Auto-Continuation: Automatically restarts listening after each response for seamless conversation flow
- Full Integration: Works with all models (Thor 1.0 and Gems), tones, Think Deeper mode, and all existing features
Access: Click the golden trident icon button (round blue button) in the input area (left side, before the attach button)
Browser Support: Requires Chrome or Edge
Features:
- Continuous Recognition: Automatically continues listening after each response
- Permission Handling: Explicitly requests microphone permission before starting
- Error Recovery: Intelligent error handling with automatic retry for common issues
- Large Text Support: Backend automatically refines large text chunks for better understanding
- Secure Context: Automatically checks for HTTPS/localhost and provides helpful error messages
- Fast Response: Optimized for speed with reduced delays (300ms restart, 50ms auto-restart)
- Backend Validation: Comprehensive checks for browser support, secure context, and DOM elements
Command Shortcuts:
- `/office` - Quickly open Office Suite
- `/arcade` - Quickly open Game Suite
- `/image {description}` - Generate an image (e.g., `/image sunset over mountains`)
Settings Options:
- Stable Mode: Disables latest features (Poseidon, Think Deeper) and automatically applies simpler UI for maximum stability. Uses Thor 1.0 model.
- Simpler UI Mode: Minimalist interface hiding non-essential buttons (Think Deeper, History, Customize, Help, Upgrade, Model Selector) for a cleaner experience
Troubleshooting Poseidon:
- If you see "Service Unavailable", check browser microphone permissions
- Circuit breaker automatically stops infinite error loops
- Browser-specific guidance is provided in error messages
- Requires HTTPS or localhost for security
- Supported browsers: Chrome, Edge
- Think Deeper Mode: Enhanced reasoning for complex queries
- Image Processing: Upload and analyze images with support for style/angle/color tweaks
- Code Mode: Specialized code assistance
- Semantic Relevance: Intelligent knowledge filtering
- Response Cleaning: Automatic response validation and cleaning
- Enhanced Tone Impact: Tones now have significantly stronger, more consistent impact on response style and content
Type exactly "I am in C5." in the chat interface to trigger a celebratory animation! 🎉
- `GET /` - Main chat interface
- `POST /api/chat` - Send chat message
- `GET /api/chats` - List all chats
- `GET /api/chats/<chat_id>` - Get specific chat
- `DELETE /api/chats/<chat_id>` - Delete chat
- `GET /api/projects` - List projects
- `POST /api/projects` - Create project
- `GET /api/history` - Get history
- `GET /api/model/status` - Check model status
- `GET /api/gems` - List all gems
- `POST /api/gems` - Create a new gem
- `GET /api/gems/<gem_id>` - Get specific gem
- `PUT /api/gems/<gem_id>` - Update gem
- `DELETE /api/gems/<gem_id>` - Delete gem
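As a quick sanity check of the chat endpoint, something like the following should work against a local server. The request field name `message` is an assumption about the schema; adjust to match `apps/chatbot/app.py` if it differs:

```python
import json
from urllib import error, request

def send_chat(message, base_url="http://localhost:5000"):
    """POST a message to /api/chat and return the parsed JSON reply."""
    body = json.dumps({"message": message}).encode()
    req = request.Request(f"{base_url}/api/chat", data=body,
                          headers={"Content-Type": "application/json"})
    try:
        with request.urlopen(req, timeout=60) as resp:
            return json.loads(resp.read())
    except error.URLError as exc:
        raise RuntimeError(f"Is the chatbot running at {base_url}?") from exc

if __name__ == "__main__":
    print(send_chat("What is 2+2?"))
```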
- `GET /` - Result setter interface
- `GET /api/qa/list` - List all Q&A pairs
- `POST /api/qa/add` - Add new Q&A pair
- `POST /api/qa/update` - Update existing Q&A pair
- `POST /api/qa/delete` - Delete Q&A pair
- `POST /api/qa/search` - Search Q&A pairs
- `POST /api/trainx/compile` - Compile TrainX code
Thor 1.0: `models/thor-1.0/config/config.yaml`
Configuration is managed in `apps/chatbot/app.py`:
- Model directories
- Chat storage paths
- Result setter file paths
- Port settings
Create a `.env` file in the root directory (optional):

```
FLASK_ENV=development
SECRET_KEY=your-secret-key-here
MODEL_PATH=path/to/models
```

The Brain System organizes knowledge by letters and keywords:
```
data/brain/
├── A/
│   └── keywords.json
├── B/
│   └── keywords.json
...
└── Z/
    └── keywords.json
```
Each `keywords.json` contains:
- Letter identifier
- Keywords list
- Knowledge entries with content, source, and timestamps
- Last updated timestamp
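The exact JSON schema isn't shown here, so this reader/writer for one letter's file uses assumed field names mirroring the list above (`letter`, `keywords`, `entries`, `last_updated`) — a sketch, not the real Brain System code:

```python
import json
import time
from pathlib import Path

def add_knowledge(brain_root, keyword, content, source="manual"):
    """Append a knowledge entry to the appropriate letter's keywords.json."""
    letter = keyword[0].upper()
    path = Path(brain_root) / letter / "keywords.json"
    path.parent.mkdir(parents=True, exist_ok=True)

    if path.exists():
        data = json.loads(path.read_text())
    else:
        data = {"letter": letter, "keywords": [], "entries": [],
                "last_updated": None}

    if keyword not in data["keywords"]:
        data["keywords"].append(keyword)
    data["entries"].append({"keyword": keyword, "content": content,
                            "source": source, "timestamp": time.time()})
    data["last_updated"] = time.time()
    path.write_text(json.dumps(data, indent=2))
    return data
```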
```
Q: Your question here?
A: Your answer here.
```

```
List myList = [
    "key1": "value1",
    "key2": "value2"
]
```

```
Q: {"Canonical Question" / "Alias 1" / "Alias 2"}?
A: Single answer for all variations.
```

```
# This is a comment
Q: Question?
A: Answer.
```
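A minimal parser for the Q/A, alias, and comment forms above might look like this. It is a sketch only — the real TrainX compiler lives behind `POST /api/trainx/compile`, and the `List` syntax is not covered here:

```python
import re

def parse_trainx(source):
    """Expand TrainX text into (question, answer) pairs.

    Handles plain Q:/A: lines, {"a" / "b"} alias blocks, and # comments.
    """
    pairs, question_aliases = [], None
    for raw in source.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        if line.startswith("Q:"):
            body = line[2:].strip()
            alias_block = re.fullmatch(r'\{(.+)\}\??', body)
            if alias_block:
                # {"A" / "B"} -> one pair per alias; first alias is canonical
                question_aliases = re.findall(r'"([^"]+)"', alias_block.group(1))
            else:
                question_aliases = [body]
        elif line.startswith("A:") and question_aliases:
            answer = line[2:].strip()
            pairs.extend((q, answer) for q in question_aliases)
            question_aliases = None
    return pairs
```

Feeding it the alias example above yields one (question, answer) pair per alias, with the first alias treated as canonical.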
This project is licensed under the Apache 2.0 License (see the LICENSE file in the repo root). In short: internal evaluation and research use are allowed; redistribution, commercial hosting, or model-training derivatives outside this project are prohibited without written permission. See the full LICENSE text for all terms, conditions, limitations, and warranty disclaimers.
1. Check if the port is in use:

   ```bash
   lsof -i :5000   # For chatbot
   lsof -i :5004   # For Thor result setter
   ```

2. Kill existing processes:

   ```bash
   pkill -f "app.py"
   pkill -f "result_setter_server"
   ```

3. Check the virtual environment:

   ```bash
   source .venv/bin/activate
   which python3   # Should point to .venv/bin/python3
   ```

4. Verify sys.path setup:
   - Check that `thor-1.0` is added before `odin-0.5` in sys.path
   - Verify all required modules exist
1. Reinstall dependencies:

   ```bash
   pip install -r requirements.txt --force-reinstall
   ```

2. Check that model files exist:

   ```bash
   ls -la models/thor-1.0/models/final_model.pt
   ls -la models/thor-1.0/models/tokenizer.json
   ```

3. Verify config paths:
   - Check that `config/config.yaml` exists
   - Verify paths in `app.py` are correct
1. Check directory permissions:

   ```bash
   ls -la chatbot/chats/
   ls -la apps/chatbot/conversations/
   ```

2. Verify directory creation:
   - Directories should be created automatically
   - Check logs for permission errors
- Chatbot: `/tmp/chatbot.log` or `logs/chatbot.log`
- Thor Result Setter: `/tmp/thor_result_setter.log` or `logs/thor_result_setter.log`
```bash
# Real-time log viewing
tail -f /tmp/chatbot.log

# Last 50 lines
tail -50 /tmp/chatbot.log

# Search for errors
grep -i error /tmp/chatbot.log
```
1. Development Server Warning:
   - Flask's development server is NOT suitable for production
   - Use a production WSGI server (Gunicorn, uWSGI) for deployment

2. Secret Key:
   - Change `app.secret_key` in production
   - Use environment variables for sensitive data

3. CORS:
   - Currently allows all origins (`CORS(app)`)
   - Restrict in production: `CORS(app, origins=["https://yourdomain.com"])`
1. Use a production WSGI server:

   ```bash
   pip install gunicorn
   gunicorn -w 4 -b 0.0.0.0:5000 app:app
   ```

2. Set environment variables:

   ```bash
   export FLASK_ENV=production
   export SECRET_KEY=your-production-secret-key
   ```

3. Use a process manager:
   - systemd (Linux)
   - supervisor
   - PM2 (Node.js process manager)
```dockerfile
FROM python:3.14-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["gunicorn", "-w", "4", "-b", "0.0.0.0:5000", "apps.chatbot.app:app"]
```

- Follow the existing code structure
- Add comments for complex logic
- Update documentation for new features
- Test changes thoroughly before committing
The Licensor grants the Licensee a limited, non-exclusive, and non-transferable right to Use the compiled, object-code version of this software solely for its intended purpose. The Licensee is strictly prohibited from accessing, viewing, copying, distributing, or modifying the Source Code. Furthermore, the Licensee shall not reverse engineer, decompile, or disassemble the software, nor shall they distribute, sublicense, or publicly display the software or any derivative works. All rights, title, and intellectual property ownership remain solely with the Licensor.
For issues, questions, or contributions:
- Check the troubleshooting section
- Review logs for error messages
- Verify all dependencies are installed
- Ensure virtual environment is activated
- v1.0.2 — Major algorithm improvements: Gems now intelligently synthesize sources instead of reading verbatim, Thor search results are properly synthesized from multiple sources, improved intent detection to avoid treating commands as search queries, and removed hardcoded context labels.
- v1.0.1 — Refinement pass for responses: removed debug-style footers like `_Sources:_ …` and `_Context-aware: follow-up detected_`, and made small-talk/goodbye handling less likely to trigger web search.
- v1.0.0 — Initial release with Thor 1.0
- TrainX alias syntax support
- Continuous learning system
- Result setter integration
Last Updated: January 29, 2026
Current Version: 2.5.0
Maintained by: Atlas AI Development Team