🧠 How Our Model Works
Understanding the Multi-Task Learning architecture behind the AI Text Analyzer
🏗️ Multi-Task Learning Architecture
Our model uses a Multi-Task Learning (MTL) approach with shared layers to simultaneously predict three different aspects of text:
😊 Emotion Detection
6 emotions: sadness, joy, love, anger, fear, surprise
💬 Hate Speech
3 classes: offensive speech, neither, hate speech
⚠️ Violence Type
5 types: sexual, physical, emotional, harmful traditional practice, economic
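For reference, the three label sets can be written out as plain Python mappings. This is only a convenience sketch; the index order follows the lists above and is an assumption about how the model encodes its classes.

```python
# Hypothetical label maps; index order follows the lists above (an assumption).
EMOTION_LABELS = {0: "sadness", 1: "joy", 2: "love", 3: "anger", 4: "fear", 5: "surprise"}
HATE_LABELS = {0: "offensive speech", 1: "neither", 2: "hate speech"}
VIOLENCE_LABELS = {
    0: "sexual",
    1: "physical",
    2: "emotional",
    3: "harmful traditional practice",
    4: "economic",
}
```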
🔄 Architecture Flow:
1. Input Layer: Text is tokenized into sequences (max 50 tokens)
2. Embedding Layer: Shared 128-dimensional word embeddings
3. LSTM Layer: Shared 64-unit LSTM for sequence processing
4. Pooling & Dropout: Global average pooling + 50% dropout
5. Output Layers: 3 separate dense layers with softmax activation (see the sketch below)
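Here is a minimal sketch of this architecture using the Keras functional API. The layer sizes come straight from the flow above; the variable names and the vocabulary size (`VOCAB_SIZE`) are illustrative assumptions, not the exact training code.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

VOCAB_SIZE = 20_000   # assumption: the real value depends on the tokenizer's vocabulary
MAX_LEN = 50          # max 50 tokens per sequence, as described above

# Shared trunk: embedding -> LSTM -> pooling -> dropout
inputs = layers.Input(shape=(MAX_LEN,), dtype="int32", name="tokens")
x = layers.Embedding(VOCAB_SIZE, 128, name="shared_embedding")(inputs)
x = layers.LSTM(64, return_sequences=True, name="shared_lstm")(x)
x = layers.GlobalAveragePooling1D(name="pooling")(x)
x = layers.Dropout(0.5, name="dropout")(x)

# Three task-specific softmax heads on top of the shared representation
emotion_out = layers.Dense(6, activation="softmax", name="emotion")(x)
hate_out = layers.Dense(3, activation="softmax", name="hate_speech")(x)
violence_out = layers.Dense(5, activation="softmax", name="violence")(x)

model = Model(inputs=inputs, outputs=[emotion_out, hate_out, violence_out])
model.summary()
```

The key design point is that everything up to the dropout layer is shared: all three heads backpropagate through the same embedding and LSTM weights, which is what lets the tasks inform one another.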
📊 Training & Dataset
📚 Datasets Used:
- Emotions: 12,000 balanced samples, 2,000 per emotion (see the balancing sketch below)
- Hate Speech: ~19,000 samples with balanced classes
- Violence: ~20,000 samples across 5 violence types
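One way to obtain a balanced emotion set like the one described above is simple per-class downsampling. The sketch below uses pandas; the file name and the `text`/`label` column names are assumptions, not the project's actual preprocessing code.

```python
import pandas as pd

# Assumption: a CSV with "text" and "label" columns; the real dataset may differ.
df = pd.read_csv("emotions.csv")

# Downsample each of the 6 emotion classes to 2,000 rows -> 12,000 total.
balanced = (
    df.groupby("label", group_keys=False)
      .apply(lambda g: g.sample(n=2000, random_state=42))
      .reset_index(drop=True)
)
print(balanced["label"].value_counts())
```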
⚙️ Training Configuration:
- Optimizer: Adam
- Loss: Sparse Categorical Crossentropy
- Epochs: 10
- Batch Size: 4
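With the shared-trunk model sketched earlier, this configuration maps directly onto Keras compile/fit arguments: one sparse categorical crossentropy loss per output head, optimized jointly with Adam. The `X` and `y_*` arrays below are illustrative assumptions, as is the premise that every row carries a label for all three tasks.

```python
# One sparse categorical crossentropy loss per named output head.
model.compile(
    optimizer="adam",
    loss={
        "emotion": "sparse_categorical_crossentropy",
        "hate_speech": "sparse_categorical_crossentropy",
        "violence": "sparse_categorical_crossentropy",
    },
    metrics=["accuracy"],
)

# Assumption: X is the padded token matrix and y_* are integer label arrays,
# aligned so every row has a label for all three tasks.
model.fit(
    X,
    {"emotion": y_emotion, "hate_speech": y_hate, "violence": y_violence},
    epochs=10,
    batch_size=4,
)
```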
✨ Why Multi-Task Learning?
🎯 Better Generalization
Shared layers learn common linguistic patterns across tasks, improving overall performance.
⚡ Efficient Training
A single model handles all three tasks, reducing compute and training time compared with maintaining three separate models.
🔄 Knowledge Transfer
Learning from one task helps improve performance on related tasks.
📈 Comprehensive Analysis
Get insights into multiple aspects of text simultaneously.