Sub-ticket (Tanisha): CNN Implementation

Week 1: Planning and Data Preparation

Day 0: Sprint Planning

  • Define the project scope and objectives.
  • Set up a Scrum board or project plan (like this one).
  • Hold a kick-off meeting to clarify goals and expectations.

Day 1: Data Collection

  • Gather a dataset of facial images labeled with emotions (Kaggle’s FER-2013 dataset); see the loading sketch below.

  • Clean the dataset by removing irrelevant or low-quality images.

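A minimal loading sketch for this step, assuming the Kaggle FER-2013 CSV (columns: emotion, pixels, Usage) saved locally; the file path is a placeholder:

```python
# Sketch: read the FER-2013 CSV and convert the space-separated pixel strings
# into 48x48 grayscale arrays. "fer2013.csv" is an assumed local path.
import numpy as np
import pandas as pd

df = pd.read_csv("fer2013.csv")

pixels = np.stack([
    np.array([int(v) for v in row.split()], dtype=np.uint8).reshape(48, 48)
    for row in df["pixels"]
])
labels = df["emotion"].to_numpy()

print(pixels.shape, labels.shape)  # roughly (35887, 48, 48) and (35887,)
```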

Day 2: Data Preprocessing

  • Resize images to a consistent format (e.g., 224x224 pixels).

    Edit: the dataset is pre-cleaned, so this step is not needed.

  • Normalize pixel values to a common scale (e.g., [0, 1]).

  • Split the dataset into training, validation, and test sets.
  • Augment the training dataset (e.g., rotate, flip, zoom) to increase diversity, and ensure the class distribution stays equal across splits (see the sketch below).

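A sketch of the preprocessing steps above in TensorFlow/Keras: normalization to [0, 1], a stratified split so the class distribution stays equal across subsets, and rotate/flip/zoom augmentation on the training data only. The `pixels` and `labels` names come from the loading sketch and are assumptions:

```python
# Sketch: normalize, split with stratification, and augment the training set.
# `pixels` and `labels` are the arrays from the Day 1 loading sketch.
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split

x = pixels.astype("float32")[..., np.newaxis] / 255.0  # (N, 48, 48, 1), values in [0, 1]
y = labels

# Stratified splits keep the emotion-class distribution equal across subsets.
x_train, x_tmp, y_train, y_tmp = train_test_split(
    x, y, test_size=0.2, stratify=y, random_state=42)
x_val, x_test, y_val, y_test = train_test_split(
    x_tmp, y_tmp, test_size=0.5, stratify=y_tmp, random_state=42)

# Augmentation (rotate, flip, zoom) applied to training batches only.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomZoom(0.1),
])

train_ds = (
    tf.data.Dataset.from_tensor_slices((x_train, y_train))
    .shuffle(len(x_train))
    .batch(64)
    .map(lambda img, lbl: (augment(img, training=True), lbl))
    .prefetch(tf.data.AUTOTUNE)
)
val_ds = tf.data.Dataset.from_tensor_slices((x_val, y_val)).batch(64)
```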

Day 3: Model Architecture Design

  • Decide on a CNN architecture (e.g., VGG16, ResNet, or custom architecture).
  • Define the layers, filters, and activation functions for the model.

    Model weights and architecture defined.
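
A sketch of what a custom architecture could look like in Keras, assuming 48x48 grayscale inputs and the seven FER-2013 emotion classes; the layer sizes are illustrative starting points, not the final design:

```python
# Sketch: a small custom CNN for 48x48 grayscale faces and 7 emotion classes.
# Filter counts and dense size are illustrative, not tuned values.
import tensorflow as tf

def build_model(num_classes: int = 7) -> tf.keras.Model:
    return tf.keras.Sequential([
        tf.keras.Input(shape=(48, 48, 1)),
        tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(128, 3, padding="same", activation="relu", name="last_conv"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])

model = build_model()
model.summary()
```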

Day 4: Model Implementation

  • Start coding the CNN model in a deep learning framework like TensorFlow or PyTorch.
  • Implement forward and backward passes, loss function, and optimization algorithm (e.g., Adam).
  • Ensure the model can handle the input image size and number of classes.
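
For illustration, a single explicit training step in TensorFlow showing the forward pass, loss computation, backward pass, and Adam update; `model` is the Day 3 sketch, and in practice `model.compile()`/`model.fit()` wraps the same logic:

```python
# Sketch: one explicit training step with forward pass, loss, backward pass,
# and an Adam update. `model` is the Day 3 sketch.
import tensorflow as tf

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)

@tf.function
def train_step(images, labels):
    with tf.GradientTape() as tape:
        predictions = model(images, training=True)   # forward pass
        loss = loss_fn(labels, predictions)           # loss computation
    grads = tape.gradient(loss, model.trainable_variables)            # backward pass
    optimizer.apply_gradients(zip(grads, model.trainable_variables))  # Adam update
    return loss
```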

Day 5: Training

  • Train the model on the preprocessed dataset.
  • Monitor training metrics such as loss and accuracy.
  • Use early stopping if validation performance stalls.
  • Document training progress on your Scrum board.
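
A training sketch with early stopping on validation loss, assuming the `model`, `train_ds`, and `val_ds` objects from the earlier sketches:

```python
# Sketch: train with early stopping when validation loss stops improving.
import tensorflow as tf

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-3),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True
)

history = model.fit(train_ds, validation_data=val_ds, epochs=50, callbacks=[early_stop])
```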

Day 6: Model Evaluation

  • Evaluate the model’s performance using the test set.
  • Calculate metrics like accuracy, precision, recall, and F1-score.
  • Identify potential sources of bias or error and document them.
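
A minimal evaluation sketch using scikit-learn's classification report for per-class precision, recall, and F1, assuming `x_test`/`y_test` from the preprocessing sketch:

```python
# Sketch: per-class precision, recall, and F1 on the held-out test set.
import numpy as np
from sklearn.metrics import classification_report

y_pred = np.argmax(model.predict(x_test, batch_size=64), axis=1)
print(classification_report(y_test, y_pred, digits=3))
```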

Week 2: Model Optimization and Deployment

Day 7: Hyperparameter Tuning

  • Fine-tune hyperparameters such as the learning rate, batch size, or dropout rate.
  • Experiment with different CNN architectures to improve performance.
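
One simple way to structure the tuning, sketched as a small grid sweep over learning rate and batch size; a dedicated tool such as KerasTuner or Optuna would do this more systematically:

```python
# Sketch: a minimal grid sweep over learning rate and batch size, keeping the
# configuration with the best validation accuracy. Values are illustrative.
import tensorflow as tf

results = {}
for lr in (1e-3, 3e-4):
    for batch_size in (32, 64):
        m = build_model()
        m.compile(optimizer=tf.keras.optimizers.Adam(lr),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
        hist = m.fit(x_train, y_train, validation_data=(x_val, y_val),
                     batch_size=batch_size, epochs=10, verbose=0)
        results[(lr, batch_size)] = max(hist.history["val_accuracy"])

best = max(results, key=results.get)
print("best (lr, batch_size):", best, "val_accuracy:", round(results[best], 3))
```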

Day 8: Error Analysis

  • Analyze model errors on the test set to understand common misclassifications.
  • Adjust the model or dataset to address error patterns.
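
A confusion matrix is a quick way to surface the common misclassifications; the label order below is the standard FER-2013 mapping, and `y_test`/`y_pred` come from the evaluation sketch:

```python
# Sketch: confusion matrix over the FER-2013 emotion labels (assumed standard
# order 0=angry .. 6=neutral) to show which classes get mixed up.
import pandas as pd
from sklearn.metrics import confusion_matrix

emotions = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]
cm = confusion_matrix(y_test, y_pred)
print(pd.DataFrame(cm, index=emotions, columns=emotions))
```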

Day 9: Model Interpretability

  • Implement visualization techniques (e.g., Grad-CAM) to understand what the model focuses on when making predictions.
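
A minimal Grad-CAM sketch, assuming the Day 3 model with its final convolutional layer named "last_conv"; for simplicity it works on the softmax output rather than the pre-softmax score:

```python
# Sketch: Grad-CAM for one image, using the conv layer named "last_conv" in the
# Day 3 model.
import numpy as np
import tensorflow as tf

def grad_cam(model, image, conv_layer_name="last_conv"):
    """Return a normalized heatmap over the final conv feature map for one image."""
    grad_model = tf.keras.Model(
        model.inputs, [model.get_layer(conv_layer_name).output, model.output]
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        top_class = int(tf.argmax(preds[0]))
        class_score = preds[:, top_class]
    grads = tape.gradient(class_score, conv_out)   # d(score) / d(feature map)
    weights = tf.reduce_mean(grads, axis=(1, 2))   # channel weights (global average of grads)
    heatmap = tf.reduce_sum(conv_out[0] * weights[0], axis=-1)
    heatmap = tf.nn.relu(heatmap)
    heatmap /= tf.reduce_max(heatmap) + 1e-8
    return heatmap.numpy()

heatmap = grad_cam(model, x_test[0])  # upsample and overlay on the face to inspect
```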

Day 10: Model Optimization

  • Implement model optimization techniques (e.g., weight quantization or model pruning) for deployment on resource-constrained devices.
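
A sketch of post-training dynamic-range quantization with the TensorFlow Lite converter, one of the optimization options mentioned above; the output filename is a placeholder:

```python
# Sketch: post-training dynamic-range quantization with the TF Lite converter.
# The output filename is a placeholder.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("emotion_cnn_quantized.tflite", "wb") as f:
    f.write(tflite_model)
```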

Day 11: Deployment Setup

  • Set up the infrastructure and environment for model deployment (e.g., cloud server or edge device).
  • Create an API or interface for the model.
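
A sketch of a minimal prediction API using FastAPI; the route name, saved-model path, and label list are assumptions for illustration:

```python
# Sketch: a minimal FastAPI prediction endpoint. The model path, route, and
# label list are assumptions for illustration.
import io

import numpy as np
import tensorflow as tf
from fastapi import FastAPI, File, UploadFile
from PIL import Image

app = FastAPI()
model = tf.keras.models.load_model("emotion_cnn.keras")  # assumed saved-model path
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

@app.post("/predict")
async def predict(file: UploadFile = File(...)):
    # Read the upload, convert to 48x48 grayscale, scale to [0, 1].
    img = Image.open(io.BytesIO(await file.read())).convert("L").resize((48, 48))
    x = np.asarray(img, dtype="float32")[np.newaxis, ..., np.newaxis] / 255.0
    probs = model.predict(x)[0]
    return {"emotion": EMOTIONS[int(np.argmax(probs))],
            "confidence": float(np.max(probs))}
```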

Day 12: Testing and Validation

  • Test the deployed model to ensure it functions correctly in the production environment.
  • Validate the model’s real-world performance.
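
A quick smoke test against the deployed endpoint, assuming the FastAPI sketch above is running locally; the URL and sample image path are placeholders:

```python
# Sketch: smoke test of the deployed /predict endpoint. URL and sample image
# path are placeholders.
import requests

with open("sample_face.jpg", "rb") as f:
    resp = requests.post("http://localhost:8000/predict", files={"file": f})

assert resp.status_code == 200, resp.text
print(resp.json())  # e.g. {"emotion": "happy", "confidence": 0.87}
```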

Day 13: Documentation and Training

  • Document the model architecture, training process, and deployment steps.
  • Train end-users or colleagues on using the model effectively.

Day 14: Final Review and Retrospective

  • Review the project’s achievements and ensure all requirements are met.
  • Conduct a retrospective meeting to discuss what went well and what could be improved in the development process.
  • Plan for future iterations or improvements.