Week 2

General Experience

This week has been better than the last week. I have continued working on the empirical experiment to prove the math I have worked out. I have finished beginner tutorials for PyTorch, read through causal graphical models, and learned about causal discovery algorithms. Moreover, I have reviewed neural networks and the mathematics behind them. However, I still believe that I need a lot of practice, e.g. writing a bunch of experiments, to get used to these tools. I learned a lot from designing my first empirical experiment. In addition, I have reviewed my Python and NumPy coding skills. Overall, it has been a very enriching week, and I am excited for next week!

Readings (Literature Review and online blog posts and tutorials)

I was also still reading the Invariant Models for Causal Transfer Learning paper, mainly focusing on Proposition 2 and how to reapply it under different conditions. I have also used causal inference blog posts and reviewed my probability theory knowledge regarding expectations and independence. Overall, there was not much reading this week, mostly mechanical work. In addition, I have gone through the paper Causal inference in statistics: An overview.

Experiments & Algorithms

Math and conceptual understanding

I have been tasked with a problem this week. Basically, for a structural causal model with variables X, Y, and Z, we want to find the conditions under which the anticausal model, that is, a model depending on both X and Z: (X, Z) -> Y, will perform better than the causal model, a model depending only on the causal random variable X: (X) -> Y. Here is a snippet of the equations that we have (equation image omitted; a sketch of the setup is given below).

After solving this under controlled conditions, we found that we need to keep the influence of the exogenous factors in the transfer setting (eta transfer) smaller than the influence of the exogenous factors in the training setting plus the internal biases within our model. Most of the mathematical foundations were based on this paper.
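
Since the equations themselves were only shown as an image, here is a minimal sketch of the kind of linear SCM and condition involved, under my own simplifying assumptions (scalar weights a and b, Gaussian noise, and eta scaling the exogenous influence on Z); the exact notation in the derivation may differ:

```latex
% Illustrative linear SCM: X causes Y, and Z is a child of Y.
% The scale eta controls how strongly the exogenous factor N_Z influences Z.
\begin{align*}
  X &= N_X, & N_X &\sim \mathcal{N}(0, 1), \\
  Y &= a\,X + N_Y, & N_Y &\sim \mathcal{N}(0, 1), \\
  Z &= b\,Y + \eta\, N_Z, & N_Z &\sim \mathcal{N}(0, 1).
\end{align*}
% The causal model predicts Y from X alone; the anticausal model predicts Y
% from (X, Z). Informally, the condition above has the form
%   \eta_{\mathrm{transfer}} < \eta_{\mathrm{source}} + (\text{model bias terms}),
% i.e. the anticausal model stays preferable only while the exogenous
% influence on Z at transfer time remains small enough.
```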

Experiment using Jupyter notebooks

For this experiment, I followed these steps (a code sketch follows the list):

  1. I generated a training dataset with randomly generated distributions for X, Y, and Z, along with random weights.
  2. Then, I generated a test dataset with its own random weights (so the source and test data come from different distributions).
  3. I created two arrays: the first holds 200 values of eta source ranging from 0.01 to 1.99, and the second does the same for eta transfer.
  4. I calculated the expected loss using the mean squared error (MSE) for both the causal and the anticausal model while varying the etas.
  5. I plotted the MSE values of both the causal and anticausal models against the etas.
  6. Then, I observed how the error behaves as the etas increase and checked that the behavior matches the math.
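
A minimal sketch of this experiment in Python/NumPy, assuming the linear SCM sketched above; the parameter names (a, b, eta_source) and the closed-form least-squares predictors are my own simplification, not the exact notebook code, and I fix eta source while sweeping eta transfer:

```python
# Minimal sketch of the eta-sweep experiment (illustrative, not the exact notebook code).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
a, b = 1.5, 0.8          # structural weights (fixed here for reproducibility)
n = 10_000               # samples per dataset

def generate(eta, n=n):
    """Sample (X, Y, Z) from the linear SCM X -> Y -> Z with noise scale eta on Z."""
    X = rng.normal(size=n)
    Y = a * X + rng.normal(size=n)
    Z = b * Y + eta * rng.normal(size=n)
    return X, Y, Z

def fit_least_squares(features, y):
    """Ordinary least squares fit; returns the weight vector."""
    return np.linalg.lstsq(features, y, rcond=None)[0]

# Fit both models on source data with a fixed eta_source.
eta_source = 0.5
Xs, Ys, Zs = generate(eta_source)
w_causal = fit_least_squares(np.column_stack([Xs]), Ys)        # Y ~ X
w_anti = fit_least_squares(np.column_stack([Xs, Zs]), Ys)      # Y ~ (X, Z)

# Sweep eta_transfer and record the MSE of each model on transfer data.
etas = np.linspace(0.01, 1.99, 200)
mse_causal, mse_anti = [], []
for eta_t in etas:
    Xt, Yt, Zt = generate(eta_t)
    mse_causal.append(np.mean((Yt - np.column_stack([Xt]) @ w_causal) ** 2))
    mse_anti.append(np.mean((Yt - np.column_stack([Xt, Zt]) @ w_anti) ** 2))

plt.plot(etas, mse_causal, label="causal model (X) -> Y", color="green")
plt.plot(etas, mse_anti, label="anticausal model (X, Z) -> Y", color="purple")
plt.xlabel("eta transfer")
plt.ylabel("MSE")
plt.legend()
plt.show()
```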

Results and Findings

Experiment results

After following the testing algorithm and plotting the curves, these are some of the plots I got for different random parameters, with the causal and anticausal errors shown as green and purple lines (plot images omitted).

We can see that as we increase eta transfer, the anticausal error eventually becomes higher than the causal error, making the anticausal model worse and riskier than the causal model. However, we are interested in the regime where the anticausal model performs better than the causal model.

Research Findings

  • Through reading and discussing with my mentor, I have found that under certain conditions the anticausal model can produce better results than the causal model; however, we need to devise a method that can tell a system which model to follow. This can be done through an analysis of errors and an investigation of how well each model performs (a small sketch of such a comparison follows this list).
  • Causal relationships assume independence between random variables; anticausal relationships do not. Therefore, we have exogenous factors that influence our error. The problem is that we do not observe these exogenous factors, and in some datasets we are not told whether a random variable is causal or anticausal with respect to the target.
  • We are interested in answering causal questions, not associational questions, because causal assumptions help us identify relationships that remain invariant when external conditions change.
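
A minimal sketch of the kind of error-based model selection mentioned above, assuming we can hold out a small labeled sample from the target domain; the function names and the simple decision rule are illustrative assumptions, not a method taken from the paper:

```python
# Illustrative model selection: pick whichever linear model has the lower
# held-out error on a small labeled sample from the target domain.
import numpy as np

def mse(weights, features, y):
    """Mean squared error of a linear model on the given data."""
    return float(np.mean((y - features @ weights) ** 2))

def choose_model(w_causal, w_anti, X_holdout, Z_holdout, y_holdout):
    """Return the name of the model with the lower held-out MSE."""
    err_causal = mse(w_causal, np.column_stack([X_holdout]), y_holdout)
    err_anti = mse(w_anti, np.column_stack([X_holdout, Z_holdout]), y_holdout)
    return "causal" if err_causal <= err_anti else "anticausal"
```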

Frustrations

My frustrations this week were minuscule. The biggest of them was debugging my Python code and getting used to NumPy and matplotlib again. I guess this problem will continue until week 4 or 5; by then, I will have written enough code to use them smoothly without going back and forth between my code and the documentation.

Plans for Next Week

Our plan for next week is to read this article: Review of Causal Discovery Methods Based on Graphical Models. Then, we will develop an experimental framework to (a rough sketch of such a pipeline follows the list):

  1. Generate data from a given Structural Causal Model (SCM).
  2. Apply a causal discovery algorithm to generate a mixed graph.
  3. Extract the parents and children of the outcome variable from the graph.
  4. Generate distinct DataFrames for the parents, the outcome variable, and the children.
  5. Learn a predictive model given a specified learning framework.
  6. Analyze the statistical data from the given target datasets.
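
Since this framework has not been built yet, here is only a rough skeleton of how the pipeline might fit together; every function below (generate_from_scm, discover_graph, run_pipeline) is a placeholder name I am assuming, and the discovery step simply returns a hard-coded ground-truth graph instead of a real causal discovery algorithm:

```python
# Rough pipeline skeleton for next week's framework (placeholder functions, not an existing API).
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

def generate_from_scm(n, rng):
    """Sample a toy dataset from a hand-written linear SCM: X -> Y -> Z."""
    X = rng.normal(size=n)
    Y = 1.5 * X + rng.normal(size=n)
    Z = 0.8 * Y + rng.normal(size=n)
    return pd.DataFrame({"X": X, "Y": Y, "Z": Z})

def discover_graph(data):
    """Placeholder for the causal discovery step (e.g. a PC-style algorithm).
    Here we simply return the ground-truth parent/child sets of the outcome Y."""
    return {"parents": ["X"], "children": ["Z"]}

def run_pipeline(n=5_000, outcome="Y", seed=0):
    rng = np.random.default_rng(seed)
    data = generate_from_scm(n, rng)              # 1. generate data from the SCM
    graph = discover_graph(data)                  # 2. causal discovery (placeholder)
    parents = data[graph["parents"]]              # 3./4. split into parents,
    children = data[graph["children"]]            #       outcome, and children
    y = data[outcome]
    model = LinearRegression().fit(parents, y)    # 5. learn a predictive model
    print("training R^2 using parents only:", model.score(parents, y))  # 6. basic statistics
    return model, parents, children

if __name__ == "__main__":
    run_pipeline()
```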

Other things worth sharing

I am very happy and excited about the next steps. I have learned a lot about causal inference, but I guess there is still a lot of work to do. This experience made me wonder about real-world applications, especially in health, behavioral, and social science. I guess after learning about causal inference, I will be able to look into issues that interest me, issues that are often marred by data bias.

Written on June 13, 2021