Friday, May 6, 2022

 

GLS (Generalized Least Squares) Assumptions

Tanaka and Huba (1989) note that the generalized least squares (GLS) method is used when the third or fourth assumption of the classical linear regression model is violated, i.e., when the random components do not have constant variance or are correlated with one another. Such violations typically indicate a heterogeneous population made up of very dissimilar units.

The fundamental distinction between the classical and the generalized regression model lies in the covariance matrix of the random component u. In the classical model this matrix is assumed to be proportional to the identity matrix, so the residuals have constant variance and are uncorrelated. In the generalized model the covariance of the residuals (and hence their variances and correlations) is assumed to be arbitrary, so the covariance matrix may take arbitrary values. This is the essence of the generalization of the classical model.

Applying classical (ordinary) least squares to a generalized regression model still yields consistent and unbiased estimates, but those estimates are no longer efficient. As a result, the parameters of the generalized model are estimated using generalized least squares.
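In R, such a model can be fitted with the gls() function from the nlme package, which allows the residual variances and correlations to be modeled explicitly. A minimal sketch, assuming a hypothetical data frame dat with columns y, x, and time (the variance and correlation structures shown are illustrative choices, not the only possibilities):

# GLS estimator: b = (X' W^-1 X)^-1 X' W^-1 y, where W is the residual covariance matrix
library(nlme)

ols_fit <- lm(y ~ x, data = dat)                      # classical OLS fit, for comparison

gls_fit <- gls(y ~ x, data = dat,
               weights = varPower(form = ~ x),        # residual variance allowed to grow with x
               correlation = corAR1(form = ~ time))   # AR(1) correlation between successive residuals

summary(gls_fit)                                      # coefficients with GLS standard errors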

The first premise of the classical linear regression model is that the explanatory variables x_j (j = 1, ..., m) are deterministic (non-stochastic). Cook and Weisberg (1994) assert that this implies the explanatory variables would take the same values if the regression analysis were repeated; only the values of the random component, and therefore of the dependent variable y, change from sample to sample.

Other assumptions concern the identification of the model's equations:

If all of a model's equations are exactly identified, the model is said to be exactly identified.

If at least one of the model's equations is unidentified, the model is considered unidentified.

If at least one of the model's equations is overidentified, the model is termed overidentified.

An equation is exactly identified if the coefficients of the reduced-form model determine the structural parameter estimates uniquely.

An equation is overidentified if more than one numerical value can be derived for some of its structural parameters.

An equation is unidentified if estimates of its structural parameters cannot be found at all.

Transforming Variables to Linear Form

The structural form of the model describes a real phenomenon or process. Most often, real phenomena or processes are so complex that systems of independent or recursive equations are not suitable for describing them; therefore, systems of simultaneous equations are used instead. The parameters of the structural form are called structural parameters or coefficients. MacKinnon and Magee (1990) find that some of the structural-form equations can be represented as identities, that is, equations of a given form with known parameters.

It is easy to move from the structural form to the so-called reduced form of the model. The reduced form is a system of independent equations in which each current endogenous variable of the model is expressed solely in terms of the predetermined (exogenous and lagged) variables.
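A small numerical illustration in base R (the two-equation system and its coefficients below are hypothetical): if the structural form is written in matrix notation as A y = B x + e, where y collects the current endogenous variables and x the predetermined variables, the reduced-form coefficient matrix is Pi = A^-1 B.

# Hypothetical structural coefficients for a two-equation system
A <- matrix(c( 1.0, -0.4,
              -0.2,  1.0), nrow = 2, byrow = TRUE)    # coefficients of the endogenous variables
B <- matrix(c( 0.5,  0.0,
               0.0,  0.8), nrow = 2, byrow = TRUE)    # coefficients of the predetermined variables

Pi <- solve(A) %*% B   # reduced form: each endogenous variable as a function of the predetermined variables
Pi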

R procedures for Linear Regression

Linear regression in R comes in two flavors: simple and multiple. Below is an example of simple linear regression using R and RStudio.

Simple linear regression requires only a single (atomic) independent variable.

Step A: Load the data into R.

To import each dataset, go to File > Import Dataset > From Text (base) in RStudio.
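The same import can also be done from the console; a minimal sketch, assuming a hypothetical file regression_data.csv that contains the columns var1 and var2:

your_data <- read.csv("regression_data.csv")   # hypothetical file name; adjust the path to your dataset

str(your_data)                                 # confirm that var1 and var2 were imported as numeric columns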

summary(your_data)   # numeric summary of the imported data

Because both of our variables are quantitative, calling this function prints a numeric summary table in the console, giving the minimum, quartiles, median, mean, and maximum values of the independent variable (var1) and the dependent variable (var2).

Step B: Ensure the data assumptions are valid

We may use R to see if our data meets the four fundamental linear regression assumptions.

Observational independence (aka no autocorrelation)

Jajo (2005) notes that, because there is only one independent variable and one dependent variable, there is no need to test for hidden relationships among variables. However, do not use simple linear regression if autocorrelation is present within the variables, for instance, when there are numerous observations of the same study participant; instead, use a structured model such as a linear mixed-effects model.

Use the hist() function to check whether the dependent variable is approximately normally distributed.

hist(your_data$var2)   # histogram of the dependent variable

(Figure: histogram of the dependent variable)

 

It is safe to proceed with the linear regression if the results show a bell curve with more observations in the distribution center and fewer on the tails.

Step C: Construct a linear regression model.

If the data meet the assumptions, fit a linear regression to estimate the association between the independent and dependent variables.
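A minimal sketch, again using the hypothetical your_data frame with var1 as the independent and var2 as the dependent variable:

model <- lm(var2 ~ var1, data = your_data)   # fit the simple linear regression var2 ~ var1

summary(model)                               # coefficients, standard errors, R-squared, and p-values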

To check whether the observed data match the model assumptions, run plot() on the fitted model to produce the standard residual diagnostic plots.
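A minimal sketch, assuming the model object fitted above:

par(mfrow = c(2, 2))   # arrange the four diagnostic plots in one window
plot(model)            # residuals vs. fitted, normal Q-Q, scale-location, residuals vs. leverage
par(mfrow = c(1, 1))   # reset the plotting layout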

Step D: Visualize the results. To visualize the simple linear regression, use the ggplot2 package to plot the data points together with the fitted regression line.
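A minimal sketch using ggplot2 (again assuming the hypothetical your_data frame):

library(ggplot2)   # install.packages("ggplot2") if the package is not yet installed

ggplot(your_data, aes(x = var1, y = var2)) +
  geom_point() +                            # observed data points
  geom_smooth(method = "lm", se = TRUE) +   # fitted regression line with its confidence band
  labs(x = "Independent variable (var1)",
       y = "Dependent variable (var2)",
       title = "Simple linear regression")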

 

 

 

References

 

Cook, R. D., & Weisberg, S. (1994). Transforming a response variable for linearity. Biometrika, 81(4), 731-737.

 

Jajo, N. K. (2005). A review of robust regression and diagnostic procedures in linear regression. Acta Mathematicae Applicatae Sinica, 21(2), 209-224.

 

MacKinnon, J. G., & Magee, L. (1990). Transforming the dependent variable in regression models. International Economic Review, 315-339.

 

Tanaka, J., & Huba, G. (1989). A general coefficient of determination for covariance structure models under arbitrary GLS estimation. British Journal of Mathematical and Statistical Psychology, 42(2), 233-239.

 

 Model First Explainable Artificial Intelligence (XAI)

Overview

Explainable artificial intelligence, or XAI, has recently sparked much interest in the scientific community. This study addresses the problem that complex machines and algorithms fail to provide insight into methods or paradigms, applied before or during data input, that could measure success in terms of the behavior and reasoning behind otherwise unexplainable results. XAI makes aspects of users' and internal systems' decisions more transparent by offering granular explanations. Amparore et al. (2021) assert that these explanations are critical for guaranteeing the algorithm's fairness, identifying issues in the training data, and promoting robust predictions.

On the other hand, the initial process, the technological "first step" that leads to interpretations, is not standardized and has not been rigorously evaluated. Islam et al. (2021) find that existing framework-based methods do an excellent job of achieving explicit, pre-determined outputs on the training data but do not reflect nuanced, implicit design goals; design-level modeling that integrates human components with the input data is extremely rare and has very little supporting literature. This research introduces basic structuring notions of modeling and illustrates how to use social constructs of human input and relevant data to build best practices and identify open challenges.

This study will also articulate why existing pre-emptive XAI paradigms on deep neural networks have significant advantages. Finally, the research will address possible future research directions of pre-emptive XAI structuring and modeling as a direct result of research discovery and findings (Chromik & Schuessler, 2020). 

Why Model First XAI Study Is Needed

XAI assumes that the end-user is given an explanation of the AI system's decision, recommendation, or operation. Fellous et al. (2019) find that few conceptual models or paradigms exist that increase the likelihood of interpretability before or during the development and implementation of the system. A computer analyst, doctor, or lawyer, for example, could be one of the participants. As Booth (2020) suggests, teachers or C-level executives may be expected to explain the system and grasp what has to be fixed, while yet another user may judge the system's fairness to be biased. Each user group can introduce bias, which can lead to preferential or non-preferential interpretations and negatively affect both the information and the conclusion.

Before implementing XAI, a practical model can give preliminary consideration to a system's intended user group, using their background knowledge and content needs by fusing in explainability. Xu et al. (2019) observe that XAI is a well-integrated framework, yet it falls short due to its reliance on "interior" frameworks that address explainability primarily through modeling. In addition, several third-party frameworks are available, each covering a particular atomic scope of XAI, but none addresses the human-interaction, data, and science components collectively through design modeling, and it is these components that determine the level of success of the resulting interpretability (Zeng, 2021).
(Figure 1)

Conceptual Structure-Abstraction and Encapsulation

Many methods have been proposed to evaluate and measure the effectiveness of interpretation; however, as Ehsan and Riedl (2020) find, very few have been devised as model drivers that define interpretability valuation from the onset of XAI implementation. There are, moreover, no general modeling paradigms to measure whether XAI systems will be more interpretable from concept to deployment (Ehsan et al., 2021). From a modeling point of view, metrics could be derived from conceptually represented feelings or behavior of participants, which unlocks patterns of subjective or non-subjective components of a description. Olds et al. (2019) state that abstract objective modeling can represent and communicate dependable and consistent measurements of XAI interpretation valuation. An open research question is how to model the factors that directly affect interpretation so that the valuation of the output, and hence its success or failure, can be ascertained.

Design-first modeling fosters the examination and predictive structuring of the XAI components before applying any evaluation framework, allowing common ground between the human participant and the training data. Gaur et al. (2020) assert that modeling capabilities and knowledge from a human-centered research perspective can enable XAI to go beyond explaining specific XAI systems and help its users determine appropriate trust roles. In the future, model-first XAI designs will eventually play an essential role in the deterministic valuation of the outputs. Islam (2020) examines the XAI principle and states that the behavior of artificial intelligence should be meaningful to humans, but without model-first design, understanding and explaining it in different ways may still become convoluted and cumbersome, especially for questions at different levels. Model-first AI ensures that human practitioners, who possess existing subject-matter knowledge of the injected data, can be factored in before the implementation of XAI, establishing criteria for reliability through patterns. For example, incorporating a lawyer into the data-design model can help determine how interpretations attribute causality to the client's actions and the relative contributions of several court cases, checking whether the defense is consistent with the legal guidelines.

 

 

 

References

 

Amparore, E., Perotti, A., & Bajardi, P. (2021). To trust or not to trust an explanation: Using LEAF to evaluate local linear XAI methods. PeerJ Computer Science. https://doi.org/10.7717/peerj-cs.479

Booth, S. L. (2020). Explainable AI foundations to support human-robot teaching and learning [Massachusetts Institute of Technology].

Chromik, M., & Schuessler, M. (2020). A taxonomy for human subject evaluation of black-box explanations in XAI. ExSS-ATEC@IUI.

Ehsan, U., & Riedl, M. O. (2020). Human-centered explainable AI: Towards a reflective sociotechnical approach. International Conference on Human-Computer Interaction.

Ehsan, U., Wintersberger, P., Liao, Q. V., Mara, M., Streit, M., Wachter, S., Riener, A., & Riedl, M. O. (2021). Operationalizing human-centered perspectives in explainable AI. Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems.

Fellous, J.-M., Sapiro, G., Rossi, A., Mayberg, H., & Ferrante, M. (2019). Explainable artificial intelligence for neuroscience: Behavioral neurostimulation. Frontiers in Neuroscience. https://doi.org/10.3389/fnins.2019.01346

Gaur, M., Desai, A., Faldu, K., & Sheth, A. (2020). Explainable AI using knowledge graphs. ACM CoDS-COMAD Conference.

Islam, M. A., Veal, C., Gouru, Y., & Anderson, D. T. (2021). Attribution modeling for deep morphological neural networks using saliency maps. 2021 International Joint Conference on Neural Networks (IJCNN).

Islam, S. R. (2020). Domain knowledge aided explainable artificial intelligence (Publication Number 27835073) [Ph.D. dissertation, Tennessee Technological University]. ProQuest One Academic. https://coloradotech.idm.oclc.org/login?url=https://www.proquest.com/dissertations-theses/domain-knowledge-aided-explainable-artificial/docview/2411479372/se-2?accountid=144789

Olds, J. L., Khan, M. S., Nayebpour, M., & Koizumi, N. (2019). Explainable AI: A neurally-inspired decision stack framework. arXiv preprint arXiv:1908.10300.

Xu, F., Uszkoreit, H., Du, Y., Fan, W., Zhao, D., & Zhu, J. (2019). Explainable AI: A brief survey on history, research areas, approaches and challenges. CCF International Conference on Natural Language Processing and Chinese Computing.

Zeng, W. (2021). Explainable artificial intelligence for better design of very large scale integrated circuits (Publication Number 28719980) [Ph.D. dissertation, The University of Wisconsin - Madison]. ProQuest One Academic. https://coloradotech.idm.oclc.org/login?url=https://www.proquest.com/dissertations-theses/explainable-artificial-intelligence-better-design/docview/2572576626/se-2?accountid=144789

 

 Sharing a Domain Device with Threads

When numerous threads attempt to access a domain device simultaneously, we enter the realm of thread safety. When more than one thread competes for access to an object at the same time, some threads may observe an invalid state if another thread modifies the resource concurrently. For example, if a robot attempts to access objects on a conveyor belt while another robot is operating on that same conveyor belt, the first thread (robot) may observe an invalid state (Zhai et al., 2012). This situation leads to what is known as a race condition.

What is notable is the ability to use synchronization techniques that prevent thread interference on shared objects through calls to a method that locks the shared resource. In this scenario, no other thread can access the conveyor belt until the current thread (robot) completes its execution (Basile et al., 2002).


A Semaphore-Based, Thread-Safe Approach to the Robot and the Conveyor

We can use semaphores when a single robot needs access to a conveyor system that is shared by multiple robots.

Preventing simultaneous access to the conveyor system is the core issue, and it can be solved using a semaphore. In the robot-conveyor case, the robot must wait to access the conveyor system until the semaphore is in a state that permits it to acquire objects. When the robot accesses the conveyor belt, the semaphore changes state to stop other robots from accessing the conveyor system (Moiseev, 2010). A robot that has completed its conveyor access operations changes the semaphore state to permit another robot to access the conveyor belt.

In the robot-conveyor scenario, a semaphore can be as simple as a counter. A robot waits until the counter grants it permission to proceed, updates the counter when it acquires the conveyor, and updates it again on release, signalling to the other robots that it has finished.
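The counting logic itself can be sketched independently of any particular threading library. The following R functions are a hypothetical, single-process simulation of that protocol (real robots or threads would need atomic acquire/release operations provided by the runtime):

# Hypothetical single-process simulation of a counting semaphore
semaphore <- new.env()
semaphore$permits <- 1                           # one robot may use the conveyor at a time

acquire <- function(robot) {
  while (semaphore$permits < 1) Sys.sleep(0.01)  # wait until a permit becomes available
  semaphore$permits <- semaphore$permits - 1     # take the permit
  cat(robot, "acquired the conveyor\n")
}

release <- function(robot) {
  semaphore$permits <- semaphore$permits + 1     # return the permit, signalling the conveyor is free
  cat(robot, "released the conveyor\n")
}

acquire("robot-1")   # robot-1 takes objects from the conveyor
release("robot-1")   # another robot may now acquire it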

The Garage State Machine

Below is a state machine that depicts the opening and closing operations of a garage door under specific timing and emergency conditions. It also illustrates state changes, with the cause and effect of additional actions triggered either by the remote control or by emergency conditions. Many of the states shown in the diagram refer to the various inputs from devices that affect the garage's behavior. To fully demonstrate the garage's multiple states, I provide a comprehensive state machine diagram that presents a full visualization of the states and of how the garage door transitions between them (Alonso et al., 2008).

The state diagram below (fig. 1) begins with a green start icon and a square titled "garage closed" as the initial state and ends with a square titled "garage closed" as the final state. However, as the diagram shows, many conditions and triggers affect the progression of the garage door events. The behavior of the garage door is illustrated clearly, in order of transition, from one operational state to the next.
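As a textual companion to the diagram, the transitions can also be captured in a lookup table; the sketch below uses base R, and the state and event names are simplified placeholders rather than the exact labels in the figure:

# Simplified, illustrative transition table for the garage-door state machine
transitions <- data.frame(
  state      = c("closed",  "opening", "open",    "closing", "closing"),
  event      = c("remote",  "opened",  "remote",  "closed",  "obstacle"),
  next_state = c("opening", "open",    "closing", "closed",  "opening"),
  stringsAsFactors = FALSE
)

next_state <- function(state, event) {
  row <- transitions[transitions$state == state & transitions$event == event, ]
  if (nrow(row) == 0) return(state)   # events with no defined transition leave the state unchanged
  row$next_state
}

next_state("closed", "remote")      # "opening"
next_state("closing", "obstacle")   # "opening": emergency reversal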



References

 


Alonso, D., Vicente-Chicote, C., Pastor, J. A., & Álvarez, B. (2008). StateML+: From graphical state machine models to thread-safe Ada code. International Conference on Reliable Software Technologies.

Basile, C., Whisnant, K., Kalbarczyk, Z., & Iyer, R. (2002). Loose synchronization of multithreaded replicas. 21st IEEE Symposium on Reliable Distributed Systems.

Moiseev, M. (2010). Defect detection for multithreaded programs with semaphore-based synchronization. 2010 6th Central and Eastern European Software Engineering Conference (CEE-SECR).

Zhai, K., Xu, B., Chan, W., & Tse, T. (2012). CARISMA: A context-sensitive approach to race-condition sample-instance selection for multithreaded applications. Proceedings of the 2012 International Symposium on Software Testing and Analysis.

 
