Saturday, August 6, 2022

  

                                                     Concurrent and Distributed Modeling

 

This post covers thoughts and ideas from the study of concurrent and distributed technology, describing the materials and concepts presented.

Part 1:  Dual-Phase – Total Order Multicasting Diagram

Below is a two-phase total-order multicasting diagram that displays the message delivery sequence in which node S sends a message named m1 to group members G1 and G2, and then sends message m2 to the same members.

      Post application layer messaging:

1)      Message multicast is committed and ordered by the timestamp of commitment (B. I. Sandén, 2011b).

      Phase 1: All messages originate from S and receive an ack with a delivery time suggested by either G1 or G2

      Phase 2: Messages committed with a commit time agreement

1)      The commit time must exceed every suggested delivery time to ensure the integrity of the order for all members (B. I. Sandén, 2011b).
 

      The diagram depicts where the local decision to deliver a message to the application layer lies with respect to the associated group member.

1)      The diagram also shows that a message is delivered only once its agreed commit time precedes those of all pending messages awaiting delivery (B. I. Sandén, 2011b).
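The two phases above can be sketched in code. Below is a minimal Python sketch of the protocol under the stated rules; all names (`Member`, `suggest`, `commit`, `multicast`) are hypothetical illustrations, not the notation used in the course diagram.

```python
# Minimal sketch of two-phase total-order multicast. Phase 1: each group
# member suggests a delivery time. Phase 2: the sender commits the message
# at the maximum suggestion, so every member agrees on one total order.

class Member:
    def __init__(self, name):
        self.name = name
        self.clock = 0          # local logical clock
        self.pending = {}       # message -> (timestamp, committed?)
        self.delivered = []

    def suggest(self, msg):
        # Phase 1: propose a delivery time and hold the message as pending.
        self.clock += 1
        self.pending[msg] = (self.clock, False)
        return self.clock

    def commit(self, msg, commit_time):
        # Phase 2: adopt the agreed commit time, then deliver any committed
        # message whose commit time precedes all other pending messages.
        self.clock = max(self.clock, commit_time)
        self.pending[msg] = (commit_time, True)
        while self.pending:
            m, (t, committed) = min(self.pending.items(), key=lambda kv: kv[1][0])
            if not committed:
                break           # smallest timestamp not yet committed
            self.delivered.append(m)
            del self.pending[m]

def multicast(msg, members):
    # The sender collects the suggested delivery times and commits the
    # message at their maximum, so commit time >= every suggestion.
    commit_time = max(member.suggest(msg) for member in members)
    for member in members:
        member.commit(msg, commit_time)

g1, g2 = Member("G1"), Member("G2")
multicast("m1", [g1, g2])
multicast("m2", [g1, g2])
print(g1.delivered, g2.delivered)  # both deliver ['m1', 'm2'] in the same order
```

Because each member holds a message until its commit time is the smallest pending timestamp, no member can deliver out of the agreed order.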


 

Part 2:  State diagram of a voting algorithm (Maekawa's)

The Fig. 2 diagram below illustrates the scenario in which the node represented by "this" casts votes in response to requests received from other nodes, showing the state transitions and their associated actions (Sanden, 1989).
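The voter's states and transitions can be sketched as a small Python class. This is a simplified, hypothetical illustration of the voting behavior in Maekawa's algorithm (one vote held by "this" node, granted to a single requester at a time, with other requests queued), not a transcription of the Fig. 2 diagram.

```python
# Minimal sketch of the voter state machine for the "this" node in a
# Maekawa-style algorithm: grant the single vote if it is free, queue
# other requests, and pass the vote on when it is released.

from collections import deque

class Voter:
    def __init__(self):
        self.voted_for = None   # requester currently holding this node's vote
        self.waiting = deque()  # requests queued until the vote returns

    def request(self, node):
        # Transition on a received request: grant if the vote is free,
        # otherwise queue the requester.
        if self.voted_for is None:
            self.voted_for = node
            return "granted"
        self.waiting.append(node)
        return "queued"

    def release(self, node):
        # Transition on release: pass the vote to the next waiter, if any.
        assert node == self.voted_for
        self.voted_for = self.waiting.popleft() if self.waiting else None
        return self.voted_for

v = Voter()
print(v.request("A"))   # granted
print(v.request("B"))   # queued
print(v.release("A"))   # B now holds the vote
```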




 Distributed mutual exclusion diagram and essay

 

Threads and Safe Object:  Interaction

            Threading, especially multi-threading, was covered quite well in this course. The concepts gained from the course flow were detailed and broad when studying thread synchronization and the protocols required to access shared resources. In addition, a cornerstone of the study was understanding that a safe object prevents multiple threads from intruding on critical sections by using message-based communication that detects such intrusions (B. I. Sandén, 2011a).

            The most value I gained from the course regarding multi-threading was the use of semaphores and how they employ acquire and release methods. I thought I was reasonably well versed in threading; however, I expanded my knowledge by studying why safe objects are crucial for concurrency (B. I. Sandén, 2009). Students are accustomed to creating applications that employ atomic threaded functions, which affects performance under multi-threaded requirements. The study of safe objects ensures students are aware of, and can identify, when multiple threads access a shared object. This section of the study covered several approaches to safe objects; I have listed a few below.

·         Semaphore: As mentioned previously, it manages access control through acquiring and releasing resources.

·         Test and Set: Prevents concurrent entry into critical sections in multi-threaded operations by atomically setting a flag, typically using 1 or 0 as the control mechanism

·         Reentrant (Locks): Simply put, a reentrant lock allows the thread that already holds it to acquire it again, enabling safe nested access to a resource

·         Preemptive Concurrency: Allows the scheduler to suspend a running thread in favor of a higher-priority one, alleviating overloading and improving responsiveness.

·         Priority Inversion: A hazard in which a low-priority thread holding a lock blocks a higher-priority thread; protocols such as priority inheritance ensure the highest-priority thread can still access the critical section or shared resource.
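The semaphore approach at the top of the list can be sketched as a small safe object. This is a minimal Python illustration of the acquire/release pattern; the class and names are hypothetical, not from the course materials.

```python
# Minimal sketch of a safe object: a semaphore guards the critical section
# so only one thread at a time mutates the shared counter.

import threading

class SafeCounter:
    def __init__(self):
        self._sem = threading.Semaphore(1)  # one thread in the critical section
        self.value = 0

    def increment(self):
        self._sem.acquire()      # enter the critical section
        try:
            self.value += 1      # shared state, mutated safely
        finally:
            self._sem.release()  # leave the critical section

def worker(counter, n):
    for _ in range(n):
        counter.increment()

counter = SafeCounter()
threads = [threading.Thread(target=worker, args=(counter, 1000)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.value)  # 4000: no increments lost to thread interference
```

The `try`/`finally` around the critical section guarantees the semaphore is released even if the update raises, which is exactly the discipline a safe object encapsulates.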

 

Entity Life Modeling:  Event Sequence Models, Deadlocks

            To summarize, discussing event sequencing models in this course first requires explaining how the event sequence model is used to address specific issues. Since the rules and models correlate threads with events, this course was instrumental in teaching the student the core meaning of the event sequencing model – a sight picture (B. Sandén, 1995).

            Event-sequencing starts with learning multi-threaded architecture and how event sequencing models depict a separate flow of execution. This course illustrated applicable concept models to show a multi-threaded system's states, events, and actions.

            Regarding research in this study, the course focused on exploring and explaining the sequential nature of events, with granular insights into when events are expected to occur and how they relate sequentially. In this course, we learned the concept of event sequencing and how to logically model event sequences to solve a common problem domain – deadlocks (B. Sandén, 1995).

            The course covered deadlocks in detail, especially the concept of identifying circularity in an event sequence. The course provided valuable information about wait-chain diagrams depicting the circular phenomena that lead to various deadlocks and their consequences. The important vantage point during the identification of circularity started with understanding how many nodes were required to create a deadlock.
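The circularity check behind a wait-chain diagram can be sketched in a few lines. Below is a minimal Python sketch under a simplifying assumption that each thread waits on at most one other thread; the thread names are hypothetical.

```python
# Minimal sketch of spotting circularity in a wait chain. The map records
# which thread each thread waits on; a cycle means a deadlock is possible.

def has_circular_wait(waits_for):
    for start in waits_for:
        seen, node = set(), start
        while node in waits_for:
            if node in seen:
                return True      # the chain loops back: circular wait
            seen.add(node)
            node = waits_for[node]
    return False

# Two threads waiting on each other form the smallest deadlock cycle;
# a straight chain of waits does not.
print(has_circular_wait({"T1": "T2", "T2": "T1"}))  # True
print(has_circular_wait({"T1": "T2", "T2": "T3"}))  # False
```

The first example also answers the "how many nodes" question: two nodes suffice for the minimal circular wait.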

Digital Clocks:  Distributed Mutual Exclusion Communication

            The course study covering distributed mutual exclusion contrasted multi-threaded architecture with resource sharing across machines. The core instruction aimed to provide insights into distributed concurrency concepts concerning multiple processes simultaneously accessing critical sections (B. I. Sandén, 2011b).

            One of the most exciting aspects of this study was the use of digital clocks and how they systematically manage devices in a distributed network. Specifically, the study detailed the processing of events and their referential integrity with respect to time. The course provided helpful knowledge and a conceptual understanding of how mutual processes are ordered within a logical period (Zhou, Li, Wang, Xue, & Feng, 2018).

            It was essential to understand Lamport's clock algorithm and its interactions with different aspects of distributed mutual exclusion. Conceptually, the core concepts were the fundamental rules that allowed dissection of the algorithm and how it leverages protocols to manage synchronized access in distributed systems (Zhou et al., 2018). It is easy to take away from this study a clear picture of the acquisition and release of locks and the release of critical sections.
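Lamport's two clock rules are compact enough to sketch directly. Below is a minimal Python illustration (the class and process names are hypothetical): tick on every local event, stamp outgoing messages, and on receipt advance to one past the larger of the local and received clocks.

```python
# Minimal sketch of Lamport's logical clock rules, which order events
# across processes without a shared physical clock.

class LamportClock:
    def __init__(self):
        self.time = 0

    def local_event(self):
        self.time += 1
        return self.time

    def send(self):
        # Sending is a local event; the timestamp travels with the message.
        return self.local_event()

    def receive(self, msg_time):
        # Receiving jumps past both the local clock and the message clock,
        # so the receive is always ordered after the send.
        self.time = max(self.time, msg_time) + 1
        return self.time

p, q = LamportClock(), LamportClock()
stamp = p.send()          # p's clock becomes 1
q.local_event()           # q's clock becomes 1 independently
print(q.receive(stamp))   # max(1, 1) + 1 = 2
```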

            As a practitioner in the field who aims to apply the knowledge gained from this course, I will take a different approach toward distributed RFID platforms, concurrency, and mutual exclusion of devices in a network landscape in the RFID field. Further analysis and discovery of how RFID devices communicate with co-located devices in the realm of concurrency are in the future.

Conclusion:  Concurrency, Distributed Systems, and Real-World

            Overall, the course was exciting. My current position as an RFID and IoT company owner allows the utilization of what I have learned as a practitioner in the field (Shull, 2015). Some of the tasks that I have accomplished in laboratories aligned perfectly with the course flow, direction, and goals. This post provided insights into concurrency that will have a genuine, significant impact in the field of distributed systems.


References  

Sanden, B. (1989). An entity-life modeling approach to the design of concurrent software. Communications of the ACM, 32(3), 330-343.

 

Sandén, B. (1995). Resource sharing deadlock prevention. In Tutorial proceedings on TRI-Ada'91: Ada's role in global markets: solutions for a complex changing world (pp. 70-103).

 

Sandén, B. I. (2009). Multi-threading. Colorado Technical University, Colorado Springs.

 

Sandén, B. I. (2011a). Design of multi-threaded software: The entity-life modeling approach: John Wiley & Sons.

 

Sandén, B. I. (2011b). Simultaneous Exclusive Access to Multiple Resources.

 

Shull, C. L. (2015). Design-first distributed real-time RFID tracking system. In: Google Patents.

 

Zhou, Q., Li, L., Wang, L., Xue, J., & Feng, X. (2018). May-happen-in-parallel analysis with static vector clocks. Paper presented at the Proceedings of the 2018 International Symposium on Code Generation and Optimization.

 

Friday, May 6, 2022

 

GLS (Generalized Least Squares) Model Assumptions

                Tanaka and Huba (1989) note that the generalized least squares (GLS) method is used when the third or fourth assumption of the normal linear regression model is violated, i.e., when the random components do not have constant variance or are correlated. This is indicative of a heterogeneous population made up of highly dissimilar units.

                The fundamental distinction between the normal and the generalized regression model is the covariance matrix of the random component. In the typical model, this matrix is assumed to be proportional to the identity matrix. In the generalized model, the residuals' covariance (hence their variances and correlations) may take arbitrary values. This is the essence of the generalization of the normal model.

                In a generalized regression model, the classic (conventional) OLS estimates remain consistent and unbiased; however, they become inefficient. As a result, the parameters of the generalized model are estimated using generalized least squares.
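The GLS estimator itself is a one-line formula, beta = (X'Ω⁻¹X)⁻¹X'Ω⁻¹y, where Ω is the residual covariance matrix. Below is a minimal numpy sketch with hypothetical data; when Ω is the identity matrix, the formula reduces to ordinary least squares.

```python
# Minimal sketch of the GLS estimator: beta = (X' W X)^-1 X' W y with
# W = Omega^-1, where Omega is the (arbitrary) residual covariance matrix.

import numpy as np

def gls(X, y, omega):
    w = np.linalg.inv(omega)                 # Omega^-1 weights the observations
    return np.linalg.solve(X.T @ w @ X, X.T @ w @ y)

# Tiny illustration: a heteroscedastic sample where later points are noisier,
# so Omega is diagonal with unequal residual variances.
X = np.column_stack([np.ones(4), np.array([1.0, 2.0, 3.0, 4.0])])
y = np.array([2.1, 4.0, 6.2, 7.9])
omega = np.diag([1.0, 1.0, 4.0, 4.0])
print(gls(X, y, omega))                      # intercept and slope estimates
```

Down-weighting the noisier observations through Ω⁻¹ is precisely what restores efficiency relative to plain OLS.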

                A normal linear regression model's initial premise is that the explanatory variables x_j (j = 1, …, m) are deterministic (non-stochastic). Cook and Weisberg (1994) assert that this implies the explanatory variables would remain fixed if the regression analysis were repeated, while the dependent variable y changes as the random component values vary in each new sample.

Other assumptions are:

         If all of a model's equations are correctly identified, it is said to be accurately identified.

         If there is at least one unidentified model among the model's equations, the model is considered unidentified.

         If there is at least one overidentified model among the model's equations, the model is termed overidentified.

         An equation is said to be exactly identified if the structural parameter estimates can be found uniquely from the coefficients of the reduced model.

         An equation is overidentified if more than one numerical value can be derived for some structural parameters.

         If estimations of an equation's structural parameters can't be found, it's considered unidentified.

Transforming Variable to Linear

                The structural form of the model describes a real phenomenon or process. Most often, natural phenomena or processes are so complex that systems of independent or recursive equations are not suitable for their description. Therefore, they resort to systems of simultaneous equations. The parameters of the structural form are called structural parameters or coefficients. MacKinnon and Magee (1990) find that some of the structural form equations can be represented in the form of identities, that is, equations of a given form with known parameters.

It is easy to move from the structural form to the so-called reduced form of the model. The reduced form of the model is a system of independent equations in which all the current endogenous variables of the model are predefined.
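The move from structural to reduced form can be shown with a tiny worked example. Below is a minimal numpy sketch with hypothetical coefficients: writing the structural system as A·y = B·x, the reduced form y = Π·x with Π = A⁻¹B expresses every endogenous variable through the predetermined variables alone.

```python
# Minimal sketch of passing from the structural form A @ y = B @ x to the
# reduced form y = Pi @ x, with Pi = A^-1 @ B (hypothetical coefficients).

import numpy as np

# Structural equations, e.g. y1 = 0.5*y2 + 2*x1 and y2 = 0.25*y1 + x2:
A = np.array([[1.0,  -0.5],
              [-0.25, 1.0]])   # coefficients on the endogenous variables
B = np.array([[2.0, 0.0],
              [0.0, 1.0]])     # coefficients on the predetermined variables

Pi = np.linalg.solve(A, B)     # reduced-form coefficient matrix
x = np.array([1.0, 1.0])
y = Pi @ x                     # endogenous values implied by the x's
print(Pi)
print(y)
```

Each row of Π is one independent reduced-form equation, which is why the reduced form is a system of independent equations in the predetermined variables.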

R procedures for Linear Regression

Linear regression in R comes in two flavors: simple and multiple. Below is an example of simple linear regression using R and RStudio:

Simple linear regression requires only a single (atomic) independent variable.

Step A: R should be used to load the data.

To import each dataset in RStudio, go to File > Import dataset > From Text (base). Then summarize it:

summary(imported_data)

When we call this function, we get a table in our console with a numeric summary of the data because both of our variables are quantitative. This gives us the minimum, median, mean, and maximum values of the independent variable (var1) and the dependent variable (var2).

Step B: Ensure the data assumptions are valid

We may use R to see if our data meets the four fundamental linear regression assumptions.

Observational independence (aka no autocorrelation)

                Jajo (2005) finds that there is no need to evaluate for hidden relationships among variables because there is only one independent variable and one dependent variable. Do not use a simple linear regression if autocorrelation is required within variables, for instance, numerous observations of the same study participant. Instead, use a structured model, such as a linear mixed-effects model.

Use the hist() function to see if the dependent variable has a normal distribution:

hist(imported_data$var2)

Histogram of simple regression

 

It is safe to proceed with the linear regression if the results show a bell curve with more observations in the distribution center and fewer on the tails.

Step C: Construct a linear regression model.

Perform a linear regression analysis to evaluate the association between the independent and dependent variables if the data meet the assumptions:

model <- lm(var2 ~ var1, data = imported_data)

To see whether the observed data match the model assumptions, run plot(model).

Step D: Visualize the results. To see how the results of the simple linear regression can be visualized, use the ggplot2 package and plot the data points, together with the fitted line, on a graph.
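For readers without R at hand, the same four steps can be sketched as a Python/numpy analogue. The data and the var1/var2 naming below are hypothetical, mirroring the example above; this is a sketch, not the course's R procedure.

```python
# A Python/numpy analogue of the four R steps: load, summarize, fit
# y = a + b*x by least squares, then check residuals in place of plot().

import numpy as np

# Step A: "load" the data (generated in place here).
var1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # independent variable
var2 = np.array([2.2, 3.9, 6.1, 8.0, 9.8])   # dependent variable

# Step B: numeric summary, akin to R's summary().
print("min/median/mean/max:", var2.min(), np.median(var2), var2.mean(), var2.max())

# Step C: fit the simple regression; lstsq returns intercept and slope.
X = np.column_stack([np.ones_like(var1), var1])
(intercept, slope), *_ = np.linalg.lstsq(X, var2, rcond=None)
print("intercept:", round(float(intercept), 2), "slope:", round(float(slope), 2))

# Step D: residuals stand in for the diagnostic plot; small residuals
# support the linearity assumption.
residuals = var2 - (intercept + slope * var1)
print("max abs residual:", round(float(np.abs(residuals).max()), 2))
```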

 

 

 

References

 

Cook, R. D., & Weisberg, S. (1994). Transforming a response variable for linearity. Biometrika, 81(4), 731-737.

 

Jajo, N. K. (2005). A review of robust regression and diagnostic procedures in linear regression. Acta Mathematicae Applicatae Sinica, 21(2), 209-224.

 

MacKinnon, J. G., & Magee, L. (1990). Transforming the dependent variable in regression models. International Economic Review, 315-339.

 

Tanaka, J., & Huba, G. (1989). A general coefficient of determination for covariance structure models under arbitrary GLS estimation. British Journal of Mathematical and Statistical Psychology, 42(2), 233-239.

 

 Model First Explainable Artificial Intelligence (XAI)

Overview

Explainable artificial intelligence, or XAI, has recently sparked much interest in the scientific community. This study addresses the problem that complicated machines and algorithms fail to provide insight into methods or paradigms, applied before or during data input, for measuring success when results are otherwise unexplainable. XAI makes aspects of users' and internal systems' decisions more transparent by offering granular explanations. Amparore et al. (2021) assert that these explanations are critical for guaranteeing the algorithm's fairness, detecting recursive training-data issues, and promoting robust algorithms for predictions.

On the other hand, the initial process or technological "first step" that leads to interpretations is not standardized and has not been rigorously evaluated. Islam et al. (2021) find that initial framework-based methods do an excellent job of achieving explicit, pre-determined outputs on the training data but do not reflect the nuanced, implicit desires of design modeling; work that integrates human components with input data is extremely rare, with very little supporting literature. This research introduces basic structuring notions of modeling and illustrates how to utilize social constructs of human input and relevant data to build best practices and discover open challenges.

This study will also articulate why existing pre-emptive XAI paradigms on deep neural networks have significant advantages. Finally, the research will address possible future research directions of pre-emptive XAI structuring and modeling as a direct result of research discovery and findings (Chromik & Schuessler, 2020). 

Why Model First XAI Study Is Needed

XAI assumes that the end-user is given an explanation based on the AI system's decision, recommendation, or operation. Fellous et al. (2019) find that few conceptual models or paradigms increase the likelihood of interpretability before or during the development and implementation of the system. A computer analyst, doctor, or lawyer, for example, could be one of the participants. As Booth (2020) suggests, teachers or C-level executives may be expected to explain the system and grasp what has to be fixed, while yet another user could be judged to be biased against the system's fairness. Each user group can introduce bias, which can lead to preferential or non-preferential interpretations that negatively impact the information and the conclusion.

            Before implementing XAI, a practical model can give preliminary consideration to a system's intended user group, using their background knowledge and needs for the content by fusing in explainability. Xu et al. (2019) instruct us that XAI is a well-integrated framework, yet it falls short due to its reliance on "interior" frameworks that pursue explainability primarily through modeling. In addition, several third-party frameworks are available, each covering a particular atomic scope of XAI, but none addresses the human-interaction, data, or science components collectively through design modeling, which determines the level of success of the resulting interpretability (Zeng, 2021).
Figure 1 Blog Post

Conceptual Structure-Abstraction and Encapsulation

            Many methods have been proposed to evaluate and measure the effectiveness of interpretation; however, as Ehsan and Riedl (2020) find, very few have been devised as model drivers that define interpretability valuation from the onset of an XAI implementation. Moreover, there are no general modeling paradigms to measure whether XAI systems will be more interpretable from concept to deployment (Ehsan et al., 2021). From a modeling point of view, metrics could be derived from conceptually represented feelings or behavior of participants, which unlocks patterns of subjective or non-subjective components of a description. Olds et al. (2019) state that abstract objective modeling can represent and communicate dependable and consistent measurements of XAI interpretation valuation. A research question persists in modeling the factors that directly affect interpretation so as to ascertain the output valuation that drives success or failure.

            Design-first modeling fosters the examination and predictive structuring of the XAI component before applying any evaluation framework, allowing common ground between the human participant and the training data. Gaur et al. (2020) assert that modeling capabilities and knowledge from a human-centered research perspective can enable XAI to go beyond explaining specific XAI systems and help its users determine appropriate trust roles. In the future, model-first XAI designs will eventually play an essential role in the deterministic valuation of the outputs. Islam (2020) examines the XAI principle and states that the behavior of artificial intelligence should be meaningful to humans, but without model-first design, understanding and explaining in different ways may still become convoluted and cumbersome, especially for questions at different levels (Islam, 2020). Model-first AI ensures human practitioners can be factored in before the implementation of XAI, establishing criteria for reliability through patterns, because practitioners possess existing subject-matter knowledge of the injected data. For example, incorporating a lawyer into the data-design characteristics model can determine the level of causality in interpreting his client's actions and the relative contributions of several court cases, to check whether his defense is conducive to the legal guidelines.

 

 

 

References

 

Amparore, E., Perotti, A., & Bajardi, P. (2021). To trust or not to trust an explanation: Using LEAF to evaluate local linear XAI methods. PeerJ Computer Science. https://doi.org/10.7717/peerj-cs.479

Booth, S. L. (2020). Explainable AI foundations to support human-robot teaching and learning [Massachusetts Institute of Technology].

Chromik, M., & Schuessler, M. (2020). A Taxonomy for Human Subject Evaluation of Black-Box Explanations in XAI. ExSS-ATEC@ IUI,

Ehsan, U., & Riedl, M. O. (2020). Human-centered explainable ai: Towards a reflective sociotechnical approach. International Conference on Human-Computer Interaction,

Ehsan, U., Wintersberger, P., Liao, Q. V., Mara, M., Streit, M., Wachter, S., Riener, A., & Riedl, M. O. (2021). Operationalizing Human-Centered Perspectives in Explainable AI. Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems,

Fellous, J.-M., Sapiro, G., Rossi, A., Mayberg, H., & Ferrante, M. (2019). Explainable Artificial Intelligence for Neuroscience: Behavioral Neurostimulation. Frontiers in Neuroscience. https://doi.org/10.3389/fnins.2019.01346

Gaur, M., Desai, A., Faldu, K., & Sheth, A. (2020). Explainable AI Using Knowledge Graphs. ACM CoDS-COMAD Conference,

Islam, M. A., Veal, C., Gouru, Y., & Anderson, D. T. (2021). Attribution Modeling for Deep Morphological Neural Networks using Saliency Maps. 2021 International Joint Conference on Neural Networks (IJCNN),

Islam, S. R. (2020). Domain Knowledge Aided Explainable Artificial Intelligence (Publication Number 27835073) [Ph.D., Tennessee Technological University]. ProQuest One Academic. Ann Arbor. https://coloradotech.idm.oclc.org/login?url=https://www.proquest.com/dissertations-theses/domain-knowledge-aided-explainable-artificial/docview/2411479372/se-2?accountid=144789

Olds, J. L., Khan, M. S., Nayebpour, M., & Koizumi, N. (2019). Explainable ai: A neurally-inspired decision stack framework. arXiv preprint arXiv:1908.10300.

Xu, F., Uszkoreit, H., Du, Y., Fan, W., Zhao, D., & Zhu, J. (2019). Explainable AI: A brief survey on history, research areas, approaches and challenges. CCF international conference on natural language processing and Chinese computing,

Zeng, W. (2021). Explainable Artificial Intelligence for Better Design of Very Large Scale Integrated Circuits (Publication Number 28719980) [Ph.D., The University of Wisconsin - Madison]. ProQuest One Academic. Ann Arbor. https://coloradotech.idm.oclc.org/login?url=https://www.proquest.com/dissertations-theses/explainable-artificial-intelligence-better-design/docview/2572576626/se-2?accountid=144789

 
