Social computing is an area of computer science that is concerned with the intersection of social behavior and computational systems. It is based on creating or recreating social conventions and social contexts through the use of software and technology. Thus, blogs, email, instant messaging, social network services, wikis, social bookmarking and other instances of what is often called social software illustrate ideas from social computing.
Sociological Foundations I
Social Translucence: An Approach to Designing Systems that Support Social Processes
In their first paper, “Social Translucence: An Approach to Designing Systems that Support Social Processes”, Erickson and Kellogg describe how to design digital systems that support communication and collaboration among large groups of people. They propose that such systems should incorporate three properties: visibility, awareness, and accountability. By making socially salient cues visible, these systems let people draw on their social experience and expertise to structure their interactions. In essence, an architecture that supports these social needs provides social translucence, mirroring the translucence of face-to-face society.
Designing Social Translucence Over Social Networks
In his second paper, “Designing Social Translucence Over Social Networks”, Gilbert develops a theory of social translucence and builds a system called Link Different. The design allows relationships with peers to be calibrated automatically, in keeping with the principle of social translucence. These examples show that people have a real need for social translucence in their lives, and that participation in online communities differs from participation in real-world collective settings. In everyday life, humans are accustomed to using social cues to guide their decisions and actions. Providing such cues in virtual communities, as in the real world, can help people better understand the situations they face, ease their decision-making by giving them more informed choices, encourage them to participate, and let them schedule their personal and group activities more effectively.
Sociological Foundations II
Predicting Tie Strength With Social Media
In the article “Predicting Tie Strength with Social Media”, Gilbert and Karahalios use data from a social networking site to estimate how socially close friends are to each other. They observe that social media sites today do not differentiate among relationships: two users are either friends or they are not, even though among friends there is still a clear distinction between close and distant ties. The strength of people's connections can be estimated from friend lists and interaction histories, so in this work the authors analyze interaction records between friends to infer the strength of their relationships. They model tie strength along seven dimensions of relationship closeness (such as intensity, intimacy, and duration) as a linear combination of predictor variables drawn from interaction data and network structure, obtain quantitative estimates statistically, and compare them with the results of a user survey. The experimental results show that their predictive model is accurate.
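As a rough illustration of the kind of model the paper describes, the sketch below scores a tie as a linear combination of interaction features. The feature names and weights are hypothetical stand-ins, not the authors' actual predictive variables.

```python
# Illustrative sketch (not the authors' actual model): tie strength as a
# linear combination of interaction features. Weights are invented.

def tie_strength(features, weights, bias=0.0):
    """Linear combination of predictor variables -> tie-strength score."""
    return bias + sum(weights[name] * value for name, value in features.items())

# Hypothetical weights: more messages and mutual friends -> stronger tie;
# a long gap since last contact -> weaker tie.
WEIGHTS = {
    "wall_posts": 0.4,
    "mutual_friends": 0.3,
    "days_since_contact": -0.2,
}

close_friend = {"wall_posts": 30, "mutual_friends": 40, "days_since_contact": 2}
acquaintance = {"wall_posts": 1, "mutual_friends": 3, "days_since_contact": 200}

assert tie_strength(close_friend, WEIGHTS) > tie_strength(acquaintance, WEIGHTS)
```

In the actual study the weights were fit against survey responses rather than chosen by hand, which is what lets the model generalize to unseen friendships.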
The Strength of Weak Ties
In the second article, “The Strength of Weak Ties”, Granovetter proposes the theory of strong and weak ties. He argues that interpersonal networks can be divided into strong-tie and weak-tie networks. Strong ties arise within largely homogeneous social circles: people are closely related to one another, and a solid emotional bond sustains the relationship. Weak ties, by contrast, connect heterogeneous parts of an individual's social network. Granovetter believes that the strength of a relationship determines the nature of the information obtained through it and the likelihood that the individual will achieve his or her goals. In his survey, American society appears to be a weak-tie society: the more people one knows from all walks of life, the easier it is to get things done, while those with more fixed and narrow circles of interaction are less likely to get things done.
Sociological Foundations III
An Experimental Study of the Small World Problem
In his first paper, “An Experimental Study of the Small World Problem”, Milgram explored how likely it is that any two people in the world are connected to each other. Rather than looking for a direct path between two people, he focused on the chain of intermediaries that links them. In the experiment, he examined how factors such as the choice of the first person in the chain and the race and ethnicity of participants affected the relationship chains. The results confirmed that two randomly chosen people need only a few intermediaries between them to be connected. The method can be applied at scale to social structures: social networks expand through the interconnections between acquaintances.
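The chain-counting idea can be sketched as a breadth-first search over an acquaintance graph; the toy graph and names below are invented purely for illustration.

```python
# A minimal "small world" sketch: breadth-first search over a toy
# acquaintance graph to measure the shortest chain between two people.
from collections import deque

def chain_length(graph, start, target):
    """Number of links on the shortest acquaintance chain (BFS)."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        person, dist = queue.popleft()
        if person == target:
            return dist
        for friend in graph.get(person, ()):
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, dist + 1))
    return None  # no chain exists

GRAPH = {
    "Alice": ["Bob"], "Bob": ["Alice", "Carol"],
    "Carol": ["Bob", "Dave"], "Dave": ["Carol"],
}
# Alice -> Bob -> Carol -> Dave: 3 links, i.e. 2 intermediaries.
assert chain_length(GRAPH, "Alice", "Dave") == 3
```

Milgram's insight was that in real social networks this chain length stays remarkably short even as the population grows enormous.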
Structural Holes and Good Ideas
In his second paper, “Structural Holes and Good Ideas”, Burt investigated whether there is any connection between people's positions in social networks and the quality of their ideas. He found that the concept of structural holes provides evidence of such a connection. In a fully connected social network, everyone is directly connected to everyone else, so all kinds of information can spread from one person to another; such a network has no structural holes. In the other, more common type of network, not everyone is directly connected to everyone else. In that case there is a structural hole, i.e., a gap in the structure, and the flow of information in the network is constrained by it: the information available to each person is no longer the same. Burt found that people located around structural holes have a tremendous advantage. This advantage can often be attributed to the varied information they are exposed to, which gives them a broader imagination than others. The question boils down to the extent to which we have access to information, opinions, and perspectives that are broad and diverse.
Identity
Identity and Deception in the Virtual Community
In her first paper, “Identity and Deception in the Virtual Community”, Judith Donath examines posts on Usenet to reveal the ways identity is built in online interactions. She argues that the first way to form an identity is to create an account name. Publishing articles online also helps to build identity, because writing style, the content expressed, and the skillful use of abbreviations and code words can all signal identity. In addition, the signature included at the end of each article is an essential means of establishing identity. The signature is a means members use to demonstrate their interests, opinions, and careers; it also establishes online credibility and accountability, for example by providing the name of the company one works for and one's position in it. Finally, a link in the signature to the author's home page is another means of establishing identity: by pointing to a detailed document there, the author can establish his or her identity in more detail.
4chan and /b/: An Analysis of Anonymity and Ephemerality in a Large Online Community
In the second paper, “4chan and /b/: An Analysis of Anonymity and Ephemerality in a Large Online Community”, the authors present two studies of ephemerality and anonymity in a large online community. The “Random” board of 4chan, known as “/b/”, was the first board created and receives the most traffic. Researchers and practitioners often treat persistent user identity and persistent data as core tools in designing an online community, yet /b/ thrives despite almost complete anonymity and extreme ephemerality, with activity dominated by the exchange of interesting images and links. The authors found that most threads spend only about five seconds on the first page before being pushed off. They describe the alternative mechanisms that /b/ participants use to establish status and build interactions.
Disclosure and Regulation
The Presentation of Self in the Age of Social Media: Distinguishing Performances and Exhibitions Online
In his first paper, “The Presentation of Self in the Age of Social Media”, Hogan distinguishes between performances and exhibitions online. Briefly, Hogan argues that the significant difference between online self-presentation and face-to-face self-presentation is that online interaction is not limited to the visitor's time, place, and social context. This makes the audience of communication software unpredictable. Thus, Hogan argues that in the age of online media, the role of the person in the social scene has shifted from that of a performer to that of a curator: people show only the sides of themselves they are most comfortable with, presenting them in a disciplined manner to avoid a situational breakdown when encountering unknown visitors.
Anonymity and Self-Disclosure on Weblogs
In their second paper, “Anonymity and Self-Disclosure on Weblogs”, Qian and Scott found that people reveal a great deal of personal information, especially negative information about friends, employers, and others, in the blogs they post online. This often leads to problems in building constructive interpersonal relationships within and across cultures. More broadly, the widespread use of new media may contribute to the loss of certain local cultures. Even in the same period, for various reasons, nation-states are not at the same stage of development, while new media are the product of highly developed productivity, and human beings tend to favor what is more advanced. New media may therefore dampen the public's enthusiasm for their own traditions: people may come to reject their own culture, believing that it lags behind the development of society, and thus, to some extent, lose the culture of their people.
Social Capital and Influence
Everyone’s an Influencer: Quantifying Influence on Twitter
Duncan Watts is an academic at the forefront of computational social science, and his team analyzed 74 million messages from 1.6 million Twitter accounts. They found that, for a publicity campaign, seeding many less-influential accounts can produce greater total spread than seeding a small number of highly influential accounts. Based on this study, Watts argues that celebrities do not drive trends; rather, trends drive celebrities. Nevertheless, popular trends are difficult to quantify and record, so history has preserved only the footprints of celebrities. If influence is to be measured at all, every user contributes to influence on the Internet, even when no one notices or pays attention. The authors obtained these conclusions by fitting a regression tree model for training and prediction. At an objective level, the focus should be more on the overall distribution of influence than on any particular or exceptional individual.
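The cost-effectiveness argument can be illustrated with a back-of-the-envelope calculation; all prices and cascade sizes below are invented, not figures from the study.

```python
# Hypothetical comparison in the spirit of the paper's finding: under a
# fixed budget, many ordinary accounts can out-reach a few celebrities.
# Every number here is made up for illustration.

def total_reach(cost_per_account, avg_cascade_size, budget):
    """Expected reach when spending `budget` on accounts of one type."""
    n_accounts = budget // cost_per_account
    return n_accounts * avg_cascade_size

BUDGET = 10_000
celebrity = total_reach(cost_per_account=5_000, avg_cascade_size=2_000, budget=BUDGET)
ordinary = total_reach(cost_per_account=10, avg_cascade_size=5, budget=BUDGET)

assert ordinary > celebrity  # 5,000 vs 4,000 under these assumed numbers
```

The paper's actual claim is probabilistic: average cascade sizes are small and highly variable, so hiring many cheap "ordinary influencers" is often the more reliable strategy per unit cost.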
The Benefits of Facebook “Friends:” Social Capital and College Students’ Use of Online Social Network Sites
The second paper explores why current college students use Facebook.com, and in particular whether they use it to make new friends or to communicate and connect online with existing offline friends. The evidence in the paper comes from statistics and analysis of a survey of college students at Michigan State University. The benefits of Facebook for college students are described in detail: for example, Facebook provides greater benefit to those with low self-esteem and low life satisfaction, who experience a strong contrast between real life and the Internet. The authors also conclude that Facebook's bridging social capital indicators are strongly connected to offline reality. By contrast, general Internet use does not predict the accumulation of social capital, whereas socially intensive Facebook use allows interpersonal and social capital to accumulate. As environments change and interpersonal relations grow more complex, social networks in which real and virtual life intersect will receive increasing attention.
Social System Design I
Making Sense of Group Chat through Collaborative Tagging and Summarization
Before reading this paper, I had a familiar feeling about social groups: group chat is tough to use! If you miss a message, you often have to scroll up through many messages, sometimes reading hundreds of them one by one, worried about missing some critical information. To address the problem that everyone in a group chat constantly struggles to keep up with the information, this article proposes features that let people summarize, tag, and organize messages, using the collective effort of group members to keep others from missing key messages. In particular, the authors developed Tilda, a prototype system designed for Slack that uses tags left by participants. The resulting summaries become markers that can be edited, referenced, and posted to specific channels, so users can keep track of the content that interests them. The authors show that Tilda lets teams and individuals adapt the tool to their own needs. They also make a special note of the direction of automation, aiming for automatic summarization of chat content, and they envision the future of the work-chat model as an integration of these various functions.
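A toy sketch of the collaborative-tagging idea (not Tilda's actual implementation): participants attach tags to chat messages, and a digest groups messages by tag so latecomers can catch up. The tags and messages below are invented.

```python
# Minimal collaborative-tagging sketch: group tagged chat messages into a
# tag -> [message text] digest, in the spirit of Tilda's summaries.
from collections import defaultdict

def summarize(messages):
    """Build a digest mapping each tag to the messages filed under it."""
    digest = defaultdict(list)
    for msg in messages:
        for tag in msg.get("tags", []):
            digest[tag].append(msg["text"])
    return dict(digest)

chat = [
    {"text": "Deploy is at 5pm.", "tags": ["decision"]},
    {"text": "Who owns the rollback plan?", "tags": ["question"]},
    {"text": "lunch?", "tags": []},  # untagged chatter drops out
]
summary = summarize(chat)
assert summary == {"decision": ["Deploy is at 5pm."],
                   "question": ["Who owns the rollback plan?"]}
```

The design point this captures is that untagged chatter disappears from the digest, so the summary stays short even when the chat is noisy.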
Soylent: A Word Processor with A Crowd Inside
This article describes an architecture and interaction patterns for directly integrating human contributions obtained from crowdsourcing into a user interface. Through “The Human Macro”, Soylent users can make arbitrary work requests in natural language. The article gives several application scenarios, including shortening text while preserving its quality and proofreading articles. It is not just about typos and spelling errors: what surprised me was the ability to identify errors that automated checkers miss. According to the article, this identification is done through the Find-Fix-Verify pattern, a workflow visualized in three stages: finding problems, fixing them, and verifying the fixes. I think of this as an open “human macro”, a way to go beyond the previous Wizard of Oz approach. It takes the idea of natural language processing, incorporates different processing flows, and then partially automates the interaction model.
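The Find-Fix-Verify pattern can be sketched with plain functions standing in for crowd workers; this is a simplified illustration of the three-stage workflow, not Soylent's real task interface.

```python
# Simplified Find-Fix-Verify sketch: one crowd stage finds problem spans,
# an independent stage proposes fixes, and a third stage votes on which
# fixes to keep. "Workers" here are plain functions, not real people.

def find_stage(text, finders):
    """Each finder flags problem words; keep words flagged by a majority."""
    flagged = []
    for word in text.split():
        votes = sum(1 for f in finders if f(word))
        if votes > len(finders) // 2:
            flagged.append(word)
    return flagged

def fix_stage(problems, fixer):
    """Propose a replacement for each flagged word."""
    return {word: fixer(word) for word in problems}

def verify_stage(fixes, verifiers):
    """Keep only fixes that a majority of verifiers accept."""
    return {w: fix for w, fix in fixes.items()
            if sum(1 for v in verifiers if v(w, fix)) > len(verifiers) // 2}

# Toy crowd: flag the misspelling "teh", fix it, verify the fix.
finders = [lambda w: w == "teh"] * 3
fixes = fix_stage(find_stage("fix teh typo", finders), lambda w: "the")
accepted = verify_stage(fixes, [lambda w, f: f == "the"] * 3)
assert accepted == {"teh": "the"}
```

Splitting the work this way is the point of the pattern: independent find and verify stages keep any single lazy or over-eager worker from corrupting the result.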
Social System Design II
Why CSCW Applications Fail: Problems in the Design and Evaluation of Organizational Interfaces
In the first article, “Why CSCW Applications Fail: Problems in the Design and Evaluation of Organizational Interfaces”, Grudin analyzes several computer-supported collaborative applications and asks why the resulting systems so often fail, for example because of the disparity between those who must do additional work and those who receive the benefit, and because organizational interfaces are hard to evaluate. Academically, many papers have been published on computer-supported cooperative work and computer-supported collaborative learning, and with the spread of parallel computing engineering the CSCW field has grown rapidly. People are looking to incorporate these design concepts to solve traditional structural design problems, for example by guiding the schematic design of structures with supported collaborative work systems, making material and time use optimal. It therefore becomes essential to combine CSCW techniques to develop computer-supported collaborative design systems, and to study related products that support collaborative design, so that automated structural solutions can satisfy all requirements of the specification.
An Open, Social Microcalendar for the Enterprise: Timely?
The second article, “An Open, Social Microcalendar for the Enterprise: Timely?”, presents the system design and rationale for a new social microcalendar called Timely. The authors examine calendaring in the context of new social media, focusing on open access, social interaction, and discoverability, and combine these with a human-centered sharing model. They evaluate groupware calendar systems for individual and group time management and scheduling, drawing on enterprise social software and family calendars for comparison, and balance the system's timeliness, open access, and discoverability. The authors' core concept for the calendar system is user-centric event sharing, supported by structured analysis of the many data sets collected. I think the authors' idea is a practice well suited to social media applications. It still lacks some integration tools and language parsing, but that does not diminish its value as an open piece of work.
Language Analysis I
Diurnal and Seasonal Mood Vary with Work, Sleep, and Daylength Across Diverse Cultures
In the first article, “Diurnal and Seasonal Mood Vary with Work, Sleep, and Daylength Across Diverse Cultures”, the researchers examined 2.4 million Twitter users from 84 countries, analyzing more than 400 posts per person. A standard procedure was used to analyze the words they used: positive words included “excellent” and “agree”, while negative words included “annoy” and “afraid”. The results showed that people from different cultures have similar daily mood rhythms: mood is high in the morning, declines during the day, and rises again in the evening around bedtime. Moreover, seasonal mood changes were associated with changes in day length. In addition, most users were in a better mood on Saturday and Sunday, which may be related to lower work stress and more sufficient sleep. However, despite the large sample, its representativeness is not sufficient: for example, the analysis spans age groups unevenly, since younger people are more active on the Internet, and users post for a wide variety of reasons.
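The lexicon method can be sketched as simple word counting; the tiny positive and negative word lists below are illustrative stand-ins, not the study's actual lexicon.

```python
# Minimal lexicon-based mood scoring sketch: score a post by positive
# minus negative word matches. Word lists are invented for illustration.
POSITIVE = {"excellent", "agree", "happy", "great"}
NEGATIVE = {"annoy", "afraid", "sad", "angry"}

def mood_score(post):
    """Count positive-word hits minus negative-word hits in one post."""
    words = [w.strip(".,!?") for w in post.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

assert mood_score("I agree, excellent plan") == 2
assert mood_score("these delays annoy me and I am afraid") == -2
```

Averaging such scores over millions of posts, bucketed by local hour of day, is what produces the diurnal mood curves the study reports.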
Personality, Gender, and Age in the Language of Social Media: The Open-Vocabulary Approach
In the second article, “Personality, Gender, and Age in the Language of Social Media: The Open-Vocabulary Approach”, Schwartz uses topic features extracted with LDA to model the Big Five personality traits. Several associations were found between personality traits and topic use: for example, emotionally stable people mentioned sports and life activities more, while extroverts were more associated with partying. Language on social media is a rich database for studying personality traits. Schwartz also used n-gram and topic-model features to construct a regression model over the continually updated status messages of 75,000 Facebook users, predicting users' mental states and how they change over time, including across the seasons. With the rapid development of the Internet, a large amount of data has accumulated on social media that can be used to predict personality, and this approach outperforms previous studies in the field.
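One step of an open-vocabulary analysis, correlating each word's usage rate with a trait score across users, can be sketched as follows; the per-user frequencies and trait scores below are invented.

```python
# Illustrative open-vocabulary step (not Schwartz et al.'s full pipeline):
# correlate per-user frequency of one word with a personality trait score.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented data: relative frequency of "party" vs. extraversion score.
party_freq   = [0.00, 0.01, 0.03, 0.05, 0.08]
extraversion = [2.0,  2.5,  3.5,  4.0,  4.8]

r = pearson(party_freq, extraversion)
assert r > 0.9  # strongly positive under these invented numbers
```

The real study runs this kind of correlation for every word and topic against every trait, with corrections for multiple comparisons, which is what makes the approach "open-vocabulary" rather than lexicon-bound.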
Language Analysis II
Language from Police Body Camera Footage Shows Racial Disparities in Officer Respect
The first article, “Language from Police Body Camera Footage Shows Racial Disparities in Officer Respect”, analyzes the language of U.S. police officers in video taken from body-worn cameras. Police language during routine traffic stops was analyzed for the level of respect shown to White and Black community members. The authors found that officers generally do not treat Black drivers with the same respect as White drivers: even after controlling for the race of the officer, the severity of the offense, the location of the stop, and the outcome of the stop, officers' respect toward Black community members was consistently lower. I would argue that data collected in natural settings inevitably contain bias (sexism, racial discrimination, etc.), and training deep neural networks on such data will lead to biased model predictions. It is therefore all the more important to use NLP techniques to study vital questions in the social sciences, while at the same time facing the ethical dilemmas and social challenges of the field. In this regard, this paper gives us the pioneering ideas we need to weigh accuracy against bias.
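A toy version of scoring utterances for respect markers might look like the sketch below; the marker list and weights are invented for illustration, whereas the actual study fit a regression model over many annotated linguistic features.

```python
# Toy respect-marker scoring (illustrative only; the paper used a trained
# model over annotated features, not a hand-built keyword list).
RESPECT_MARKERS = {"sir": 1.0, "ma'am": 1.0, "thanks": 0.5,
                   "please": 0.5, "sorry": 0.5}

def respect_score(utterance):
    """Sum invented weights for politeness markers in one utterance."""
    words = [w.strip(".,!?").lower() for w in utterance.split()]
    return sum(RESPECT_MARKERS.get(w, 0.0) for w in words)

assert respect_score("Sorry to stop you, sir. License, please.") == 2.0
assert respect_score("Hands on the wheel.") == 0.0
```

Scoring every utterance this way, then regressing the scores on driver race while controlling for the stop-level covariates the paper names, is the shape of the analysis, though the paper's feature set is far richer.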
You Can’t Stay Here: The Efficacy of Reddit’s 2015 Ban Examined Through Hate Speech
The second article, “You Can’t Stay Here: The Efficacy of Reddit’s 2015 Ban Examined Through Hate Speech”, explores whether banning hate speech on a website merely pushes people to other websites. The findings provide some evidence that banning strategies can help reduce such behavior on a large platform like Reddit: the same users posted 80-90 percent less hate speech after the ban, and a significantly higher percentage of users from the banned communities left Reddit than in the comparison group. However, the hate speech is still present elsewhere and can get worse. The paper emphasizes that banning speech does not make the entire Internet safer or less hateful; it only drives these users to other platforms, which may be even harder to hold accountable.
Online Content Moderation I
Crossmod: A Cross-Community Learning-based System to Assist Reddit Moderators
In the paper “Crossmod: A Cross-Community Learning-based System to Assist Reddit Moderators”, the authors propose an AI-based moderation system for Reddit. The system leverages many previous moderator decisions through an ensemble of classifiers, and Crossmod uses a hybrid approach that allows subreddit moderators to augment the automated predictions from cross-community learning with manual decision-making and supervision. The authors also conducted a formative interview study with 11 moderators across ten subreddits; the interviews found that moderators need tools that can adapt and learn. Crossmod's machine learning back end leverages cross-community learning, and the authors wrapped it in a sociotechnical architecture that fits into existing workflows and practices. Based on feedback from moderators, the system achieved excellent practical results, with an overall accuracy of 86% in detecting comments that moderators would delete. The authors note, however, that in some cases moderators reported that comments should be deleted but the current sociotechnical review architecture failed to help them do so. Such a system fills a missing tool for Reddit moderators, using machine learning for systematic comment review.
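Cross-community ensemble scoring can be sketched with stand-in classifiers; the keyword rules below are hypothetical placeholders for models trained on each subreddit's past moderator decisions, not Crossmod's actual back end.

```python
# Sketch of cross-community ensemble moderation: each "classifier" stands
# in for a model trained on one subreddit's moderation history, and a
# comment is flagged when enough of them agree. Keyword rules are toys.

def make_classifier(banned_words):
    """Stand-in for a per-subreddit model: flag if any keyword appears."""
    return lambda comment: any(w in comment.lower() for w in banned_words)

CLASSIFIERS = [
    make_classifier({"idiot", "moron"}),   # "trained" on subreddit A
    make_classifier({"idiot", "trash"}),   # subreddit B
    make_classifier({"spam", "idiot"}),    # subreddit C
]

def agreement_score(comment):
    """Fraction of cross-community classifiers that would remove it."""
    return sum(c(comment) for c in CLASSIFIERS) / len(CLASSIFIERS)

def flag_for_review(comment, threshold=0.66):
    """Queue the comment for human moderators above the threshold."""
    return agreement_score(comment) >= threshold

assert flag_for_review("what an idiot")           # all three agree
assert not flag_for_review("thanks, good point")  # none agree
```

The threshold is where the "hybrid" part lives: moderators can tune how much cross-community agreement it takes before a comment reaches their queue.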
Squadbox: A Tool to Combat Email Harassment Using Friendsourced Moderation
In the second paper, “Squadbox: A Tool to Combat Email Harassment Using Friendsourced Moderation”, the authors propose and develop a new tool, Squadbox, which uses “friendsourced” moderation: friends of the recipient filter incoming messages to support people who are harassed online. For example, a blogger who wants a public email address but wants to avoid receiving hate mail from strangers could create a Squadbox account and enlist two colleagues as moderators. The tool can create allowlists of pre-approved email senders, and Squadbox also rates the toxicity level of each message to help moderators review emails. Squadbox only handles email for now, but I believe it will eventually expand to other social media platforms, because it offers a new way of thinking about hybrid solutions.
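The routing logic can be sketched as a small pipeline; the addresses, allowlist, and keyword-based toxicity scorer below are invented stand-ins (Squadbox itself relies on a real toxicity-scoring service, not a word list).

```python
# Simplified friendsourced-filtering sketch: allowlisted senders are
# delivered directly; everything else is scored and queued for a
# friend-moderator. Addresses and keyword scorer are invented.
ALLOWLIST = {"colleague@example.com", "editor@example.com"}
TOXIC_WORDS = {"hate", "worthless"}

def toxicity(message):
    """Fraction of words matching the toy toxic-word list."""
    words = [w.strip(".,!") for w in message.lower().split()]
    return sum(w in TOXIC_WORDS for w in words) / max(len(words), 1)

def route(sender, message):
    """Deliver allowlisted mail; queue the rest with a toxicity score."""
    if sender in ALLOWLIST:
        return ("inbox", 0.0)
    return ("moderation_queue", toxicity(message))

assert route("colleague@example.com", "draft attached") == ("inbox", 0.0)
box, score = route("stranger@example.net", "you are worthless")
assert box == "moderation_queue" and score > 0
```

Attaching the score to queued mail is what lets moderators triage: high-scoring messages can be reviewed (or discarded) first.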
Online Content Moderation II
Synthesized Social Signals: Computationally-Derived Social Signals from Account Histories
In the first paper, “Synthesized Social Signals: Computationally-Derived Social Signals from Account Histories”, the authors ask how one can know, before communicating online, whether an account poses a risk; this is very difficult. Existing social signals come from fields in a user's profile: picture, bio, location, and other information. Filtering on such signals, the authors build a system called Sig, which retrieves many of an account's tweets and applies an algorithm with a threshold to judge whether the account is likely to be harmful. Eleven volunteers who use Twitter were recruited to try the Sig system and give feedback. The system can flag accounts and convey accurate information, providing users with enough information and advice to save effort when browsing social media platforms, and the volunteers' feedback confirms that the system makes them feel more comfortable reading on those platforms. I think this model could be made more widely available and offered to users in tiers.
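A synthesized signal of this kind can be sketched as a threshold over an account's recent tweets; the harm heuristic, word list, and threshold below are invented, not Sig's actual algorithm.

```python
# Illustrative synthesized-social-signal sketch: scan recent tweets and
# compare the fraction matching a simple harm heuristic to a threshold.
HARM_WORDS = {"hate", "attack", "scum"}

def harmful_fraction(tweets):
    """Fraction of tweets containing any toy harm keyword."""
    def is_harmful(t):
        return any(w in t.lower() for w in HARM_WORDS)
    return sum(map(is_harmful, tweets)) / max(len(tweets), 1)

def synthesized_signal(tweets, threshold=0.3):
    """Return a warning label when too many recent tweets look harmful."""
    frac = harmful_fraction(tweets)
    return ("warn" if frac >= threshold else "ok", frac)

history = ["nice weather", "I hate this group", "they are scum", "lunch pics"]
label, frac = synthesized_signal(history)
assert label == "warn" and frac == 0.5
```

The key idea the sketch preserves is that the signal is computed from behavior (account history) rather than from self-authored profile fields, which are easy to fake.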
Algorithmically Bypassing Censorship on Sina Weibo with Nondeterministic Homophone Substitutions
In the second paper, “Algorithmically Bypassing Censorship on Sina Weibo with Nondeterministic Homophone Substitutions”, the authors conduct a study on the automatic generation of variant words. The study focuses on how users circumvent censorship by using homophone (and near-homophone) variants of censored keywords, and it tries to generate many new variants using nondeterministic algorithms. Two comparative experiments suggest that if censors were to deliberately target homophone transformations in posts, doing so would consume considerable resources and algorithm runtime. In my opinion, deliberately obfuscated posts can only be caught by manual review; machines and algorithms have no quick and effective solution for these situations. I would suggest relying on manual screening and reporting across Weibo's large user base, because in the experiments native Chinese readers could quickly recover the content and meaning of posts written with homophone characters, with accuracy reaching 99%.
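A minimal nondeterministic substitution can be sketched as below; the two-entry homophone table is illustrative (for example, 河蟹, “river crab”, is a well-known homophone for 和谐, “harmony”), not the authors' generated lexicon.

```python
# Toy nondeterministic homophone substitution: replace each character of
# a sensitive keyword with a randomly chosen same-sounding variant. The
# tiny homophone table is illustrative, not the paper's lexicon.
import random

HOMOPHONES = {            # character -> same-sounding variants
    "河": ["和", "荷"],   # hé
    "蟹": ["谢", "卸"],   # xiè
}

def obfuscate(text, rng=random):
    """Swap every character that has homophone variants; keep the rest."""
    return "".join(rng.choice(HOMOPHONES[ch]) if ch in HOMOPHONES else ch
                   for ch in text)

variant = obfuscate("河蟹")
assert variant != "河蟹"                 # the keyword itself never survives
assert all(ch in "和荷谢卸" for ch in variant)
```

Because the choice is random, repeated posts of the same keyword produce different surface forms, which is exactly what makes deterministic keyword blocklists ineffective against this tactic.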
Credibility and Misinformation
Tweeting is Believing? Understanding Microblog Credibility Perceptions
In the first paper, “Tweeting is Believing? Understanding Microblog Credibility Perceptions”, the authors analyze perceptions of tweet credibility and examine how features of users and tweets affect credibility assessments. Some scholars have explored the credibility evaluation of tweets from a cognitive-psychological perspective, finding that college students pay considerable attention to the credibility of information when browsing tweets, and confirming the importance of perceived credibility for members' behavior in the tweeting community. The authors systematically evaluated, through experiments, the impact of several features of tweet posts on credibility ratings. They found that it is difficult for users to judge the authenticity of a post from its content alone; instead, users are influenced by heuristic factors such as the username. Tweet authors can therefore use specific strategies to improve the credibility of their posts in readers' eyes, for example increasing user credibility through means such as account verification.
The Spread of True and False News Online
In the second paper, “The Spread of True and False News Online”, the authors argue that lies spread faster, deeper, and farther than the truth. They cite the example that trustworthy news takes about six times as long as false news to reach 1,500 people on Twitter. Rumors in the political category spread more quickly than in all other categories, followed by urban legends, business, terrorism, science, entertainment, and natural disasters. Notably, users who spread trustworthy news have more followers, tweet more, and have been on the platform longer. The authors used bot-detection technology to remove retweets spread by bots, and the results remained roughly the same. Among the main reasons is that fake news is more story-like: such stories are more likely to trigger emotions such as fear and disgust in readers, so people are more willing to spread them. Also, fake political news spreads more deeply than, for example, fake news about terrorism or natural disasters.
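One of the paper's cascade measurements, depth, can be sketched as the longest chain from the original tweet to any retweet; the toy cascade below is invented.

```python
# Sketch of one cascade metric from the paper: the depth of a retweet
# tree, i.e. the longest chain from the original tweet to a retweet.

def cascade_depth(children, root):
    """Longest root-to-leaf path length in a retweet tree."""
    kids = children.get(root, [])
    if not kids:
        return 0
    return 1 + max(cascade_depth(children, k) for k in kids)

# Invented cascade: original -> A -> B -> C, and original -> D.
CASCADE = {"original": ["A", "D"], "A": ["B"], "B": ["C"]}
assert cascade_depth(CASCADE, "original") == 3
```

Depth is what distinguishes genuine person-to-person diffusion from a shallow broadcast: a celebrity tweet retweeted a million times directly still has depth 1, while false rumors in the study reached unusually large depths.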
Politics and Polarization
Echo Chambers Online?: Politically Motivated Selective Exposure Among Internet News Users
This paper first discusses the concept of echo chambers and the reasons for their formation. It also performs a robustness check using online tracking technology; a reasonable conjecture is made, and its feasibility confirmed. The authors conclude by comparing the more distinctive traits of conservatives and liberals, statistically analyzing data from recruited volunteers on their reading interests, reading hours, and reading feelings. When choosing media, individuals are likely to access information consistent with their beliefs more frequently; at the same time, however, they do not actively avoid information that contradicts their views. In general, the study focuses on the impact of polarization in social media in the United States and extends that line of research.
The Political Blogosphere and the 2004 U.S. Election: Divided They Blog
This discussion centered on representativeness and what exactly polarization means. We briefly discussed what information can be drawn from a non-representative sample and how it might be made more representative. We also discussed whether between-community links were trivial and seemed to conclude that they were not. Finally, we considered some evidence against the “filter bubble” hypothesis, in contrast to the paper’s claims.
Big Data
Data ex Machina: Introduction to Big Data
In the first paper, “Data ex Machina: Introduction to Big Data”, the authors discuss the development, diffusion, and application of a new generation of digital technologies based on big data, artificial intelligence, and the Internet of Things. These technologies expose more of human social life to the digital environment and use digital systems to mediate social interactions, producing system-level data; people may come to think that the data represent everyone. The era of computational social science is coming. Network development and accumulated research have led to the widespread use of machine learning, which allows computers to work with such data. A new discipline is forming that combines social science, computer science, mathematical modeling, and statistics, and it invents new tools for dealing with complex data; this matters for using big data to accomplish knowledge discovery, theory exploration, and validation. The creation of big data and the development of computing technology have brought significant attention to social computing, which collects, utilizes, and analyzes data with unprecedented breadth, depth, and scale. This has resulted in a new research paradigm that is both theory- and data-driven.
The Parable of Google Flu: Traps in Big Data Analysis
In the second paper, “The Parable of Google Flu: Traps in Big Data Analysis”, the authors cite several details of GFT’s data processing to support their argument. Before this article, the most common explanation for the bias in GFT predictions was that media coverage caused more flu-related searches by people who were not themselves sick, leading to higher estimates of flu-like cases that year. The authors investigate the inaccuracy of Google’s epidemic predictions in more depth and discuss the nature of the pitfalls of big data. Big data analysis is complex, and because of how big data are collected, it is difficult to guarantee the same meticulousness as with traditional data; inaccuracies are inevitable. I think the inaccuracy of Google Flu Trends is used as an example to point out the root problems of big data. For example, policies and regulations related to data security management are imperfect, it is challenging to balance data openness and privacy, and the utilization of big data faces ethical challenges.
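The “big data hubris” pitfall behind GFT can be illustrated with a toy experiment of my own (not from the paper): when a model is allowed to pick, from thousands of candidate search-term series, whichever best matches past flu activity, even pure noise can look predictive in-sample and then fail out of sample. A minimal sketch with entirely synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_weeks, n_terms = 104, 5000

# Hypothetical target: two years of a seasonal flu signal plus noise.
t = np.arange(n_weeks)
flu = np.sin(2 * np.pi * t / 52) + 0.3 * rng.standard_normal(n_weeks)

# Candidate "search term" series that are pure noise by construction.
terms = rng.standard_normal((n_terms, n_weeks))

train, test = slice(0, 52), slice(52, 104)

# Select the term best correlated with flu on the training year only.
corr_train = np.array([np.corrcoef(s[train], flu[train])[0, 1] for s in terms])
best = int(np.argmax(np.abs(corr_train)))

r_in = corr_train[best]
r_out = np.corrcoef(terms[best][test], flu[test])[0, 1]
print(f"in-sample |r| = {abs(r_in):.2f}, out-of-sample |r| = {abs(r_out):.2f}")
```

The selected term looks strongly predictive on the training year purely by chance, then loses that correlation on the held-out year, which is the same failure mode the authors attribute to fitting 45 terms out of 50 million candidates.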
Predictions and Forecasting
Private Traits and Attributes are Predictable from Digital Records of Human Behavior
In the first paper, “Private Traits and Attributes are Predictable from Digital Records of Human Behavior”, the authors model Facebook users’ Like data to predict their personality traits. The dataset comes from myPersonality, a third-party personality-test application on Facebook that uses classical psychological scales to measure users’ personality traits, such as openness and emotional stability. The authors found that, from users’ Likes alone, an algorithmic model can accurately predict particular personality and demographic traits; for example, it can predict ethnicity and gender with over 90% accuracy. The models also capture some traits that are merely correlated, not causal: people who like curly fries tend to have relatively high IQs, while people who like Sephora may have relatively low IQs; people who like swimming, the Bible, and Pride and Prejudice report higher life satisfaction, while people who liked Science magazines and iPods were less satisfied with their lives. Based on this, I believe behavioral prediction is an essential milestone for AI. With the widespread use of personality tests, researchers have begun to explore more objective and ecologically valid personality assessment methods.
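The pipeline this paper rests on, reducing a sparse user-Like matrix to latent components and fitting a regression on them, can be sketched on synthetic data. Everything below is fabricated: the trait, the Like associations, and the component count are placeholders, not the study’s actual data or figures.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n_users, n_likes = 1000, 300

# Hypothetical binary attribute to predict (e.g., a demographic trait).
trait = rng.integers(0, 2, n_users)

# Synthetic sparse user-Like matrix: the first 20 Likes are weakly
# associated with the trait, the remaining columns are noise.
likes = (rng.random((n_users, n_likes)) < 0.05).astype(float)
likes[:, :20] = (rng.random((n_users, 20)) < (0.05 + 0.15 * trait[:, None])).astype(float)

X_tr, X_te, y_tr, y_te = train_test_split(likes, trait, random_state=0)

# Reduce the Like matrix to latent components, then fit a
# logistic regression on those components.
svd = TruncatedSVD(n_components=50, random_state=0)
clf = LogisticRegression(max_iter=1000)
clf.fit(svd.fit_transform(X_tr), y_tr)
acc = clf.score(svd.transform(X_te), y_te)
print(f"held-out accuracy: {acc:.2f}")
```

Even with each individual Like only weakly informative, the aggregated signal supports well-above-chance prediction, which is the core reason seemingly innocuous Likes leak private attributes.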
Exploring Limits to Prediction in Complex Social Systems
In the second paper, “Exploring Limits to Prediction in Complex Social Systems”, the authors argue that, for a considerable period, the enormous complexity of social systems and limited information, such as the lack of data and models, made social-science research on prediction unattainable. From the perspective of disciplinary history, prediction of social phenomena and processes has long been missing from social-science research, and no standard norm or common methodology has yet been developed. The combination of big data and artificial intelligence has made monitoring and predicting human group behavior a reality. In this technological context, there is greater concern that human privacy is no longer protected; if this technology is exploited by power, more loss of personal rights will follow. I think predictions in social media are often heavily biased, which can prevent systems from achieving even limited predictive accuracy at a technical level.
Ethics and Privacy
Experimental Evidence of Massive-scale Emotional Contagion through Social Networks
In the first article, “Experimental Evidence of Massive-scale Emotional Contagion through Social Networks”, the authors reveal a covert emotion experiment conducted by Facebook. By adjusting the content users saw each day, the researchers observed whether information on social networks affects emotions, and they claim the experiment complied with Facebook’s data-use regulations. Facebook’s feed is curated by an algorithm that tries to answer questions like: What would make you happy? Which information about your friends and neighbors do you most want to see? What type of information would interest you most? Ultimately, the study concluded that emotions can spread through interpersonal networks on the Internet: users cut off from the stream of positive information became increasingly negative, while those cut off from the stream of negative information became increasingly positive. The study of interpersonal relationships in online virtual spaces has been a topic of great interest to social scientists in recent years.
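The study’s conclusion, that filtering one emotional polarity out of feeds shifts users’ own expression the other way, can be illustrated with a toy simulation. This is my own sketch, not the authors’ experimental design (they ran a randomized experiment on real feeds, not a simulation); the graph, update rule, and parameters are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
# Each user starts with a sentiment score in [-1, 1].
sentiment = rng.uniform(-1.0, 1.0, n)

# Random symmetric friendship graph.
adj = rng.random((n, n)) < 0.02
adj = adj | adj.T
np.fill_diagonal(adj, False)

def step(s, filter_positive=False):
    """Each user drifts toward the mean sentiment of the posts they see."""
    new = s.copy()
    for i in range(n):
        friends = np.where(adj[i])[0]
        if friends.size == 0:
            continue
        seen = s[friends]
        if filter_positive:           # hide positive posts from the feed
            seen = seen[seen <= 0]
        if seen.size:
            new[i] = 0.9 * s[i] + 0.1 * seen.mean()
    return new

s_control, s_filtered = sentiment.copy(), sentiment.copy()
for _ in range(20):
    s_control = step(s_control)
    s_filtered = step(s_filtered, filter_positive=True)

print(f"control mean: {s_control.mean():+.2f}, filtered mean: {s_filtered.mean():+.2f}")
```

Under this simple contagion rule, the population whose feeds hide positive posts drifts toward more negative expression than the unfiltered control, mirroring the direction of the reported effect.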
Data, Privacy, and the Greater Good
In the second paper, “Data, Privacy, and the Greater Good”, the authors describe using machine learning to infer health conditions and risks from non-medical data drawn from informational and social contexts. For example, posts on Twitter and Facebook can be analyzed to determine whether new mothers are at risk for postpartum depression. Although this involves large-scale aggregated analysis of anonymized data, whether such interventions involving private information align with social norms remains a question. The text discusses in detail the risks to personal privacy posed by data sharing: sensitive knowledge can be inferred from benign data that is shared routinely and promiscuously, which poses difficulties for the current legal approach to privacy protection in the United States. For this reason, the authors argue that an informed discussion between policymakers and the public about data and machine-learning capabilities will lead to insightful designs for procedures and policies, designs that can balance the goals of protecting privacy and ensuring fairness with the benefits to scientific research, individuals, and public health.
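As a rough illustration of the kind of inference described here, and emphatically not the authors’ actual model, a simple text classifier can assign a risk score to posts. Every post, label, and score below is fabricated for the sketch:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy labeled posts (entirely made up for illustration).
posts = [
    "feeling exhausted and alone lately",
    "can't sleep, everything feels overwhelming",
    "no energy, crying all the time",
    "so hopeless and tired of everything",
    "lovely walk in the park with the baby",
    "great dinner with friends tonight",
    "baby smiled for the first time, so happy",
    "wonderful sunny day, feeling great",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = at-risk language, 0 = not

# Standard bag-of-words pipeline: TF-IDF features + logistic regression.
vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(posts), labels)

# Score a new (hypothetical) post.
prob = clf.predict_proba(vec.transform(["tired and hopeless today"]))[0, 1]
print(f"estimated risk score: {prob:.2f}")
```

That such a score can be computed from public, apparently benign posts is precisely the double edge the authors examine: the same inference that could enable early intervention also exposes sensitive health information the poster never disclosed.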