I am aware that I have a lot of work to do on the website - I have simply decided on the name that I, the interviewer, will use, and on the title of my media product. I have lots of ideas in my head that I will utilise, and I have been attempting to create my own logos.
https://cassimac02.wixsite.com/websi
A blog depicting my work for my A level media studies NEA, 2020-2021
Thursday, July 9, 2020
Thursday, July 2, 2020
Own general research: A failure to regulate?
A Failure to Regulate? The Demands and
Dilemmas of Tackling Illegal Content and
Behavior on Social Media
'The proliferation and user uptake of social media applications has brought in its wake a growing problem of illegal and harmful interactions and content online. In the UK context, concern has focused in particular upon (a) sexually-oriented content about or directed to children, and (b) content that is racially or religiously hateful, incites violence, and promotes or celebrates terrorist violence. (I will take this into account and use it to focus my points and ideas) Legal innovation has sought to make specific provision for such online offences, and offenders have been subject to prosecution in some widely-publicized cases. Nevertheless, as a whole, the business of regulating (identifying, blocking, removing, and reporting) offending content has been left largely to social media providers themselves. This has been sustained by concerns both practical (the amount of public resource that would be required to police social media) and political (concerns about excessive state surveillance and curtailment of free speech in liberal democracies). However, growing evidence about providers’ unwillingness and/or inability to effectively stem the flow of illegal and harmful content has created a crisis for the existing self-regulatory model. Consequently, we now see a range of proposals that would take a much more coercive and punitive stance toward media platforms, so as to compel them into taking more concerted action. Taking the UK as a primary focus, these proposals are considered, with a view to charting possible future configurations for tackling illegal social media content.'
Own general research: Internet regulation
February, 2020
'The government is to outline new powers for the media regulator Ofcom to police social media.
It is supposed to make the companies protect users from content involving things like violence, terrorism, cyber-bullying and child abuse.
Companies will have to ensure that harmful content is removed quickly and take steps to prevent it appearing in the first place.
They had previously relied largely on self-governance. Sites such as YouTube and Facebook have their own rules about what is unacceptable and the way that users are expected to behave towards one another.'
This is interesting for me, as it sets guidelines for social media sites that I am not going to cover in much detail. This regulation applies more to streaming sites such as YouTube and less to the sites that are most popular within my demographic, such as Twitter and Instagram. This article therefore tells me that not much is being done about censorship on the particular sites I will focus on, which is an idea I can incorporate into my website/documentary.
Own general research: Illegal activities on social media
Illegal or restricted content
Like other media, content placed on the internet may be illegal, infringing or prohibited content under various state or Commonwealth laws. Certain online content may also be classified as 'prohibited', such as: child pornography, and instructions in crime, violence or drug use. - These are the main topics that I will be touching on in my documentary, as I personally find it incredible how easily these things can be accessed, especially on Twitter, Snapchat and Instagram, which are the top-used social media sites of my target audience.
Twitter, I believe, is the most problematic. I find it too easy to access content that should be restricted, as users have found loopholes and ways to get past the rules and regulations.
Twitter rules/guidelines/policies:
Safety
Violence: You may not threaten violence against an individual or a group of people. We also prohibit the glorification of violence. Learn more about our violent threat and glorification of violence policies.
Terrorism/violent extremism: You may not threaten or promote terrorism or violent extremism.
Child sexual exploitation: We have zero tolerance for child sexual exploitation on Twitter.
Abuse/harassment: You may not engage in the targeted harassment of someone, or incite other people to do so. This includes wishing or hoping that someone experiences physical harm.
Hateful conduct: You may not promote violence against, threaten, or harass other people on the basis of race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.
Suicide or self-harm: You may not promote or encourage suicide or self-harm.
Sensitive media, including graphic violence and adult content: You may not post media that is excessively gory or share violent or adult content within live video or in profile or header images. Media depicting sexual violence and/or assault is also not permitted.
Illegal or certain regulated goods or services: You may not use our service for any unlawful purpose or in furtherance of illegal activities. This includes selling, buying, or facilitating transactions in illegal goods or services, as well as certain types of regulated goods or services.
Privacy
Private information: You may not publish or post other people's private information (such as home phone number and address) without their express authorization and permission. We also prohibit threatening to expose private information or incentivizing others to do so.
Non-consensual nudity: You may not post or share intimate photos or videos of someone that were produced or distributed without their consent.
Using the information above, I have conducted a poll on my social media to see how many Twitter users have been subjected to viewing any of the activities listed above. I will share the results when I have put them all together.