Binary Options Academy - your source for binary options ...

An introduction to Linux through Windows Subsystem for Linux

I'm working as an Undergraduate Learning Assistant and wrote this guide to help out students who were in the same boat I was in when I first took my university's intro to computer science course. It provides an overview of how to get started using Linux, guides you through setting up Windows Subsystem for Linux to run smoothly on Windows 10, and provides a very basic introduction to Linux. Students seemed to dig it, so I figured it'd help some people in here as well. I've never posted here before, so apologies if I'm unknowingly violating subreddit rules.

An introduction to Linux through Windows Subsystem for Linux

GitHub Pages link

Introduction and motivation

tl;dr skip to next section
So you're thinking of installing a Linux distribution, and are unsure where to start. Or you're an unfortunate soul using Windows 10 in CPSC 201. Either way, this guide is for you. In this section I'll give a very basic intro to some of options you've got at your disposal, and explain why I chose Windows Subsystem for Linux among them. All of these have plenty of documentation online so Google if in doubt.

Setting up WSL

So if you've read this far I've convinced you to use WSL. Let's get started with setting it up. The very basics are outlined in Microsoft's guide here, I'll be covering what they talk about and diving into some other stuff.

1. Installing WSL

Press the Windows key (henceforth Winkey) and type in PowerShell. Right-click the icon and select run as administrator. Next, paste in this command:
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart 
Now you'll want to perform a hard shutdown on your computer. This can become unecessarily complicated because of Window's fast startup feature, but here we go. First try pressing the Winkey, clicking on the power icon, and selecting Shut Down while holding down the shift key. Let go of the shift key and the mouse, and let it shutdown. Great! Now open up Command Prompt and type in
wsl --help 
If you get a large text output, WSL has been successfully enabled on your machine. If nothing happens, your computer failed at performing a hard shutdown, in which case you can try the age-old technique of just holding down your computer's power button until the computer turns itself off. Make sure you don't have any unsaved documents open when you do this.

2. Installing Ubuntu

Great! Now that you've got WSL installed, let's download a Linux distro. Press the Winkey and type in Microsoft Store. Now use the store's search icon and type in Ubuntu. Ubuntu is a Debian-based Linux distribution, and seems to have the best integration with WSL, so that's what we'll be going for. If you want to be quirky, here are some other options. Once you type in Ubuntu three options should pop up: Ubuntu, Ubuntu 20.04 LTS, and Ubuntu 18.04 LTS.
![Windows Store](https://theshepord.github.io/intro-to-WSL/docs/images/winstore.png) Installing plain-old "Ubuntu" will mean the app updates whenever a new major Ubuntu distribution is released. The current version (as of 09/02/2020) is Ubuntu 20.04.1 LTS. The other two are older distributions of Ubuntu. For most use-cases, i.e. unless you're running some software that will break when upgrading, you'll want to pick the regular Ubuntu option. That's what I did.
Once that's done installing, again hit Winkey and open up Ubuntu. A console window should open up, asking you to wait a minute or two for files to de-compress and be stored on your PC. All future launches should take less than a second. It'll then prompt you to create a username and password. I'd recommend sticking to whatever your Windows username and password is so that you don't have to juggle around two different usepassword combinations, but up to you.
Finally, to upgrade all your packages, type in
sudo apt-get update 
And then
sudo apt-get upgrade 
apt-get is the Ubuntu package manager, this is what you'll be using to install additional programs on WSL.

3. Making things nice and crispy: an introduction to UNIX-based filesystems

tl;dr skip to the next section
The two above steps are technically all you need for running WSL on your system. However, you may notice that whenever you open up the Ubuntu app your current folder seems to be completely random. If you type in pwd (for Print Working Directory, 'directory' is synonymous with 'folder') inside Ubuntu and hit enter, you'll likely get some output akin to /home/. Where is this folder? Is it my home folder? Type in ls (for LiSt) to see what files are in this folder. Probably you won't get any output, because surprise surprise this folder is not your Windows home folder and is in fact empty (okay it's actually not empty, which we'll see in a bit. If you type in ls -a, a for All, you'll see other files but notice they have a period in front of them. This is a convention for specifying files that should be hidden by default, and ls, as well as most other commands, will honor this convention. Anyways).
So where is my Windows home folder? Is WSL completely separate from Windows? Nope! This is Windows Subsystem for Linux after all. Notice how, when you typed pwd earlier, the address you got was /home/. Notice that forward-slash right before home. That forward-slash indicates the root directory (not to be confused with the /root directory), which is the directory at the top of the directory hierarchy and contains all other directories in your system. So if we type ls /, you'll see what are the top-most directories in your system. Okay, great. They have a bunch of seemingly random names. Except, shocker, they aren't random. I've provided a quick run-down in Appendix A.
For now, though, we'll focus on /mnt, which stands for mount. This is where your C drive, which contains all your Windows stuff, is mounted. So if you type ls /mnt/c, you'll begin to notice some familiar folders. Type in ls /mnt/c/Users, and voilà, there's your Windows home folder. Remember this filepath, /mnt/c/Users/. When we open up Ubuntu, we don't want it tossing us in this random /home/ directory, we want our Windows home folder. Let's change that!

4. Changing your default home folder

Type in sudo vim /etc/passwd. You'll likely be prompted for your Ubuntu's password. sudo is a command that gives you root privileges in bash (akin to Windows's right-click then selecting 'Run as administrator'). vim is a command-line text-editing tool, which out-of-the-box functions kind of like a crummy Notepad (you can customize it infinitely though, and some people have insane vim setups. Appendix B has more info). /etc/passwd is a plaintext file that historically was used to store passwords back when encryption wasn't a big deal, but now instead stores essential user info used every time you open up WSL.
Anyway, once you've typed that in, your shell should look something like this: ![vim /etc/passwd](https://theshepord.github.io/intro-to-WSL/docs/images/vim-etc-passwd.png)
Using arrow-keys, find the entry that begins with your Ubuntu username. It should be towards the bottom of the file. In my case, the line looks like
theshep:x:1000:1000:,,,:/home/pizzatron3000:/bin/bash 
See that cringy, crummy /home/pizzatron3000? Not only do I regret that username to this day, it's also not where we want our home directory. Let's change that! Press i to initiate vim's -- INSERT -- mode. Use arrow-keys to navigate to that section, and delete /home/ by holding down backspace. Remember that filepath I asked you to remember? /mnt/c/Users/. Type that in. For me, the line now looks like
theshep:x:1000:1000:,,,:/mnt/c/Users/lucas:/bin/bash 
Next, press esc to exit insert mode, then type in the following:
:wq 
The : tells vim you're inputting a command, w means write, and q means quit. If you've screwed up any of the above sections, you can also type in :q! to exit vim without saving the file. Just remember to exit insert mode by pressing esc before inputting commands, else you'll instead be writing to the file.
Great! If you now open up a new terminal and type in pwd, you should be in your Window's home folder! However, things seem to be lacking their usual color...

5. Importing your configuration files into the new home directory

Your home folder contains all your Ubuntu and bash configuration files. However, since we just changed the home folder to your Window's home folder, we've lost these configuration files. Let's bring them back! These configuration files are hidden inside /home/, and they all start with a . in front of the filename. So let's copy them over into your new home directory! Type in the following:
cp -r /home//. ~ 
cp stands for CoPy, -r stands for recursive (i.e. descend into directories), the . at the end is cp-specific syntax that lets it copy anything, including hidden files, and the ~ is a quick way of writing your home directory's filepath (which would be /mnt/c/Users/) without having to type all that in again. Once you've run this, all your configuration files should now be present in your new home directory. Configuration files like .bashrc, .profile, and .bash_profile essentially provide commands that are run whenever you open a new shell. So now, if you open a new shell, everything should be working normally. Amazing. We're done!

6. Tips & tricks

Here are two handy commands you can add to your .profile file. Run vim ~/.profile, then, type these in at the top of the .profile file, one per line, using the commands we discussed previously (i to enter insert mode, esc to exit insert mode, :wq to save and quit).
alias rm='rm -i' makes it so that the rm command will always ask for confirmation when you're deleting a file. rm, for ReMove, is like a Windows delete except literally permanent and you will lose that data for good, so it's nice to have this extra safeguard. You can type rm -f to bypass. Linux can be super powerful, but with great power comes great responsibility. NEVER NEVER NEVER type in rm -rf /, this is saying 'delete literally everything and don't ask for confirmation', your computer will die. Newer versions of rm fail when you type this in, but don't push your luck. You've been warned. Be careful.
export DISPLAY=:0 if you install XLaunch VcXsrv, this line allows you to open graphical interfaces through Ubuntu. The export sets the environment variable DISPLAY, and the :0 tells Ubuntu that it should use the localhost display.

Appendix A: brief intro to top-level UNIX directories

tl;dr only mess with /mnt, /home, and maybe maybe /usr. Don't touch anything else.
  • bin: binaries, contains Ubuntu binary (aka executable) files that are used in bash. Here you'll find the binaries that execute commands like ls and pwd. Similar to /usbin, but bin gets loaded earlier in the booting process so it contains the most important commands.
  • boot: contains information for operating system booting. Empty in WSL, because WSL isn't an operating system.
  • dev: devices, provides files that allow Ubuntu to communicate with I/O devices. One useful file here is /dev/null, which is basically an information black hole that automatically deletes any data you pass it.
  • etc: no idea why it's called etc, but it contains system-wide configuration files
  • home: equivalent to Window's C:/Users folder, contains home folders for the different users. In an Ubuntu system, under /home/ you'd find the Documents folder, Downloads folder, etc.
  • lib: libraries used by the system
  • lib64 64-bit libraries used by the system
  • mnt: mount, where your drives are located
  • opt: third-party applications that (usually) don't have any dependencies outside the scope of their own package
  • proc: process information, contains runtime information about your system (e.g. memory, mounted devices, hardware configurations, etc)
  • run: directory for programs to store runtime information.
  • srv: server folder, holds data to be served in protocols like ftp, www, cvs, and others
  • sys: system, provides information about different I/O devices to the Linux Kernel. If dev files allows you to access I/O devices, sys files tells you information about these devices.
  • tmp: temporary, these are system runtime files that are (in most Linux distros) cleared out after every reboot. It's also sort of deprecated for security reasons, and programs will generally prefer to use run.
  • usr: contains additional UNIX commands, header files for compiling C programs, among other things. Kind of like bin but for less important programs. Most of everything you install using apt-get ends up here.
  • var: variable, contains variable data such as logs, databases, e-mail etc, but that persist across different boots.
Also keep in mind that all of this is just convention. No Linux distribution needs to follow this file structure, and in fact almost all will deviate from what I just described. Hell, you could make your own Linux fork where /mnt/c information is stored in tmp.

Appendix B: random resources

EDIT: implemented various changes suggested in the comments. Thanks all!
submitted by HeavenBuilder to linux4noobs [link] [comments]

Red Hat OpenShift Container Platform Instruction Manual for Windows Powershell

Introduction to the manual
This manual is made to guide you step by step in setting up an OpenShift cloud environment on your own device. It will tell you what needs to be done, when it needs to be done, what you will be doing and why you will be doing it, all in one convenient manual that is made for Windows users. Although if you'd want to try it on Linux or MacOS we did add the commands necesary to get the CodeReady Containers to run on your operating system. Be warned however there are some system requirements that are necessary to run the CodeReady Containers that we will be using. These requirements are specified within chapter Minimum system requirements.
This manual is written for everyone with an interest in the Red Hat OpenShift Container Platform and has at least a basic understanding of the command line within PowerShell on Windows. Even though it is possible to use most of the manual for Linux or MacOS we will focus on how to do this within Windows.
If you follow this manual you will be able to do the following items by yourself:
● Installing the CodeReady Containers
● Updating OpenShift
● Configuring a CodeReady Container
● Configuring the DNS
● Accessing the OpenShift cluster
● Deploying the Mediawiki application
What is the OpenShift Container platform?
Red Hat OpenShift is a cloud development Platform as a Service (PaaS). It enables developers to develop and deploy their applications on a cloud infrastructure. It is based on the Kubernetes platform and is widely used by developers and IT operations worldwide. The OpenShift Container platform makes use of CodeReady Containers. CodeReady Containers are pre-configured containers that can be used for developing and testing purposes. There are also CodeReady Workspaces, these workspaces are used to provide any member of the development or IT team with a consistent, secure, and zero-configuration development environment.
The OpenShift Container Platform is widely used because it helps the programmers and developers make their application faster because of CodeReady Containers and CodeReady Workspaces and it also allows them to test their application in the same environment. One of the advantages provided by OpenShift is the efficient container orchestration. This allows for faster container provisioning, deploying and management. It does this by streamlining and automating the automation process.
What knowledge is required or recommended to proceed with the installation?
To be able to follow this manual some knowledge is mandatory, because most of the commands are done within the Command Line interface it is necessary to know how it works and how you can browse through files/folders. If you either don’t have this basic knowledge or have trouble with the basic Command Line Interface commands from PowerShell, then a cheat sheet might offer some help. We recommend the following cheat sheet for windows:
Https://www.sans.org/security-resources/sec560/windows\_command\_line\_sheet\_v1.pdf
Another option is to read through the operating system’s documentation or introduction guides. Though the documentation can be overwhelming by the sheer amount of commands.
Microsoft: https://docs.microsoft.com/en-us/windows-serveadministration/windows-commands/windows-commands
MacOS
Https://www.makeuseof.com/tag/mac-terminal-commands-cheat-sheet/
Linux
https://ubuntu.com/tutorials/command-line-for-beginners#2-a-brief-history-lesson https://www.guru99.com/linux-commands-cheat-sheet.html
http://cc.iiti.ac.in/docs/linuxcommands.pdf
Aside from the required knowledge there are also some things that can be helpful to know just to make the use of OpenShift a bit simpler. This consists of some general knowledge on PaaS like Dockers and Kubernetes.
Docker https://www.docker.com/
Kubernetes https://kubernetes.io/

System requirements

Minimum System requirements

The minimum system requirements for the Red Hat OpenShift CodeReady Containers has the following minimum hardware:
Hardware requirements
Code Ready Containers requires the following system resources:
● 4 virtual CPU’s
● 9 GB of free random-access memory
● 35 GB of storage space
● Physical CPU with Hyper-V (intel) or SVM mode (AMD) this has to be enabled in the bios
Software requirements
The minimum system requirements for the Red Hat OpenShift CodeReady Containers has the following minimum operating system requirements:
Microsoft Windows
On Microsoft Windows, the Red Hat OpenShift CodeReady Containers requires the Windows 10 Pro Fall Creators Update (version 1709) or newer. CodeReady Containers does not work on earlier versions or other editions of Microsoft Windows. Microsoft Windows 10 Home Edition is not supported.
macOS
On macOS, the Red Hat OpenShift CodeReady Containers requires macOS 10.12 Sierra or newer.
Linux
On Linux, the Red Hat OpenShift CodeReady Containers is only supported on Red Hat Enterprise Linux/CentOS 7.5 or newer and on the latest two stable Fedora releases.
When using Red Hat Enterprise Linux, the machine running CodeReady Containers must be registered with the Red Hat Customer Portal.
Ubuntu 18.04 LTS or newer and Debian 10 or newer are not officially supported and may require manual set up of the host machine.

Required additional software packages for Linux

The CodeReady Containers on Linux require the libvirt and Network Manager packages to run. Consult the following table to find the command used to install these packages for your Linux distribution:
Table 1.1 Package installation commands by distribution
Linux Distribution Installation command
Fedora Sudo dnf install NetworkManager
Red Hat Enterprise Linux/CentOS Su -c 'yum install NetworkManager'
Debian/Ubuntu Sudo apt install qemu-kvm libvirt-daemonlibvirt-daemon-system network-manage

Installation

Getting started with the installation

To install CodeReady Containers a few steps must be undertaken. Because an OpenShift account is necessary to use the application this will be the first step. An account can be made on “https://www.openshift.com/”, where you need to press login and after that select the option “Create one now”
After making an account the next step is to download the latest release of CodeReady Containers and the pulled secret on “https://cloud.redhat.com/openshift/install/crc/installer-provisioned”. Make sure to download the version corresponding to your platform and/or operating system. After downloading the right version, the contents have to be extracted from the archive to a location in your $PATH. The pulled secret should be saved because it is needed later.
The command line interface has to be opened before we can continue with the installation. For windows we will use PowerShell. All the commands we use during the installation procedure of this guide are going to be done in this command line interface unless stated otherwise. To be able to run the commands within the command line interface, use the command line interface to go to the location in your $PATH where you extracted the CodeReady zip.
If you have installed an outdated version and you wish to update, then you can delete the existing CodeReady Containers virtual machine with the $crc delete command. After deleting the container, you must replace the old crc binary with a newly downloaded binary of the latest release.
C:\Users\[username]\$PATH>crc delete 
When you have done the previous steps please confirm that the correct and up to date crc binary is in use by checking it with the $crc version command, this should provide you with the version that is currently installed.
C:\Users\[username]\$PATH>crc version 
To set up the host operating system for the CodeReady Containers virtual machine you have to run the $crc setup command. After running crc setup, crc start will create a minimal OpenShift 4 cluster in the folder where the executable is located.
C:\Users\[username]>crc setup 

Setting up CodeReady Containers

Now we need to set up the new CodeReady Containers release with the $crc setup command. This command will perform the operations necessary to run the CodeReady Containers and create the ~/.crc directory if it did not previously exist. In the process you have to supply your pulled secret, once this process is completed you have to reboot your system. When the system has restarted you can start the new CodeReady Containers virtual machine with the $crc start command. The $crc start command starts the CodeReady virtual machine and OpenShift cluster.
You cannot change the configuration of an existing CodeReady Containers virtual machine. So if you have a CodeReady Containers virtual machine and you want to make configuration changes you need to delete the virtual machine with the $crc delete command and create a new virtual machine and start that one with the configuration changes. Take note that deleting the virtual machine will also delete the data stored in the CodeReady Containers. So, to prevent data loss we recommend you save the data you wish to keep. Also keep in mind that it is not necessary to change the default configuration to start OpenShift.
C:\Users\[username]\$PATH>crc setup 
Before starting the machine, you need to keep in mind that it is not possible to make any changes to the virtual machine. For this tutorial however it is not necessary to change the configuration, if you don’t want to make any changes please continue by starting the machine with the crc start command.
C:\Users\[username]\$PATH>crc start 
\ it is possible that you will get a Nameserver error later on, if this is the case please start it with* crc start -n 1.1.1.1

Configuration

It is not is not necessary to change the default configuration and continue with this tutorial, this chapter is here for those that wish to do so and know what they are doing. However, for MacOS and Linux it is necessary to change the dns settings.

Configuring the CodeReady Containers

To start the configuration of the CodeReady Containers use the command crc config. This command allows you to configure the crc binary and the CodeReady virtual machine. The command has some requirements before it’s able to configure. This requirement is a subcommand, the available subcommands for this binary and virtual machine are:
get, this command allows you to see the values of a configurable property
set/unset, this command can be used for 2 things. To display the names of, or to set and/or unset values of several options and parameters. These parameters being:
○ Shell options
○ Shell attributes
○ Positional parameters
view, this command starts the configuration in read-only mode.
These commands need to operate on named configurable properties. To list all the available properties, you can run the command $crc config --help.
Throughout this manual we will use the $crc config command a few times to change some properties needed for the configuration.
There is also the possibility to use the crc config command to configure the behavior of the checks that’s done by the $crc start end $crc setup commands. By default, the startup checks will stop with the process if their conditions are not met. To bypass this potential issue, you can set the value of a property that starts with skip-check or warn-check to true to skip the check or warning instead of ending up with an error.
C:\Users\[username]\$PATH>crc config get C:\Users\[username]\$PATH>crc config set C:\Users\[username]\$PATH>crc config unset C:\Users\[username]\$PATH>crc config view C:\Users\[username]\$PATH>crc config --help 

Configuring the Virtual Machine

You can use the CPUs and memory properties to configure the default number of vCPU’s and amount of memory available for the virtual machine.
To increase the number of vCPU’s available to the virtual machine use the $crc config set CPUs . Keep in mind that the default number for the CPU’s is 4 and the number of vCPU’s you wish to assign must be equal or greater than the default value.
To increase the memory available to the virtual machine, use the $crc config set memory . Keep in mind that the default number for the memory is 9216 Mebibytes and the amount of memory you wish to assign must be equal or greater than the default value.
C:\Users\[username]\$PATH>crc config set CPUs  C:\Users\[username]\$PATH>crc config set memory > 

Configuring the DNS

Window / General DNS setup

There are two domain names used by the OpenShift cluster that are managed by the CodeReady Containers, these are:
crc.testing, this is the domain for the core OpenShift services.
apps-crc.testing, this is the domain used for accessing OpenShift applications that are deployed on the cluster.
Configuring the DNS settings in Windows is done by executing the crc setup. This command automatically adjusts the DNS configuration on the system. When executing crc start additional checks to verify the configuration will be executed.

macOS DNS setup

MacOS expects the following DNS configuration for the CodeReady Containers
● The CodeReady Containers creates a file that instructs the macOS to forward all DNS requests for the testing domain to the CodeReady Containers virtual machine. This file is created at /etc/resolvetesting.
● The oc binary requires the following CodeReady Containers entry to function properly, api.crc.testing adds an entry to /etc/hosts pointing at the VM IPaddress.

Linux DNS setup

CodeReady containers expect a slightly different DNS configuration. CodeReady Container expects the NetworkManager to manage networking. On Linux the NetworkManager uses dnsmasq through a configuration file, namely /etc/NetworkManageconf.d/crc-nm-dnsmasq.conf.
To set it up properly the dnsmasq instance has to forward the requests for crc.testing and apps-crc.testing domains to “192.168.130.11”. In the /etc/NetworkManageconf.d/crc-nm-dnsmasq.conf this will look like the following:
● Server=/crc. Testing/192.168.130.11
● Server=/apps-crc. Testing/192.168.130.11

Accessing the Openshift Cluster

Accessing the Openshift web console

To gain access to the OpenShift cluster running in the CodeReady virtual machine you need to make sure that the virtual machine is running before continuing with this chapter. The OpenShift clusters can be accessed through the OpenShift web console or the client binary(oc).
First you need to execute the $crc console command, this command will open your web browser and direct a tab to the web console. After that, you need to select the htpasswd_provider option in the OpenShift web console and log in as a developer user with the output provided by the crc start command.
It is also possible to view the password for kubeadmin and developer users by running the $crc console --credentials command. While you can access the cluster through the kubeadmin and developer users, it should be noted that the kubeadmin user should only be used for administrative tasks such as user management and the developer user for creating projects or OpenShift applications and the deployment of these applications.
C:\Users\[username]\$PATH>crc console C:\Users\[username]\$PATH>crc console --credentials 

Accessing the OpenShift cluster with oc

To gain access to the OpenShift cluster with the use of the oc command you need to complete several steps.
Step 1.
Execute the $crc oc-env command to print the command needed to add the cached oc binary to your PATH:
C:\Users\[username]\$PATH>crc oc-env 
Step 2.
Execute the printed command. The output will look something like the following:
PS C:\Users\OpenShift> crc oc-env $Env:PATH = "CC:\Users\OpenShift\.crc\bin\oc;$Env:PATH" # Run this command to configure your shell: # & crc oc-env | Invoke-Expression 
This means we have to execute* the command that the output gives us, in this case that is:
C:\Users\[username]\$PATH>crc oc-env | Invoke-Expression 
\this has to be executed every time you start; a solution is to move the oc binary to the same path as the crc binary*
To test if this step went correctly execute the following command, if it returns without errors oc is set up properly
C:\Users\[username]\$PATH>.\oc 
Step 3
Now you need to login as a developer user, this can be done using the following command:
$oc login -u developer https://api.crc.testing:6443
Keep in mind that the $crc start will provide you with the password that is needed to login with the developer user.
C:\Users\[username]\$PATH>oc login -u developer https://api.crc.testing:6443 
Step 4
The oc can now be used to interact with your OpenShift cluster. If you for instance want to verify if the OpenShift cluster Operators are available, you can execute the command
$oc get co 
Keep in mind that by default the CodeReady Containers disables the functions provided by the commands $machine-config and $monitoringOperators.
C:\Users\[username]\$PATH>oc get co 

Demonstration

Now that you are able to access the cluster, we will take you on a tour through some of the possibilities within OpenShift Container Platform.
We will start by creating a project. Within this project we will import an image, and with this image we are going to build an application. After building the application we will explain how upscaling and downscaling can be used within the created application.
As the next step we will show the user how to make changes in the network route. We also show how monitoring can be used within the platform, however within the current version of CodeReady Containers this has been disabled.
Lastly, we will show the user how to use user management within the platform.

Creating a project

To be able to create a project within the console you have to login on the cluster. If you have not yet done this, this can be done by running the command crc console in the command line and logging in with the login data from before.
When you are logged in as admin, switch to Developer. If you're logged in as a developer, you don't have to switch. Switching between users can be done with the dropdown menu top left.
Now that you are properly logged in press the dropdown menu shown in the image below, from there click on create a project.
https://preview.redd.it/ytax8qocitv51.png?width=658&format=png&auto=webp&s=72d143733f545cf8731a3cca7cafa58c6507ace2
When you press the correct button, the following image will pop up. Here you can give your project a name and description. We chose to name it CodeReady with a displayname CodeReady Container.
https://preview.redd.it/vtaxadwditv51.png?width=594&format=png&auto=webp&s=e3b004bab39fb3b732d96198ed55fdd99259f210

Importing image

The Containers in OpenShift Container Platform are based on OCI or Docker formatted images. An image is a binary that contains everything needed to run a container as well as the metadata of the requirements needed for the container.
Within the OpenShift Container Platform it’s possible to obtain images in a number of ways. There is an integrated Docker registry that offers the possibility to download new images “on the fly”. In addition, OpenShift Container Platform can use third party registries such as:
- Https://hub.docker.com/
- Https://catalog.redhat.com/software/containers/search
Within this manual we are going to import an image from the Red Hat container catalog. In this example we’ll be using MediaWiki.
Search for the application in https://catalog.redhat.com/software/containers/search

https://preview.redd.it/c4mrbs0fitv51.png?width=672&format=png&auto=webp&s=f708f0542b53a9abf779be2d91d89cf09e9d2895
Navigate to “Get this image”
Follow the steps to “create a registry service account”, after that you can copy the YAML.
https://preview.redd.it/b4rrklqfitv51.png?width=1323&format=png&auto=webp&s=7a2eb14a3a1ba273b166e03e1410f06fd9ee1968
After the YAML has been copied we will go to the topology view and click on the YAML button
https://preview.redd.it/k3qzu8dgitv51.png?width=869&format=png&auto=webp&s=b1fefec67703d0a905b00765f0047fe7c6c0735b
Then we have to paste in the YAML, put in the name, namespace and your pull secret name (which you created through your registry account) and click on create.
https://preview.redd.it/iz48kltgitv51.png?width=781&format=png&auto=webp&s=4effc12e07bd294f64a326928804d9a931e4d2bd
Run the import command within powershell
$oc import-image openshift4/mediawiki --from=registry.redhat.io/openshift4/mediawiki --confirm imagestream.image.openshift.io/mediawiki imported 

Creating and managing an application

There are a few ways to create and manage applications. Within this demonstration we’ll show how to create an application from the previously imported image.

Creating the application

To create an image with the previously imported image go back to the console and topology. From here on select container image.
https://preview.redd.it/6506ea4iitv51.png?width=869&format=png&auto=webp&s=c0231d70bb16c76cd131e6b71256e93550cc8b37
For the option image you'll want to select the “image stream tag from internal registry” option. Give the application a name and then create the deployment.
https://preview.redd.it/tk72idniitv51.png?width=813&format=png&auto=webp&s=a4e662cf7b96604d84df9d04ab9b90b5436c803c
If everything went right during the creating process you should see the following, this means that the application is successfully running.
https://preview.redd.it/ovv9l85jitv51.png?width=901&format=png&auto=webp&s=f78f350207add0b8a979b6da931ff29ffa30128c

Scaling the application

In OpenShift there is a feature called autoscaling. There are two types of application scaling, namely vertical scaling, and horizontal scaling. Vertical scaling is adding only more CPU and hard disk and is no longer supported by OpenShift. Horizontal scaling is increasing the number of machines.
One of the ways to scale an application is by increasing the number of pods. This can be done by going to a pod within the view as seen in the previous step. By either pressing the up or down arrow more pods of the same application can be added. This is similar to horizontal scaling and can result in better performance when there are a lot of active users at the same time.
https://preview.redd.it/s6i1vbcrltv51.png?width=602&format=png&auto=webp&s=e62cbeeed116ba8c55704d61a990fc0d8f3cfaa1
In the picture above we see the number of nodes and pods and how many resources those nodes and pods are using. This is something to keep in mind if you want to scale up your application, the more you scale it up, the more resources it will take up.

https://preview.redd.it/quh037wmitv51.png?width=194&format=png&auto=webp&s=5e326647b223f3918c259b1602afa1b5fbbeea94

Network

Since OpenShift Container platform is built on Kubernetes it might be interesting to know some theory about its networking. Kubernetes, on which the OpenShift Container platform is built, ensures that the Pods within OpenShift can communicate with each other via the network and assigns them their own IP address. This makes all containers within the Pod behave as if they were on the same host. By giving each pod its own IP address, pods can be treated as physical hosts or virtual machines in terms of port mapping, networking, naming, service discovery, load balancing, application configuration and migration. To run multiple services such as front-end and back-end services, OpenShift Container Platform has a built-in DNS.
One of the changes that can be made to the networking of a Pod is the Route. We’ll show you how this can be done in this demonstration.
The Route is not the only thing that can be changed and or configured. Two other options that might be interesting but will not be demonstrated in this manual are:
- Ingress controller, Within OpenShift it is possible to set your own certificate. A user must have a certificate / key pair in PEM-encoded files, with the certificate signed by a trusted authority.
- Network policies, by default all pods in a project are accessible from other pods and network locations. To isolate one or more pods in a project, it is possible to create Network Policy objects in that project to indicate the allowed incoming connections. Project administrators can create and delete Network Policy objects within their own project.
There is a search function within the Container Platform. We’ll use this to search for the network routes and show how to add a new route.
https://preview.redd.it/8jkyhk8pitv51.png?width=769&format=png&auto=webp&s=9a8762df5bbae3d8a7c92db96b8cb70605a3d6da
You can add items that you use a lot to the navigation
https://preview.redd.it/t32sownqitv51.png?width=1598&format=png&auto=webp&s=6aab6f17bc9f871c591173493722eeae585a9232
For this example, we will add Routes to navigation.
https://preview.redd.it/pm3j7ljritv51.png?width=291&format=png&auto=webp&s=bc6fbda061afdd0780bbc72555d809b84a130b5b
Now that we’ve added Routes to the navigation, we can start the creation of the Route by clicking on “Create route”.
https://preview.redd.it/5lgecq0titv51.png?width=1603&format=png&auto=webp&s=d548789daaa6a8c7312a419393795b52da0e9f75
Fill in the name, select the service and the target port from the drop-down menu and click on Create.
https://preview.redd.it/qczgjc2uitv51.png?width=778&format=png&auto=webp&s=563f73f0dc548e3b5b2319ca97339e8f7b06c9d6
As you can see, we’ve successfully added the new route to our application.
https://preview.redd.it/gxfanp2vitv51.png?width=1588&format=png&auto=webp&s=1aae813d7ad0025f91013d884fcf62c5e7d109f1
Storage
OpenShift makes use of Persistent Storage, this type of storage uses persistent volume claims(PVC). PVC’s allow the developer to make persistent volumes without needing any knowledge about the underlying infrastructure.
Within this storage there are a few configuration options:
It is however important to know how to manually reclaim the persistent volumes, since if you delete PV the associated data will not be automatically deleted with it and therefore you cannot reassign the storage to another PV yet.
To manually reclaim the PV, you need to follow the following steps:
Step 1: Delete the PV, this can be done by executing the following command
$oc delete  
Step 2: Now you need to clean up the data on the associated storage asset
Step 3: Now you can delete the associated storage asset or if you with to reuse the same storage asset you can now create a PV with the storage asset definition.
It is also possible to directly change the reclaim policy within OpenShift, to do this you would need to follow the following steps:
Step 1: Get a list of the PVs in your cluster
$oc get pv 
This will give you a list of all the PV’s in your cluster and will display their following attributes: Name, Capacity, Accesmodes, Reclaimpolicy, Statusclaim, Storageclass, Reason and Age.
Step 2: Now choose the PV you wish to change and execute one of the following command’s, depending on your preferred policy:
$oc patch pv  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' 
In this example the reclaim policy will be changed to Retain.
$oc patch pv  -p '{"spec":{"persistentVolumeReclaimPolicy":"Recycle"}}' 
In this example the reclaim policy will be changed to Recycle.
$oc patch pv  -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}' 
In this example the reclaim policy will be changed to Delete.

Step 3: After this you can check the PV to verify the change by executing this command again:
$oc get pv 

Monitoring

Within Red Hat OpenShift there is the possibility to monitor the data that has been created by your containers, applications, and pods. To do so, click on the menu option in the top left corner. Check if you are logged in as Developer and click on “Monitoring”. Normally this function is not activated within the CodeReady containers, because it uses a lot of resources (Ram and CPU) to run.
https://preview.redd.it/an0wvn6zitv51.png?width=228&format=png&auto=webp&s=51abf8cc31bd763deb457d49514f99ee81d610ec
Once you have activated “Monitoring” you can change the “Time Range” and “Refresh Interval” in the top right corner of your screen. This will change the monitoring data on your screen.
https://preview.redd.it/e0yvzsh1jtv51.png?width=493&format=png&auto=webp&s=b2c563635cfa60ea7ce2f9c146aa994df6aa1c34
Within this function you can also monitor “Events”. These events are records of important information and are useful for monitoring and troubleshooting within the OpenShift Container Platform.
https://preview.redd.it/l90vkmp3jtv51.png?width=602&format=png&auto=webp&s=4e97f14bedaec7ededcdcda96e7823f77ced24c2

User management

According to the documentation of OpenShift is a user, an entity that interacts with the OpenShift Container Platform API. These can be a developer for developing applications or an administrator for managing the cluster. Users can be assigned to groups, which set the permissions applied to all the group’s members. For example, you can give API access to a group, which gives all members of the group API access.
There are multiple ways to create a user depending on the configured identity provider. The DenyAll identity provider is the default within OpenShift Container Platform. This default denies access for all the usernames and passwords.
First, we’re going to create a new user, the way this is done depends on the identity provider, this depends on the mapping method used as part of the identity provider configuration.
for more information on what mapping methods are and how they function:
https://docs.openshift.com/enterprise/3.1/install_config/configuring_authentication.html
With the default mapping method, the steps will be as following
$oc create user  
Next up, we’ll create an OpenShift Container Platform Identity. Use the name of the identity provider and the name that uniquely represents this identity in the scope of the identity provider:
$oc create identity : 
The is the name of the identity provider in the master configuration. For example, the following commands create an Identity with identity provider ldap_provider and the identity provider username mediawiki_s.
$oc create identity ldap_provider:mediawiki_s 
Create a useidentity mapping for the created user and identity:
$oc create useridentitymapping :  
For example, the following command maps the identity to the user:
$oc create useridentitymapping ldap_provider:mediawiki_s mediawiki 
Now were going to assign a role to this new user, this can be done by executing the following command:
$oc create clusterrolebinding  \ --clusterrole= --user= 
There is a --clusterrole option that can be used to give the user a specific role, like a cluster user with admin privileges. The cluster admin has access to all files and is able to manage the access level of other users.
Below is an example of the admin clusterrole command:
$oc create clusterrolebinding registry-controller \ --clusterrole=cluster-admin --user=admin 

What did you achieve?

If you followed all the steps within this manual you now should have a functioning Mediawiki Application running on your own CodeReady Containers. During the installation of this application on CodeReady Containers you have learned how to do the following things:
● Installing the CodeReady Containers
● Updating OpenShift
● Configuring a CodeReady Container
● Configuring the DNS
● Accessing the OpenShift cluster
● Deploying an application
● Creating new users
With these skills you’ll be able to set up your own Container Platform environment and host applications of your choosing.

Troubleshooting

Nameserver
There is the possibility that your CodeReady container can't connect to the internet due to a Nameserver error. When this is encountered a working fix for us was to stop the machine and then start the CRC machine with the following command:
C:\Users\[username]\$PATH>crc start -n 1.1.1.1 
Hyper-V admin
Should you run into a problem with Hyper-V it might be because your user is not an admin and therefore can’t access the Hyper-V admin user group.
  1. Click Start > Control Panel > Administration Tools > Computer Management. The Computer Management window opens.
  2. Click System Tools > Local Users and Groups > Groups. The list of groups opens.
  3. Double-click the Hyper-V Administrators group. The Hyper-V Administrators Properties window opens.
  4. Click Add. The Select Users or Groups window opens.
  5. In the Enter the object names to select field, enter the user account name to whom you want to assign permissions, and then click OK.
  6. Click Apply, and then click OK.

Terms and definitions

These terms and definitions will be expanded upon, below you can see an example of how this is going to look like together with a few terms that will require definitions.
Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. Openshift is based on Kubernetes.
Clusters are a collection of multiple nodes which communicate with each other to perform a set of operations.
Containers are the basic units of OpenShift applications. These container technologies are lightweight mechanisms for isolating running processes so that they are limited to interacting with only their designated resources.
CodeReady Container is a minimal, preconfigured cluster that is used for development and testing purposes.
CodeReady Workspaces uses Kubernetes and containers to provide any member of the development or IT team with a consistent, secure, and zero-configuration development environment.

Sources

  1. https://www.ibm.com/support/knowledgecenteen/SSMKFH/com.ibm.apmaas.doc/install/hyperv_config_add_nonadmin_user_hyperv_usergroup.html
  2. https://access.redhat.com/documentation/en-us/openshift_container_platform/4.5/
  3. https://docs.openshift.com/container-platform/3.11/admin_guide/manage_users.html
submitted by Groep6HHS to openshift [link] [comments]

Need help setting up React Native on Linux

Hey, I am beginner.
I have Xubuntu installed and tried to follow this tutorial to attempt to set up my react native environment.
When I got to the end and ran
react-native init  
I got this
[email protected]:~$ react-native init Proejctt This will walk you through creating a new React Native project in /home/useProejctt Installing react-native... Consider installing yarn to make this faster: https://yarnpkg.com npm WARN deprecated @hapi/[email protected]: Switch to 'npm install joi' npm WARN deprecated @hapi/[email protected]: This version has been deprecated and is no longer supported or maintained npm WARN deprecated @hapi/[email protected]: Moved to 'npm install @sideway/address' npm WARN deprecated @hapi/[email protected]: This version has been deprecated and is no longer supported or maintained npm WARN deprecated @hapi/[email protected]: This version has been deprecated and is no longer supported or maintained npm WARN deprecated [email protected]: [email protected]<3 is no longer maintained and not recommended for usage due to the number of issues. Please, upgrade your dependencies to the actual version of [email protected] npm WARN deprecated [email protected]: fsevents 1 will break on node v14+ and could be using insecure binaries. Upgrade to fsevents 2. npm WARN deprecated [email protected]: https://github.com/lydell/resolve-url#deprecated npm WARN deprecated [email protected]: Please see https://github.com/lydell/urix#deprecated > [email protected] postinstall /home/useProejctt/node_modules/core-js > node -e "try{require('./postinstall')}catch(e){}" Thank you for using core-js ( https://github.com/zloirock/core-js ) for polyfilling JavaScript standard library! The project needs your help! Please consider supporting of core-js on Open Collective or Patreon: > https://opencollective.com/core-js > https://www.patreon.com/zloirock Also, the author of core-js ( https://github.com/zloirock ) is looking for a good job -) npm notice created a lockfile as package-lock.json. You should commit this file. npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected]^1.2.7 (node_modules/jest-haste-map/node_modules/fsevents): npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"}) npm WARN [email protected] requires a peer of [email protected] but none is installed. You must install peer dependencies yourself. npm WARN [email protected] requires a peer of [email protected]^17.0.0 but none is installed. You must install peer dependencies yourself. + [email protected] added 740 packages from 411 contributors and audited 741 packages in 76.797s 18 packages are looking for funding run `npm fund` for details found 3 low severity vulnerabilities run `npm audit fix` to fix them, or `npm audit` for details info Setting up new React Native app in /home/useProejctt info Adding required dependencies npm WARN [email protected] requires a peer of [email protected]^17.0.0 but none is installed. You must install peer dependencies yourself. npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected] (node_modules/fsevents): npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"}) + [email protected] added 1 package and audited 744 packages in 12.559s 18 packages are looking for funding run `npm fund` for details found 3 low severity vulnerabilities run `npm audit fix` to fix them, or `npm audit` for details ╭────────────────────────────────────────────────────────────────╮ │ │ │ New patch version of npm available! 6.14.4 → 6.14.8 │ │ Changelog: https://github.com/npm/cli/releases/tag/v6.14.8 │ │ Run npm install -g npm to update! │ │ │ ╰────────────────────────────────────────────────────────────────╯ info Adding required dev dependencies npm WARN deprecated [email protected]: request-promise-native has been deprecated because it extends the now deprecated request package, see https://github.com/request/request/issues/3142 npm WARN deprecated [email protected]: request has been deprecated, see https://github.com/request/request/issues/3142 npm WARN deprecated [email protected]: this library is no longer supported > [email protected] postinstall /home/useProejctt/node_modules/core-js-pure > node -e "try{require('./postinstall')}catch(e){}" npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected]^2.1.2 (node_modules/@jest/transform/node_modules/jest-haste-map/node_modules/fsevents): npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"}) npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected]^2.1.2 (node_modules/@jest/reporters/node_modules/jest-haste-map/node_modules/fsevents): npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"}) npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected]^2.1.2 (node_modules/@jest/test-sequencenode_modules/jest-haste-map/node_modules/fsevents): npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"}) npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected]^2.1.2 (node_modules/jest-runnenode_modules/jest-haste-map/node_modules/fsevents): npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"}) npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected]^2.1.2 (node_modules/jest-runtime/node_modules/jest-haste-map/node_modules/fsevents): npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"}) npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected]^2.1.2 (node_modules/@jest/core/node_modules/jest-haste-map/node_modules/fsevents): npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"}) npm WARN [email protected] requires a peer of [email protected]^17.0.0 but none is installed. You must install peer dependencies yourself. npm WARN [email protected] requires a peer of [email protected]>=2.8.0 || >= 3.2.0-dev || >= 3.3.0-dev || >= 3.4.0-dev || >= 3.5.0-dev || >= 3.6.0-dev || >= 3.6.0-beta || >= 3.7.0-dev || >= 3.7.0-beta but none is installed. You must install peer dependencies yourself. npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected] (node_modules/fsevents): npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"}) + @babel/[email protected] + @babel/[email protected] + [email protected] + [email protected] + [email protected] + [email protected] + @react-native-community/[email protected] + [email protected] added 569 packages from 303 contributors, updated 3 packages and audited 1319 packages in 62.244s 51 packages are looking for funding run `npm fund` for details found 3 low severity vulnerabilities run `npm audit fix` to fix them, or `npm audit` for details Run instructions for iOS: • cd "/home/useProejctt" && npx react-native run-ios - or - • Open Proejctt/ios/Proejctt.xcodeproj in Xcode or run "xed -b ios" • Hit the Run button Run instructions for Android: • Have an Android emulator running (quickest way to get started), or a device connected. • cd "/home/useProejctt" && npx react-native run-android Run instructions for Windows and macOS: • See https://aka.ms/ReactNative for the latest up-to-date instructions. 
Sorry for the block of code but I really don't know where to go from here and any help would be greatly appreciated! :)
submitted by 94rG6WdXTcuvVz to reactnative [link] [comments]

Need help setting up React Native on Linux

Hey, I am beginner.
I have Xubuntu installed and tried to follow this tutorial to attempt to set up my react native environment.
When I got to the end and ran
react-native init  
I got this
[email protected]:~$ react-native init Proejctt This will walk you through creating a new React Native project in /home/useProejctt Installing react-native... Consider installing yarn to make this faster: https://yarnpkg.com npm WARN deprecated @hapi/[email protected]: Switch to 'npm install joi' npm WARN deprecated @hapi/[email protected]: This version has been deprecated and is no longer supported or maintained npm WARN deprecated @hapi/[email protected]: Moved to 'npm install @sideway/address' npm WARN deprecated @hapi/[email protected]: This version has been deprecated and is no longer supported or maintained npm WARN deprecated @hapi/[email protected]: This version has been deprecated and is no longer supported or maintained npm WARN deprecated [email protected]: [email protected]<3 is no longer maintained and not recommended for usage due to the number of issues. Please, upgrade your dependencies to the actual version of [email protected] npm WARN deprecated [email protected]: fsevents 1 will break on node v14+ and could be using insecure binaries. Upgrade to fsevents 2. npm WARN deprecated [email protected]: https://github.com/lydell/resolve-url#deprecated npm WARN deprecated [email protected]: Please see https://github.com/lydell/urix#deprecated > [email protected] postinstall /home/useProejctt/node_modules/core-js > node -e "try{require('./postinstall')}catch(e){}" Thank you for using core-js ( https://github.com/zloirock/core-js ) for polyfilling JavaScript standard library! The project needs your help! Please consider supporting of core-js on Open Collective or Patreon: > https://opencollective.com/core-js > https://www.patreon.com/zloirock Also, the author of core-js ( https://github.com/zloirock ) is looking for a good job -) npm notice created a lockfile as package-lock.json. You should commit this file. npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected]^1.2.7 (node_modules/jest-haste-map/node_modules/fsevents): npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"}) npm WARN [email protected] requires a peer of [email protected] but none is installed. You must install peer dependencies yourself. npm WARN [email protected] requires a peer of [email protected]^17.0.0 but none is installed. You must install peer dependencies yourself. + [email protected] added 740 packages from 411 contributors and audited 741 packages in 76.797s 18 packages are looking for funding run `npm fund` for details found 3 low severity vulnerabilities run `npm audit fix` to fix them, or `npm audit` for details info Setting up new React Native app in /home/useProejctt info Adding required dependencies npm WARN [email protected] requires a peer of [email protected]^17.0.0 but none is installed. You must install peer dependencies yourself. npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected] (node_modules/fsevents): npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"}) + [email protected] added 1 package and audited 744 packages in 12.559s 18 packages are looking for funding run `npm fund` for details found 3 low severity vulnerabilities run `npm audit fix` to fix them, or `npm audit` for details ╭────────────────────────────────────────────────────────────────╮ │ │ │ New patch version of npm available! 6.14.4 → 6.14.8 │ │ Changelog: https://github.com/npm/cli/releases/tag/v6.14.8 │ │ Run npm install -g npm to update! │ │ │ ╰────────────────────────────────────────────────────────────────╯ info Adding required dev dependencies npm WARN deprecated [email protected]: request-promise-native has been deprecated because it extends the now deprecated request package, see https://github.com/request/request/issues/3142 npm WARN deprecated [email protected]: request has been deprecated, see https://github.com/request/request/issues/3142 npm WARN deprecated [email protected]: this library is no longer supported > [email protected] postinstall /home/useProejctt/node_modules/core-js-pure > node -e "try{require('./postinstall')}catch(e){}" npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected]^2.1.2 (node_modules/@jest/transform/node_modules/jest-haste-map/node_modules/fsevents): npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"}) npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected]^2.1.2 (node_modules/@jest/reporters/node_modules/jest-haste-map/node_modules/fsevents): npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"}) npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected]^2.1.2 (node_modules/@jest/test-sequencenode_modules/jest-haste-map/node_modules/fsevents): npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"}) npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected]^2.1.2 (node_modules/jest-runnenode_modules/jest-haste-map/node_modules/fsevents): npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"}) npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected]^2.1.2 (node_modules/jest-runtime/node_modules/jest-haste-map/node_modules/fsevents): npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"}) npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected]^2.1.2 (node_modules/@jest/core/node_modules/jest-haste-map/node_modules/fsevents): npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"}) npm WARN [email protected] requires a peer of [email protected]^17.0.0 but none is installed. You must install peer dependencies yourself. npm WARN [email protected] requires a peer of [email protected]>=2.8.0 || >= 3.2.0-dev || >= 3.3.0-dev || >= 3.4.0-dev || >= 3.5.0-dev || >= 3.6.0-dev || >= 3.6.0-beta || >= 3.7.0-dev || >= 3.7.0-beta but none is installed. You must install peer dependencies yourself. npm WARN optional SKIPPING OPTIONAL DEPENDENCY: [email protected] (node_modules/fsevents): npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"}) + @babel/[email protected] + @babel/[email protected] + [email protected] + [email protected] + [email protected] + [email protected] + @react-native-community/e[email protected] + [email protected] added 569 packages from 303 contributors, updated 3 packages and audited 1319 packages in 62.244s 51 packages are looking for funding run `npm fund` for details found 3 low severity vulnerabilities run `npm audit fix` to fix them, or `npm audit` for details Run instructions for iOS: • cd "/home/useProejctt" && npx react-native run-ios - or - • Open Proejctt/ios/Proejctt.xcodeproj in Xcode or run "xed -b ios" • Hit the Run button Run instructions for Android: • Have an Android emulator running (quickest way to get started), or a device connected. • cd "/home/useProejctt" && npx react-native run-android Run instructions for Windows and macOS: • See https://aka.ms/ReactNative for the latest up-to-date instructions. 
Sorry for the block of code but I really don't know where to go from here and any help would be greatly appreciated! :)
submitted by 94rG6WdXTcuvVz to learnprogramming [link] [comments]

Non-linear stretch and update on the open-source gyro-assisted video stabilization project

Hey everyone. You might remember this post from around a month ago about developing a program that can use Betaflight blackbox data for stabilizing FPV footage similar to Reelsteady. Here's an update on the project.
TL;DR: Stabilization doesn't work yet but I'm slowly getting there. The code so far can be found on this Github Repo, which currently contains working code for a camera calibration utility and a utility for non-linear stretching (superview-like) of 4:3 video to 16:9. I've made a binary available for download with the current features.
So yeah… In the last post I used some low-resolution DVR footage for doing a proof of concept using a quick and dirty Blender project. In that post I naively thought that synchronizing gyro data with the footage wouldn't be too difficult. After getting access to some test footage (thanks to agent_d00nut, kyleli, and muteFPV!), and finding some example GoPro footage online, I've come to realize that perfect synchronization is absolutely critical. An offset of a couple of frames can exaggerate camera shake instead of removing it (this wasn't noticeable in the DVR). Moreover, logging rates aren't perfectly accurate, so matching a single point between footage and gyro data isn't enough. Doing this manually is extremely tedious.
This example took around an hour of tweaking to get a decent synchronization. The way forward is thus automatic synchronization. It also turns out that lens correction and calibration is required for proper and seamless stabilization. For instance, the result of wide-angle lens tilted 10 degrees will look nothing a zoom lens tilted the same amount. This also explains the wobbling in the example video.
During my research about video stabilization methods I found his paper by Karpenko et al. detailing a method of doing video stabilization and rolling shutter correction using gyroscopes with some very concise MATLAB code. While not exactly a step by step tutorial for beginners as I had jokingly hoped, I was still able to gather the overall ideas. The paper served as a springboard for further research, which lead me to learning and reading about quaternions, SLERP, camera matrices, image projections etc.
Instead of Blender, I have moved over to using OpenCV and Python for image processing, and Pyside2 for UI. OpenCV is used for computer vision and contains a bunch of features for video analysis and image processing. I'm just using Python since that's what I'm most familiar with. Hopefully, performance isn't too big of an issue since both OpenCV and Numpy are designed to be fast.
Here's what I've worked on since the last post:
All of this can be found in the Github repository. Feel free to take a peek, I've tried to keep the code nice and readable.
The plan moving forward is to focus on developing a working stabilization program for GoPro footage with embedded gyro metadata. This is something we know can work from reelsteady. Afterwards, support for blackbox log can be implemented, which should in theory just be a coordinate transformation to account for camera orientation and uptilt from what I can tell. If only the former works, at least there will be a (probably inferior) open source alternative to Reelsteady :).
My current plan for the gyro synchronization is to use the included optical flow functions in OpenCV to estimate camera movement, and "slide" the gyro data around until the difference is minimized, similar to the way Karpenko et al. does it. Doing this at two separate points in the video should be enough to compute clock offsets and get the best fit.
When the synchronization works, a low pass filter can be used to create a smooth virtual camera. A perspective transformation of the image plane can then be used to match the difference between the real and virtual camera. The image plane will essentially be "rotated" to match how it would look from the virtual camera, if that makes sense (more detail in the paper linked above for anyone interested). This virtual rotation only works with undistorted footage, hence the need for lens calibration. Another thing which may or may not be an issue is rolling shutter correction, but that'll have to wait until the other stuff works.
Some people asked for it last time, so I added a donate button on the Github page. You can throw a couple of bucks my way if you want (maybe that'll justify me getting a action camera myself for testing :) ), but be warned: While I'm optimistic, it's still not 100% certain that it will work with blackbox data as envisioned. GoPro metadata will probably work as mentioned before. Also, I've been kinda busy preparing for moving out/starting at uni next month, but I'll try to work on the project whenever I'm not procrastinating.
submitted by EC171 to Multicopter [link] [comments]

TA Players - Favorite play styles? Styles you wish existed? (novices encouraged!)

Update: nerfing autos to about 55% was too extreme, and I think makes heavies too strong. Next I'm going to experiment with reducing the auto hitbox with a more slight damage nerf. I didn't realize that in GOTY it's 5x the hitbox size of spin, and OOTB they are more comparable.
In terms of inheritance... it feels weird with everything at 100%. Even if that's the most physically accurate and arguably most intuitive for new players, it's a lot to ask a player base that has spent hundreds/thousands of hours perfecting aim at 50%. I think I'll just try to add more impact weapons with 50% and 100% inheritance variants and leave belt/chain alone.
I'm also thinking about custom classes - my first idea is to try a new type of light chaser where they get a light fusion mortar as a primary (less damage and a little less radius than the heavy, but still enough to one shot a capper if done perfectly)
Any other class ideas?

Hi everyone! I'm looking at doing a GOTY (game of the year - when the game had all 9 classes) mod, with the work Griffon has done for the servers and Mcoot has done for TAMods! (Thanks both!)

This is a long post about my opinionated attempt to bring my platonic form of the game back! Please hit me with the constructive feedback, and any nostalgia about moments or things you like about TA (recent or old, both are helpful). Or even things you wish were viable in TA! I can use this feedback in the tweaks I make for the future. I want to emphasize brining players back to the game. I get that balance is really hard and that this won't make everyone happy, but it's an experiment.
Overall Changes:
This game is modded from GOTY. I plan to keep some changes from OOTB.
These are essentially the following. (not finished yet)
Keeping the third weapon slot for utility - ELF Projector, Repair tool, or shock lance (with light, medium, heavy variants?). Hopefully this doesn't change class balance too much, but does allow for a little more variety.
Some rework of perk system - it feels like there should be 1, more significant perk, than picking out of 2, when some classes should have things by default (like heavies being able to kill players that run into them too fast, as in OOTB)

Tweaks to auto
Personally, I feel the game is at its best with higher speed chases and impact weapons, but I still like _some_ autos and sniper rifles. As such, I currently have substantially nerfed autos - they do a little more than 50% regular damage (more extreme than necessary for now, but good for play testing to see how the meta changes), and made everything (including belt items) 100% inheritance. This means that all of your shots use all of your momentum. In the past, Inheritance was different for different weapons. Ideally, this reduces confusion and increases consistency, but I need to test more and see how people feel after getting used to it. I'm tossing around the idea of having bullet drop - meaning autos have a trajectory kind of like the bolt launcher (mentioned in a recent post by Gierling). I would like more opinions about this! If people are interested in play testing, we can schedule some times.

Tweaks to hit scan
Hitscan pistols are removed from the game. I don't feel that they improve gameplay experience.
Shotguns are nerfed, but not as much as auto, because I didn't feel they were good enough to be used very much.
(see sentry section for changes to rifles)

Tweaks to impact and belt items
none so far, besides 100% inheritance. I'm thinking of changing the number of belt items in some situations.

Class Changes:
Pathfinder: I want it to have rage and egocentric by default. I also like how fragile it is, that it can be 1-shot midair-ed by mediums and heavies.
Sentinel: I don't want to feel guilty for playing this class. As such, I've repositioned it as harder to use, and substantially worse for skirmishing and beginners (although hopefully still vital for CTF). This one is tricky, because this class is so strong with a good player. I've removed BXT1 and BXT1A and decreased some damage on SAP20 and phase. I want SEN to usually lose battles with players that come to harass, not be able to wreck full health cappers with a couple shots, and reposition the role for stopping enemy regen, and finishing blows. That being said, I want headshots to do a lot more damage, so that crazy skilled players and still do powerful things (not sure how to do this yet). Maybe offer a light fusion mortar as an alternative primary weapon?

Infiltrator: No substantial changes yet besides the auto nerf. Should have close combat by default. Not sure if I can allow this class to have a shock lance that doesn't take the secondary weapon slot. Maybe a fun option for repair tool that does decent damage to enemy base assets?

Soldier: No substantial changes yet. Plan to make spare spinfusor stronger

Technician: Currently needs a better secondary weapon and some things to make it unique. Maybe always pilot perk, and it can have a longer range repair tool and better ELF projector?

Raider: No big changes, but I have to think more about NJ5-B and Plasma

Heavies: I haven't played them as much, so I would like help with these! I think it would be cool if doom bringer titan launch was a bit longer range and could be "click to explode". If there's a way to bring back saber launcher, we should do it for the n00bs.

Bringing Players Back:
Now all of the things I've talked about above, are relatively easy changes. The hard part is getting players. I think this section (from my perspective) is devoted to the game, rather than this mod. If people hate every change I've proposed, let's look at this section as separate. TA is a great game, and it would be great to have a little bit more of a dedicated community to it.
That being said, there are a few tricky and confusing things to navigate around. Some of the community at its current state, is horrifyingly toxic. I've had some games with hate speech and bullying, old players returning who have been targeted by insta-ELF projectoring bots, and even doxxing, because hackers seem to have compromised Hirez login servers. This is why Griffon's server setup is so, so important. It's very likely that you are at risk when you login to Hirez TA servers.

So the way I see it, we have a few big problems to solve.
  1. get players en masse to the community servers. I think they will grow if the overall expectation is "there's probably at least a half full game" than "I bet no one is on"
  2. The game is probably totally inaccessible to new players at this point. Play styles might be gone in OOTB, and the current players now are really good in comparison to the typical beginner. This can be disheartening.
  3. Setting up and joining community servers is harder than it could be.
    1. Getting your account verified through taserverbot works for some people, never does for others, sometimes fails the first time, etc. While I don't have a good technical solution at the moment, I think this can be improved and would appreciate work on this.
    2. Fast, gif tutorial to set up TAMods and community should be a thing. (I will probably do this myself unless someone else gets to it first).
    3. TAMods sometimes crashes on older machines, or fails to work at all for some people. I don't know if this is because they aren't installing the right C++ binaries, or a programming error. I do know that one person had it work for a while, but then stop working permanently, even with fresh installs of Tribes and TAMods. (I love TAMods - just saying if there are ways to make it slightly more consistent, that would be huge)

My proposed solutions:
  1. Advertise to reddit and discords about scheduled play times for the new mod I'm working on (or we can pick regular GOTY, OOTB, or something else - it feels like something new and exciting to announce is important though). I don't plan on having the server up 24/7, I plan on having it up at times when people will know that people will be in it, and going from there. This seems to be a model that works well for the mixer crew.
  2. Adding in a n00b mode that players can be put into on a server - for each class, they can just get a little more health, energy, hitbox size on weapons, etc. Not sure how hard this would be to implement, or if this would benefit the game at all.
  3. Adding the GIF tutorial as discussed, and asking for help from the community! :D
submitted by kaagg to Tribes [link] [comments]

An introduction to Linux through Windows Subsystem for Linux

I'm working as an Undergraduate Learning Assistant and wrote this guide to help out students who were in the same boat I was in when I first took my university's intro to computer science course. It provides an overview of how to get started using Linux, guides you through setting up Windows Subsystem for Linux to run smoothly on Windows 10, and provides a very basic introduction to Linux. Students seemed to dig it, so I figured it'd help some people in here as well. I've never posted here before, so apologies if I'm unknowingly violating subreddit rules.

Getting Windows Subsystem for Linux running smoothly on Windows 10

GitHub Pages link

Introduction and motivation

tl;dr skip to next section
So you're thinking of installing a Linux distribution, and are unsure where to start. Or you're an unfortunate soul using Windows 10 in CPSC 201. Either way, this guide is for you. In this section I'll give a very basic intro to some of options you've got at your disposal, and explain why I chose Windows Subsystem for Linux among them. All of these have plenty of documentation online so Google if in doubt.

Setting up WSL

So if you've read this far I've convinced you to use WSL. Let's get started with setting it up. The very basics are outlined in Microsoft's guide here, I'll be covering what they talk about and diving into some other stuff.

1. Installing WSL

Press the Windows key (henceforth Winkey) and type in PowerShell. Right-click the icon and select run as administrator. Next, paste in this command:
dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart 
Now you'll want to perform a hard shutdown on your computer. This can become unecessarily complicated because of Window's fast startup feature, but here we go. First try pressing the Winkey, clicking on the power icon, and selecting Shut Down while holding down the shift key. Let go of the shift key and the mouse, and let it shutdown. Great! Now open up Command Prompt and type in
wsl --help 
If you get a large text output, WSL has been successfully enabled on your machine. If nothing happens, your computer failed at performing a hard shutdown, in which case you can try the age-old technique of just holding down your computer's power button until the computer turns itself off. Make sure you don't have any unsaved documents open when you do this.

2. Installing Ubuntu

Great! Now that you've got WSL installed, let's download a Linux distro. Press the Winkey and type in Microsoft Store. Now use the store's search icon and type in Ubuntu. Ubuntu is a Debian-based Linux distribution, and seems to have the best integration with WSL, so that's what we'll be going for. If you want to be quirky, here are some other options. Once you type in Ubuntu three options should pop up: Ubuntu, Ubuntu 20.04 LTS, and Ubuntu 18.04 LTS.
![Windows Store](https://theshepord.github.io/intro-to-WSL/docs/images/winstore.png) Installing plain-old "Ubuntu" will mean the app updates whenever a new major Ubuntu distribution is released. The current version (as of 09/02/2020) is Ubuntu 20.04.1 LTS. The other two are older distributions of Ubuntu. For most use-cases, i.e. unless you're running some software that will break when upgrading, you'll want to pick the regular Ubuntu option. That's what I did.
Once that's done installing, again hit Winkey and open up Ubuntu. A console window should open up, asking you to wait a minute or two for files to de-compress and be stored on your PC. All future launches should take less than a second. It'll then prompt you to create a username and password. I'd recommend sticking to whatever your Windows username and password is so that you don't have to juggle around two different usepassword combinations, but up to you.
Finally, to upgrade all your packages, type in
sudo apt-get update 
And then
sudo apt-get upgrade 
apt-get is the Ubuntu package manager, this is what you'll be using to install additional programs on WSL.

3. Making things nice and crispy: an introduction to UNIX-based filesystems

tl;dr skip to the next section
The two above steps are technically all you need for running WSL on your system. However, you may notice that whenever you open up the Ubuntu app your current folder seems to be completely random. If you type in pwd (for Present Working Directory, 'directory' is synonymous with 'folder') inside Ubuntu and hit enter, you'll likely get some output akin to /home/. Where is this folder? Is it my home folder? Type in ls (for LiSt) to see what files are in this folder. Probably you won't get any output, because surprise surprise this folder is not your Windows home folder and is in fact empty (okay it's actually not empty, which we'll see in a bit. If you type in ls -a, a for All, you'll see other files but notice they have a period in front of them, which tells bash that they should be hidden by default. Anyways).
So where is my Windows home folder? Is WSL completely separate from Windows? Nope! This is Windows Subsystem for Linux after all. Notice how, when you typed pwd earlier, the address you got was /home/. Notice that forward-slash right before home. That forward-slash indicates the root directory (not to be confused with the /root directory), which is the directory at the top of the directory hierarchy and contains all other directories in your system. So if we type ls /, you'll see what are the top-most directories in your system. Okay, great. They have a bunch of seemingly random names. Except, shocker, they aren't random. I've provided a quick run-down in Appendix A.
For now, though, we'll focus on /mnt, which stands for mount. This is where your C drive, which contains all your Windows stuff, is mounted. So if you type ls /mnt/c, you'll begin to notice some familiar folders. Type in ls /mnt/c/Users, and voilà, there's your Windows home folder. Remember this filepath, /mnt/c/Users/. When we open up Ubuntu, we don't want it tossing us in this random /home/ directory, we want our Windows home folder. Let's change that!

4. Changing your default home folder

Type in sudo vim /etc/passwd. You'll likely be prompted for your Ubuntu's password. sudo is a command that gives you root privileges in bash (akin to Windows's right-click then selecting 'Run as administrator'). vim is a command-line text-editing tool, kinda like an even crummier Notepad, which is a pain to use at first but bear with me and we can pull through. /etc/passwd is a plaintext file that does not store passwords, as the name would suggest, but rather stores essential user info used every time you open up WSL.
Anyway, once you've typed that in, your shell should look something like this: ![vim /etc/passwd](https://theshepord.github.io/intro-to-WSL/docs/images/vim-etc-passwd.png)
Using arrow-keys, find the entry that begins with your Ubuntu username. It should be towards the bottom of the file. In my case, the line looks like
theshep:x:1000:1000:,,,:/home/pizzatron3000:/bin/bash 
See that cringy, crummy /home/pizzatron3000? Not only do I regret that username to this day, it's also not where we want our home directory. Let's change that! Press i to initiate vim's -- INSERT -- mode. Use arrow-keys to navigate to that section, and delete /home/ by holding down backspace. Remember that filepath I asked you to remember? /mnt/c/Users/. Type that in. For me, the line now looks like
theshep:x:1000:1000:,,,:/mnt/c/Users/lucas:/bin/bash 
Next, press esc to exit insert mode, then type in the following:
:wq 
The : tells vim you're inputting a command, w means write, and q means quit. If you've screwed up any of the above sections, you can also type in :q! to exit vim without saving the file. Just remember to exit insert mode by pressing esc before inputting commands, else you'll instead be writing to the file.
Great! If you now open up a new terminal and type in pwd, you should be in your Window's home folder! However, things seem to be lacking their usual color...

5. Importing your configuration files into the new home directory

Your home folder contains all your Ubuntu and bash configuration files. However, since we just changed the home folder to your Window's home folder, we've lost these configuration files. Let's bring them back! These configuration files are hidden inside /home/, and they all start with a . in front of the filename. So let's copy them over into your new home directory! Type in the following:
cp -r /home//* ~ 
cp stands for CoPy, -r stands for recursive (i.e. descend into directories), the * is a Kleene Star and means "grab everything that's here", and the ~ is a quick way of writing your home directory's filepath (which would be /mnt/c/Users/) without having to type all that in again. Once you've run this, all your configuration files should now be present in your new home directory. Configuration files like .bashrc, .profile, and .bash_profile essentially provides commands that are run whenever you open a new shell. So now, if you open a new shell, everything should be working normally. Amazing. We're done!

6. Tips & tricks

Here are two handy commands you can add to your .profile file. Run vim ~/.profile, then, type these in at the top of the .profile file, one per line, using the commands we discussed previously (i to enter insert mode, esc to exit insert mode, :wq to save and quit).
alias rm='rm -i' makes it so that the rm command will always ask for confirmation when you're deleting a file. rm, for ReMove, is like a Windows delete except literally permanent and you will lose that data for good, so it's nice to have this extra safeguard. You can type rm -f to bypass. Linux can be super powerful, but with great power comes great responsibility. NEVER NEVER NEVER type in rm -rf /, this is saying 'delete literally everything and don't ask for confirmation', your computer will die. You've been warned. Be careful.
export DISPLAY=:0 if you install XLaunch VcXsrv, this line allows you to open graphical interfaces through Ubuntu. The export sets the environment variable DISPLAY, and the :0 tells Ubuntu that it should use the localhost display.

Appendix A: overview of top-level UNIX directories

tl;dr only mess with /mnt, /home, and maybe maybe /usr. Don't touch anything else.
  • bin: binaries, contains Ubuntu binary (aka executable) files that are used in bash. Here you'll find the binaries that execute commands like ls and pwd. Similar to /usbin, but bin gets loaded earlier in the booting process so it contains the most important commands.
  • boot: contains information for operating system booting. Empty in WSL, because WSL isn't an operating system.
  • dev: devices, contains information for Ubuntu to communicate with I/O devices. One useful file here is /dev/null, which is basically an information black hole that automatically deletes any data you pass it.
  • etc: no idea why it's called etc, but it contains system-wide configuration files
  • home: equivalent to Window's C:/Users folder, contains home folders for the different users. In an Ubuntu system, under /home/ you'd find the Documents folder, Downloads folder, etc.
  • lib: libraries used by the system
  • lib64 64-bit libraries used by the system
  • mnt: mount, where your drives are located
  • opt: third-party applications that don't have any dependencies outside the scope of their own package
  • proc: process information, contains details about your Linux system, kind of like Windows's C:/Windows folder
  • run: directory for programs to store runtime information. Similarly to /bin vs /usbin, run has the same function as /varun, but gets loaded sooner in the boot process.
  • srv: server folder, holds data to be served in protocols like ftp, www, cvs, and others
  • sys: system, used by the Linux kernel to set or obtain information about the host system
  • tmp: temporary, runtime files that are cleared out after every reboot. Kinda like RAM in that way.
  • usr: contains additional UNIX commands, header files for compiling C programs, among other things. Most of everything you install using apt-get ends up here.
  • var: variable, contains variable data such as logs, databases, e-mail etc, but that persist across different boots.

Appendix B: random resources

submitted by HeavenBuilder to learnprogramming [link] [comments]

Top sites to practice hacking skills (legally)

Top sites to practice hacking skills (legally)
credit- icssindia.in
These Websites to exercise your hacking skills whether you are a hacker, cybersecurity, pen-tester, or still a noob.
These vulnerable websites are great for developing our minds, increasing our capacity to solve problems, new innovative ideas come to our minds. Also, you will face brainfuck a lot of difficulties. Never give up always try to give your best. Because if you want to be a professional hacker, then you must know about the hacker attitudes and …
“real hackers never give up”
There are a lot of gaping holes in almost every security system. And to discover these is this is also a great opportunity to also discover the various tools that are needed for hacking. What the different options are etc. Use these websites to practice your hacking skills so you can be the best defense.
An attack is definitely the best form of defense
This applies to a lot of companies, they are hacking their own websites and even recruiting ethical hackers in an attempt to find vulnerabilities before the bad guys do as such ethical hacking is now a much sought after skill.

pwnable.kr

pwnable.kr is a non-commercial wargame site which provides various pwn challenges regarding system exploitation. the main purpose of pwnable.kr is ‘fun’. please consider each of the challenges as a game. while playing pwnable.kr, you could learn/improve system hacking skills but that shouldn’t be your only purpose.

pwnable.tw

Pwnable.tw is a wargame site for hackers to test and expand their binary exploiting skills.
HOW-TO
  • Try to find out the vulnerabilities that exist in the challenges, exploit the remote services to get flags.
  • The flag is usually at /home/xxx/flag, but sometimes you have to get a shell to read them.
  • Most of the challenges are running on Ubuntu 16.04/18.04 docker image.
  • You can share a write-up or exploit code in your profile, only players who also solved the same challenge are able to see them.

hack.me

Hack.me is a FREE, community-based project powered by eLearnSecurity. The community can build, host, and share vulnerable web application code for educational and research purposes. It aims to be the largest collection of “runnable” vulnerable web applications, code samples and CMS’s online. (This is more a test website. But still can improve your hacking skills a lot ..!)
The platform is available without any restriction to any party interested in Web Application Security:
  • students
  • universities
  • researchers
  • penetration testers
  • web developers

CTFlearn

CTFlearn is an ethical hacking platform that enables tens of thousands to learn, practice, and compete. The main attraction, of course, is the user-submitted Problems and Challenges which span the typical CTF theology such as Binary Exploitation, Cryptography, Reverse engineering, Forensics, Web attacks (see XSS, SQL Injection and the likes). There are also group the challenges by Popularity, level of Difficulty, and order of Appearance.

Google Gruyere

Gruyere It’s not often we see the pairing of cheese and hacking, but this website is a lot like good cheese—full of holes. It also uses a “cheesy” code and the entire design is cheese-based. Gruyere is a great option for beginners who want to dive into finding and exploiting vulnerabilities, but also learn how to play on the other side and defend against exploits.
Gruyere is written in Python, with bugs that aren’t specific to Python, and offers a substantial number of security vulnerabilities chosen to suit beginners. Some of the vulnerabilities are:
  • Cross-site scripting (XSS)
  • Cross-site request forgery (XRF)
  • Remote code execution
  • DoS attacks
  • Information disclosure
Gruyere code lab has divided vulnerabilities into different sections, and in each section, you will have a task to find that vulnerability. Using both black and white box hacking, you’ll need to find and exploit bugs.

Root Me

Root Me A multilanguage security training platform, Root Me is a great place for testing and advancing your hacking skills. It features over 300 challenges which are updated regularly and more than 50 virtual environments, all to provide a realistic environment. Root Me also has a passionate community of over 200,0000 members, all of whom are encouraged to participate in the development of the project and earn recognitions.
Different subjects covered on Root Me include:
  • Digital investigation
  • Automation
  • Breaking encryption
  • Cracking
  • Network challenges
  • SQL injection
It’s a solid platform and a great way to practice your hacking skills, although it’s not as beginner-friendly as some of the other entries on this list.

Hack The Box

Hack The Box (HTB) is an online platform allowing you to test your penetration testing skills. It contains several challenges that are constantly updated. Some of them simulating real-world scenarios and some of them leaning more towards a CTF style of challenge. It contains several challenges that are constantly updated. Some of them simulating real-world scenarios and some of them leaning more towards a CTF style of challenge. You should try this site out if you have an interest in network security or information security.
I suggest you to try to hack your way into this website.”

Hacking-Lab

Hacking-Lab is an online ethical hacking, computer network, and security challenge platform, dedicated to finding and educating cybersecurity talents. … HackingLab’s goal is to raise awareness towards increased education and ethics in information security.provides the CTF (Capture The Flag) challenges for the European Cyber Security Challenge but hosts challenges on their own platform which anyone can take part in once you have registered.
Hacking-Lab is providing CTF and mission style challenges for international competitions like the European Cyber Security Challenge, and free OWASP TOP 10 online security labs. Hacking-Lab’s goal is to raise awareness towards increased education and ethics in information security.

Game of Hacks

Game of Hacks, This game was designed to test your application hacking skills. You will be presented with vulnerable pieces of code and your mission if you choose to accept it is to find which vulnerability exists in that code as quickly as possible. In the game, developers and security professionals test their application hacking skills, improve their code security know-how, and facilitate better security practices in hope of reducing the number of vulnerabilities in their applications.
Available for desktop, tablet, and mobile, Game of Hacks presents developers with vulnerable pieces of code and challenges them to identify the application layer vulnerability as quickly as possible. It even has a two-player mode allowing head-to-head competition. Players analyze vulnerabilities including SQL injection, XSS, log forgery, path traversal, parameter tampering, and others in myriad programming languages.

OverTheWire

OverTheWire The wargames offered by the OverTheWire community can help you to learn and practice security concepts in the form of fun-filled games. To find out more about a certain wargame, just visit its page linked from the menu on the left. Suggested order to play the games in
  1. Bandit
  2. Leviathan or Natas or Krypton
  3. Narnia
  4. Behemoth
  5. Utumno
  6. Maze
Each shell game has its own SSH port Information about how to connect to each game using SSH is provided in the top left corner of the page. Keep in mind that every game uses a different SSH port.

microcorruption.com

microcorruption.com Scattered throughout the world in locked warehouses are briefcases filled with Cy Yombinator bearer bonds that could be worth billions comma billions of dollars. You will help steal the briefcases.
Cy Yombinator has cleverly protected the warehouses with Lockitall electronic lock devices. Lockitall locks are unlockable with an app. We’ve positioned operatives near each warehouse; each is waiting for you to successfully unlock the warehouse by tricking out the locks. The Lockitall devices work by accepting Bluetooth connections from the Lockitall LockIT Pro app. We’ve done the hard work for you: we spent $15,000 on a development kit that includes remote-controlled locks for you to practice on, and reverse engineered enough of it to build a primitive debugger.
Using the debugger, you’ll be able to single-step the lock code, set breakpoints, and examine memory on your own test instance of the lock. You’ll use the debugger to find an input that unlocks the test lock, and then replay it to a real lock. It should be a milk run. Good luck. We’ll see you on a beach in St Tropez once you’re done.

XSS game

XSS game Cross-site scripting (XSS) bugs are one of the most common and dangerous types of vulnerabilities in Web applications. These nasty buggers can allow your enemies to steal or modify user data in your apps and you must learn to dispatch them, pronto!
In this training program, you will learn to find and exploit XSS bugs. You’ll use this knowledge to confuse and infuriate your adversaries by preventing such bugs from happening in your applications. There will be cake at the end of the test.

HackThis!!

HackThis!! was initially designed so that the students can be taught how to hack, and teach the students about dumps and defacement. If you are an expert hacker then for you, there are 50 levels of difficulty to offer. The website comes with a great online community to help you with hacking, and it would keep you up to date with the security news.

crackmes.one

crackmes.one This is a simple place where you can download crackmes to improve your reverse engineering skills. If you like to submit a crackme or a solution to one of them, you must register. But before that, I strongly recommend you to read the FAQ

HackThisSite

Hack This Site is a free, safe, and legal training ground for hackers to test and expand their hacking skills. HackThisSite commonly referred to as HTS, is an online hacking and security website founded by Jeremy Hammond. More than just another hacker wargames site, we are a living, breathing community with many active projects in development, with a vast selection of hacking articles and a huge forum where users can discuss hacking, network security, and just about everything. Tune in to the hacker underground and get involved with the project.

Pentest training

Pentest training is a simple website used as a hub for information revolving around the varies services we offer to help both experienced and new penetration testers practice and hone their skills. We offer a fully functioning penetration testing lab that is ever-increasing in size, complexity, and diversity. The lab has a fully functioning Windows domain with various Windows OS’s. There is also a selection of Boot2Root Linux machines to practice your CTF and escalation techniques and finally, pre-built web application training machines.

Hellbound Hackers

Hellbound Hackers provides a hands-on approach to computer security. Learn how hackers break-in, and how to keep them out. Huge resource for computer security researchers. The website emphasizes being hands-on, and it offers you many challenges to make you the best hacker out there. It offers a variety of challenges to teach you how to identify potential vulnerabilities, and it also suggests the ways to patch the vulnerabilities. The website comes with an array of tutorials and a thriving community of more than 100K registered members.

HAX.TOR

hax.tor.hu is a very old site (founded in 2006). But it serves the purpose of learning the stuff. Many problems are not functioning because of technology changes. Because many problems relied on the old PHP version flaws. Also, the player gets a free shell account to use (with web/mail hosting) on a server (with gigabit bandwidth) dedicated to security folks.
A few examples of HaX.ToR challenges:
Level 1. Make a nasa.gov URL display a text of my choice Level 7. snifflog.txt – ngrep format Level 13. PHP with a source – needs exploiting and/or -t-b thinking Level 16. root:hsmfs;[email protected] Level 21. Backdoor on a suspended domain Level 26. PHP file manager with a source – needs more exploit Level 28. telnet://hax.tor.hu:1800 – Google Word Game Level 33. Defense Information Systems Agency – 209.22.99.66 Level 39. China Science And Technology Network Level 48. .htaccess editor vs basic auth Level 49. Forged DNS from the CIA

ThisIsLegal

ThisisLegal, a hacker wargames site with much more – such as forums and tutorials. The aim of the site is to help you learn and improve as much as we can and also provide a community with a chance to chat. The site is always up for suggestions for improvement and any challenge submissions or tutorial content are also welcome so please help to improve our community.
submitted by icssindia to HackingTechniques [link] [comments]

Day Trading Binary Options

Does anyone have suggestions for a beginner looking to start day trading binary options? Meaning: what market, what platform, what tutorials, etc. I’ve already began researching and am very interested.
submitted by Connorf7845 to stocks [link] [comments]

Python packaging documentation sucks.

After trying to wrap my head around publishing a simple Python package for weeks, I realized that the "official" docs on Python packaging irks me. Here's why:

Doesn't follow common practice

Python's standard library docs (e.g. json) usually consist of two parts:
  1. An introductory section with examples for simple use cases.
  2. A full API reference (with more examples, constraints, edge cases, etc.)
This format is good for beginners and experts alike. For newcomers, the introduction helps you grasp what the module does and how to use it. Want to learn more about a class? Click on its name to jump to its API reference. Experienced developers can skip the intro and dive into the API reference right away.
This two-track scheme is so effective that many popular projects also do it. Flask and requests docs are easy to read.
Not so for setup(). The docs for distutils.core.setup() is sprawled all over the place.
What is py_modules for? The API reference is extremely terse. You have to dig through the tutorial to read the full description to figure it out.
What does setup.py sdist do? The tutorial throws more details than a tutorial aught to. In contrast, the API reference is lacking.
This inversion of content is confusing, and deviates from established practices.
We could learn a thing or to from npm. See: docs on package.json.

Misleading

There's a big fat memo on top of every tutorial:
Note: This document is being retained solely until the setuptools documentation at https://setuptools.readthedocs.io/en/latest/setuptools.html independently covers all of the relevant information currently included here.
Which encourages the reader to click on the link and start reading setuptools docs. But now the reader is confused, because setuptools does NOT "independently cover all of the relevant info" yet. It discusses new and changed setup() keywords without discussing what setup() does. You have to go back to distutils docs for that. Quite obviously, there is no link leading back to distutils docs.
Did I say it's sprawled all over the place?
What needs to be done:
  1. Until setuptools docs becomes truly independent, add a link back to distutils docs saying: SORRY. NOT FOR NEWBS WHO DON'T KNOW WHAT DISTUTILS IS. READ IT FIRST BEFORE READING OUR DOCS.
  2. Make it truly independent.

Doesn't explain the little details

One thing that confounded me as a beginner: Where do setup.py commands come from? How does it parse the options? I certainly didn't parse sys.argv myself.
The answer is that distutils/setuptools does the actual work. And this goes against the common sense that you pass a command to the program that does the actual work.
Why did it have to be setup.py sdist, and not distutils sdist or setuptools sdist? All the cool kids do it the right way--npm build and go build and cargo build. In 2020, it would be worth adding this one line to the docs:
setup() turns setup.py into a command-line program that handles packaging your project†. Instead of running setuptools as a standalone program, you call it through setup.py.
Oh, wait, packging.python.org already did that. It's a good thing distutils and setuptools docs explicitly link to it. (They don't. They really should.)

packaging.python.org needs updates, too

The bulk of their useful documentation actually lives in https://packaging.python.org/guides/ . This is great, except that many of its guides have the foreboding header:
The guides aren't organized, either; it's one big list. Some taxonomy is desired:
If possible, the following excellent resources should also be added here.

TLDR?

Python Packaging might have got better. Documentation needs to get better, too.
submitted by lifeeraser to Python [link] [comments]

SQLite vs Visual Studio 2019: can't use 'entity framework'? What am I missing?

It seems that support for SQLite ended with Visual Studio 2017? Not sure wth is going on...
I've got SQLite up and going, as far as I can tell. I'm using SQLite/SQL Server Compact Toolbox enabling me to create db's and tables and populate my table, and I've got my database 'connected' (I can open and close it without errors) and I've got System.Data.SQLite in my 'References'. However, any attempt to 'select' anything in that Table results in 'no such column'. Playing around with the code I even got a 'no such Table'.
In digging around, I keep coming up with a break down in the tutorials where I can't install the 'entity framework' or 'EF'. If I 'add a new item' to my project and choose "EF Designer from data..." and I'm only presented with my sqlexpress dbo's, not my SQLite db. A New Connection has no options for SQLite.
I can't tell if it (the tutorial procedures) breaks down because I've already got it or because I'm missing something else. It seems as if I'm missing it: as if there's no translation of data commands between VS and the db. All information and links regarding SQLite and using db's seems to stop before VS 2019. Anybody have any current information? My project needs local storage, no servers. I'm about to go back to file storage...
Any help would be appreciated, thank you!

Edit: SOLVED! Thank you, everyone!!! Turned out that I had two issues, a corrupted database and incorrect usage of the SELECT command.
Procedure:
I started at this tutoial (make sure that you give it a 'like', even tho it's a bit outdated and you need to make the following changes), as installing SQLite isn't the same anymore and I don't remember how the heck I 'installed' it! I did download the binaries and run them, but I don't think that counts, lol! So if anyone has any input on that, other noobies could use it. I suspect that it is installed using the NuGet procedure listed below.
I used a Forms project instead of the WPF in the tutorial.
Installing SqLite/SQL Server Compat Toolbox doesn't work the same way, either. Download it from here. Double click and it should install into VS 2019.
So, create your Forms project, go to 'Tools/NuGet Package ManageManage NuGet Packages for Solution"
Click the 'Browse' tab and search for 'system.data.sqlite.core', click it, select your project on the right and install it.
Go back to 'Browse' and search for 'entity' and install 'EntityFramework by Microsoft' . I'm looking at the 6.4.0 version.
Using SQLite/SQL Server Compact Toolbox, create your database, create your Table and populate it. You can follow this video. It's accurate. Again, please give it a 'like'! For this example, I used the fields 'id_field', 'name_field', 'surname_field' and 'age_field'.
If you're a beginner, I recommend using a central location for your database and refer to the file with an absolute path. Some of the troubles I ran into involved forgetting to copy the db to the debug folder and then, somewhere in the copying, it got corrupted which meant that it didn't matter if I remembered or not. Until you've got your system going, "KISS". (KeepItSimpleStupid)
I believe that the 5th video in the series is obsolete and I haven't gotten further, yet. I figured that I needed working db communication first.
Assuming that you've gotten this far and you can see your db with table in the SQLite/SQL Server Compact Toolbox window, and you've got some data in it, go to the Form Design window and add a button and a listbox. For this, name the list box 'DisplayListBox'. Double click the button to create the event handler and add the following code. Assuming that your database has the same fields. Notice that the "DataSource" uses two backslashes and beware spaces(I think) and that it's in a directory of it's own so that I can refer to it in an absolute path and I don't have to worry about corruption by copying it.
In "DataSource=", change the path and filename to match your system. I called my Table 'myTable': change it in the SELECT/FROM line to match yours.
string cs = "Data Source=j:\\Databases\\SQLiteTest1.db;Version=3"; SQLiteConnection con = new SQLiteConnection(cs); DisplayListBox.Items.Clear(); con.Open(); MessageBox.Show("Open"); SQLiteCommand readCommand = new SQLiteCommand("SELECT id_field, name_field, surname_field, age_field FROM myTable", con); SQLiteDataReader reader = readCommand.ExecuteReader(); while (reader.Read()) { DisplayListBox.Items.Add(reader["id_field"].ToString() + " - " + reader["name_field"] + " - " + reader["surname_field"] + " - " + reader["age_field"]); } con.Close(); MessageBox.Show("Closed"); 
In the top, make sure that you add 'using System.Data.SQLite'.
If you've gotten this far without error notes, try running it. You should get a popup that it's open, the listbox should populate with the data you added when you created the db, and then another message that it's closed. Nice and simple.
Now to start throwing things at it! If anyone has any commentary, I'm all 'ears'!
submitted by Stridyr to csharp [link] [comments]

Subreddit Stats: youtubedl top posts from 2017-03-27 to 2020-05-18 21:35 PDT

Period: 1148.54 days
Submissions Comments
Total 790 4182
Rate (per day) 0.69 3.71
Unique Redditors 603 810
Combined Score 2434 5867

Top Submitters' Top Submissions

  1. 66 points, 15 submissions: Empyrealist
    1. YouTubeDL Material – A Self-Hosted YouTube Video Downloader (16 points, 2 comments)
    2. YouTube is reducing its default video quality to standard definition for the next month (11 points, 3 comments)
    3. Reddit videos currently download HLS streams by default, and they are corrupted (10 points, 2 comments)
    4. Version 2019.09.12.1 has been released (9 points, 8 comments)
    5. Having cookies problems because they have spaces? (Windows batch script solution) (5 points, 10 comments)
    6. Technical specs about Youtube Format IDs (5 points, 1 comment)
    7. Automating cookies.txt on Windows with Chrome (2 points, 0 comments)
    8. --embed-thumbnail appears to be breaking playlist downloads in the current version of youtube-dl (1 point, 3 comments)
    9. Fear the Walking Dead: Flight 462 (full web series) (1 point, 0 comments)
    10. Fear the Walking Dead: Passage (full web series) (1 point, 0 comments)
  2. 60 points, 8 submissions: antdude
    1. FYI since Google/YouTube just broke youtube-dl: "WARNING: Unable to extract video title" and no title filenames. · Issue #21934 · ytdl-org/youtube-dl · GitHub (20 points, 12 comments)
    2. FYI since youtube-dl is broken right now with YouTube.com: "token" parameter not in video info for unknown reason; · Issue #20758 · ytdl-org/youtube-dl · GitHub (16 points, 24 comments)
    3. Youtube-dl Tutorial With Examples For Beginners - OSTechNix (12 points, 1 comment)
    4. FYI: youtube-dl was finally updated again to 2019.06.08 from 2019.5.20. (5 points, 4 comments)
    5. Is it me or is youtube-dl slowing down from youtube.com the last couple days? (5 points, 11 comments)
    6. Is anyone else having problems playing back these official YouTube's free movies? DRM and MPC-HC can't fully play it after a few seconds. DRM related? :( (1 point, 2 comments)
    7. No updated youtube-dl's Linux/Debian binaries since 5/20/2019? (1 point, 3 comments)
    8. No recent released youtube-dl updates? (0 points, 4 comments)
  3. 42 points, 14 submissions: RedditNoobie777
    1. Why isn't the best audio .webm but .opus? (7 points, 17 comments)
    2. How to turn off ffmpeg low/hight pass filter? (6 points, 13 comments)
    3. Save description with hyperlink? (5 points, 7 comments)
    4. How to add URL as tag ? (4 points, 11 comments)
    5. webm to opus converion -x vs ffmeg? (4 points, 8 comments)
    6. How does youtube-dl not re-encode? (3 points, 4 comments)
    7. How to download .opus (not webm) directly w/o first downloading video? (3 points, 6 comments)
    8. youtube-dl downloads audio at higher bitdepth sample rate than available? (3 points, 1 comment)
    9. Command to get highest possible quality audio? (2 points, 8 comments)
    10. YTDL downloads 251 160kb/s opus in 136kbps? (2 points, 8 comments)
  4. 41 points, 1 submission: mysteriousdolphin
    1. I made a full youtube-dl download walk-through and guide for beginners! (41 points, 25 comments)
  5. 38 points, 7 submissions: KDE_Fan
    1. YT banning IP's with "HTTP ERROR 429: Too Many Requests" - I see it's common but how long does it last? (19 points, 50 comments)
    2. Problem downloading some Twitter video's b/c name is too long - getting "Errno 36" - what's best way to handle this? (10 points, 4 comments)
    3. Can batch download ignore errors and continue with download list? (3 points, 1 comment)
    4. Video/File names not being saved but video ID is... Just seemed to start happening (3 points, 5 comments)
    5. Can we download livestream's or is there a way to add the feature? (1 point, 3 comments)
    6. Comparing files between 2 directories (one local one remote) when downloading large batch file for new files - is this possible? (1 point, 1 comment)
    7. Download error "this video is unavailble" on a new computer install & can't run youtube-dl -U - any idea what might be happening? (1 point, 3 comments)
  6. 35 points, 4 submissions: RestiaAshdoll666
    1. youtube-dl : The complete installation guide for Windows (28 points, 11 comments)
    2. I made a pull request to mention Scoop and Chocolatey as installation options in Readme but it was immediately closed (4 points, 2 comments)
    3. Cant use curl with video titles having special characters (2 points, 12 comments)
    4. Rate my config (1 point, 2 comments)
  7. 32 points, 1 submission: abdouli1998
    1. Getting this ERROR "This video is unavailable" with every video I try to download (32 points, 40 comments)
  8. 28 points, 2 submissions: 404WebUserNotFound
    1. JackTheVideoRipper is a new GUI for youtube-dl on Windows 10 (19 points, 11 comments)
    2. JackTheVideoRipper v0.6.1 [RELEASED]: Download YouTube videos or audio easily with a few point and clicks. Designed for Windows 10. (9 points, 12 comments)
  9. 27 points, 2 submissions: Triple_Hache
    1. Impossible to download some videos (HTTP Error 403: Forbidden) (26 points, 30 comments)
    2. Some videos seem impossible to download (1 point, 8 comments)
  10. 26 points, 10 submissions: zackmark29
    1. Stream Detector (Easily detect .m3u8 url etc.) (8 points, 5 comments)
    2. master.m3u8 HTTP error 403 Forbidden youtube-dl or ffmpeg (5 points, 13 comments)
    3. Download Local .m3u8 files (3 points, 3 comments)
    4. Forced embed subtitle to mp4 (2 points, 5 comments)
    5. Get file size before downloading (2 points, 2 comments)
    6. No Video Format Found when downloading manifest.mpd (2 points, 4 comments)
    7. .MPD file No video formats found (1 point, 3 comments)
    8. Best command to combine TS Files (1 point, 8 comments)
    9. Can't download from viki.com when using credentials (1 point, 19 comments)
    10. Get m3u8 link easily without looking on network tab (1 point, 6 comments)

Top Commenters

  1. Empyrealist (1698 points, 1130 comments)
  2. werid (647 points, 363 comments)
  3. chemtrailz (176 points, 93 comments)
  4. qwertz19281 (115 points, 58 comments)
  5. d1ckh3ad69 (93 points, 52 comments)
  6. RestiaAshdoll666 (73 points, 55 comments)
  7. kucksdorfs (70 points, 42 comments)
  8. BlueSwordM (50 points, 31 comments)
  9. klutz50 (46 points, 40 comments)
  10. zackmark29 (38 points, 33 comments)

Top Submissions

  1. I made a full youtube-dl download walk-through and guide for beginners! by mysteriousdolphin (41 points, 25 comments)
  2. Getting this ERROR "This video is unavailable" with every video I try to download by abdouli1998 (32 points, 40 comments)
  3. youtube-dl : The complete installation guide for Windows by RestiaAshdoll666 (28 points, 11 comments)
  4. Impossible to download some videos (HTTP Error 403: Forbidden) by Triple_Hache (26 points, 30 comments)
  5. One-click download videos with youtube-dl by baobabKoodaa (26 points, 7 comments)
  6. FYI since Google/YouTube just broke youtube-dl: "WARNING: Unable to extract video title" and no title filenames. · Issue #21934 · ytdl-org/youtube-dl · GitHub by antdude (20 points, 12 comments)
  7. Is YouTube blocking VPN IPs now? Getting HTTP error "too many requests" only with VPN by Cronokkio (20 points, 9 comments)
  8. YT banning IP's with "HTTP ERROR 429: Too Many Requests" - I see it's common but how long does it last? by KDE_Fan (19 points, 50 comments)
  9. JackTheVideoRipper is a new GUI for youtube-dl on Windows 10 by 404WebUserNotFound (19 points, 11 comments)
  10. Unable to download videos from Youtube. by hlloyge (18 points, 9 comments)

Top Comments

  1. 16 points: werid's comment in Why in the name of god is -f best not default behavior
  2. 15 points: Empyrealist's comment in Does Youtube know you're using youtube-dl?
  3. 10 points: d1ckh3ad69's comment in YouTube said: This video is unavailable. | all videos
  4. 8 points: CrypterMKD's comment in Is there any way to download a YouTube video in it's original quality as if you were downloading it before it was uploaded to YouTube (before the compression)?
  5. 8 points: DiamondMiner88_'s comment in YouTube will update their ToS on December 10th
  6. 8 points: Empyrealist's comment in Can videos I have purchased in YouTube be downloaded using youtube-dl?
  7. 8 points: atlantis69's comment in Impossible to download some videos (HTTP Error 403: Forbidden)
  8. 8 points: d1ckh3ad69's comment in Why isn't the best audio .webm but .opus?
  9. 8 points: qwertz19281's comment in Help me with command line
  10. 7 points: InglobeDreaming's comment in Having trouble trying to download an xxx video
Generated with BBoe's Subreddit Stats
submitted by subreddit_stats to subreddit_stats [link] [comments]

Which are the top computer science websites students must visit?

Computer science is the investigation of PCs and processing ideas. It incorporates both hardware and software, just as network administration and the Internet.
The hardware part of computer science covers with electrical engineering. It includes the essential design of PC and the manner in which they work.
A crucial comprehension of how a PC “computes,” or performs calculations, gives the establishment of understanding further developed ideas. For instance, seeing how a PC works in binary enables you to see how PCs include, subtract, and perform different activities. Finding out about rational doors empowers you to understand processor engineering.
The software side of computer science spreads programming ideas just as specific programming languages. Programming ideas incorporate functions, calculations, and source code plan. Computer science additionally covers compilers, working frameworks, and software applications. Client-centered parts of computer science include PC graphics and user interface plan.
Since almost all PCs are currently associated with the Internet, the computer science umbrella spreads Internet advances also. This incorporates Internet conventions, media communications, and systems administration ideas. It likewise includes viable applications, for example, website design and system organization.

Why pursuing computer science stream?

Table of Contents
The most important part of computer science is critical thinking or problem solving, a fundamental ability forever. Students study the design, improvement, and examination of software and hardware used to tackle issues in an assortment of business, logical, and social settings.
Since PCs take care of issues to serve individuals, there is a huge human side to computer science also.

Reasons to study computer science:

The Association for Computing Machinery (ACM) is a worldwide association for PC researchers. The ACM has built up the accompanying list of some reasons to study computer science which we quote:

The top website which computer science student should visit:

1 Stanford engineering everywhere:

Stanford Engineering Everywhere is a free asset intended to give students over the U.S. with access to a portion of the courses and instruments utilized by Stanford students to ace the basics of computing, artificial intelligence, and electrical engineering.
These materials are additionally accessible to teachers for use in classroom settings and are secured under a Creative Commons permit that guarantees they are uninhibitedly open to anybody with a PC and an Internet association.

2 Tutorials point:

If you have an enthusiasm for computer science and need to upgrade your abilities then tutorials point is the site for you. It’s an extremely famous site which offers extraordinary instructional exercises on a wide assortment of programming languages.
In this way, if you are only a beginner with no sign, at that point this site has every help available for you. It’s the best library will give you more than your expectations. The best part about this site is that it has online ide or you can say a word processor to alter codes, to assemble and run them.

3 W3schools:

The next is w3schools. Numerous specialists state if you need to turn into a fruitful web designer; at that point, there is nothing superior to W3schools. It is the most favored site for learning web advancement over us, and it’s completely free of expense. Even though you can pay for certifications.
The instructional exercises on this site offer tens and several models and references for better learning and experimentation. Likewise, numerous capable web designers were begun with the assistance of this site.
It has a wide scope of instructional exercises on Html, CSS, PHP, javascript, jquery, and a few systems too, for example, bootstrap. Not just that this site likewise has its very own online supervisor, which gives you a chance to attempt code online with no stress of establishment of the word processors independently.

4 Geeks for geeks:

This site is somewhat interesting on our list.
Geeks for Geeks offer programming languages as well as gets ready students who are going for interviews that are identified with computer science employment. The site gives all sort of arrangements extending from easiest to specialized ones.
Like Tutroialspoint and w3schools, it additionally offers you a completely practical idea or word processor which let you alter code all the more effectively. C, Python, and Java are the principal programming languages which are covered in this site.

5 Quora:

It is not a tutorial site. It is utilized to address the inquiries made by the overall population. We can likewise say that it’s fundamentally an inquiry/answer driven site, and you may discover the response to the inquiries identified with your or some other industry.
By this, we imply that it likewise has questions identified with the computer science field, which may enable you to understand your inquiries excellently and proficiently.
The explanation for its fifth position on our list is that it has an enormous network of software engineers and designers which eagerly attempt to address the inquiries made by you or the overall population. As we would see it, if you have any question, at that point you should put it on Quora.
Individuals who have the appropriate response will offer their responses, and you can likewise be able to like, remark or even upvote that answer. A few people additionally state that if you have an inquiry, at that point, Quora it, which demonstrates the intensity of this site.

6 Stackoverflow:

If you need an option that is superior to Quora, at that point StackOverflow will do it for you. This site has the greatest network of software engineers and designers the whole way across the globe. They share their issues and get a versatile arrangement from the accomplished engineers and software engineers who come and answer the issues energetically.
The StackOverflow is an excellent site to the individuals who committed errors in code and not found any answer for their code. It’s an incredibly phenomenal site, which targets computer science points.
So we prescribe you to visit it, in any event, two times each day to exceed expectations in computer science regardless of whether you are a starter or a middle of the road.
Since specialists do that. So if you are writing code and stalled out, put your issue forward on StackOverflow and pause. Before long, somebody from the network will enable you to out in the manner to fix it. It’s as straightforward as that.

7 Youtube:

All things considered, YouTube needs no presentation.
You can learn anything from a place without spending any single coin. This site is the world’s second-biggest web index on the web.
It does not just offer instructional exercises, truth be told, it has billions of recordings on different points too. For example, innovation, motion pictures, games, and so on. In this way, you can likewise utilize it as a wellspring of excitement if you need.
This suggests it as a total bundle to appreciate learning just as fun. Here, you will discover recordings identified with any point. You are running right from antiquated things to advanced science.
YouTube has everything to feature. It is additionally called the second home for makers. It is the greatest stage for instructional exercises too. As indicated by some review, YouTube has the most significant measure of recordings in a computer science theme.
It additionally has every one of the recordings identified with any point, for example, a motion picture, tech audit, programming, and so forth. What’s more, the best part is you can look through your preferred theme, and you will have the consequence of excellent free recordings there.
So good karma with that.

8 JavaTpoint:

While offering a few instructional exercises on programming languages, it additionally provides instructional practices on different issues identified with PC and current innovation.
This is something novel which isn’t offered by some other site yet. The site has instructional exercises on practically all the programming languages, including the most up to date ones also.

Importance of computer science:

Computer science benefits society in numerous ways:

1 Encouraging education:

Would you be able to envision present-day education without PC programming or the web? Regardless of whether you’re taking a web-based class, looking into for a paper or sharing work utilizing the cloud, computer science advantage has helped make this conceivable.
E-learning stages and applications give students new devices to issue solving and study, which has changed the academic world. The capacity to take classes online is additionally a significant advantage for the world—as it makes access to education for students whose areas, functions, or funds were a boundary.

2 Growing communication:

“The greatest role computer science has made is in the field of communication,” computer science has made the entire world a tiny spot—accessible readily available at this point.” Internet-based life, video calling, and chatting applications—even the applications that enable you to share records and photographs with another person long-distance. These limits have altered the workforce.

3 Accelerating healthcare progress:

Human care services will, in general, be a quite high need when you think about how to improve individuals’ lives.
One of the most energizing features of computer science is its capacity to improve and quicken each other field. “Information science and artificial intelligence (AI) as subsets of software engineering enable individuals and associations to quicken and ‘prepackage thought’. Along these lines, software engineering and human-made brainpower can make some other control many, commonly better.”

4 Positively affecting each territory of society:

Computer science makes its presence in every field. Without a computer, there is every work is incomplete. Computer science is a good profession and provides benefits to our society in several ways.

Popular jobs in computer science field:

Web Developer: A web engineer makes and looks after sites. The duties can run from making straightforward sites for eateries to out and out web applications for new companies.
Money Programmer: A money developer manages the always showing signs of change universe of bank transactions. They manage the projects and codes that deal with the large number of dollars a bank may process in multi-day. Being a software engineer for an accounting firm can be a distressing activity; however, the compensation is the reward.
Game Developer: Game engineers make the games that fuel the consistently developing game industry. Regardless of whether it’s for portable game applications or comfort gaming, there is a huge interest for qualified game developers. This field is troublesome, with information about complicated math a need.

Some Computer Science Assignment Help Websites:

These websites are popular for providing assignment help-
Codeavail
Calltutors
JavaAssignmentHelp
Coursementor
Also, read…
How Do I Complete My Programming Assignment In Short Time?
Who can provide highly Professional Programming Assignment Help?
How to get the best cheap Computer Science Homework Help?

Conclusion:

While getting a computer science degree students face difficulties in managing a balance between assignment submission and academic syllabus completion. So our codeavail site provides computer science assignment help whenever you need.
Above discussion made it clear that computer science has great importance in every field. For education purpose, there are several sites available from which a student can get help on their related topics and get help in completion of computer science assignment.
Submit your requirements or queries here now.
submitted by codeavail_expert to computersciencehub [link] [comments]

Binary options trading method  Tutorial for beginners NEW 1 Minute binary options trading strategy  tutorial ... Binary options tutorial  Trading tutorial for the beginners 2019 Binary Options Trading for Beginners 2020! - YouTube BINARY OPTIONS TRADING FOR BEGINNERS - How to Find Best ... BINARY OPTIONS TUTORIAL - YouTube Binary options tutorial for beginners Binary Options Tutorial For Beginners - YouTube

Binary options trading is an excellent financial tool for both beginners and advanced traders alike. In this binary options course: binary options basics, binary options alternatives, binary option brokers, binary option trading strategies and more.. A binary option is a financial opportunity that offers investors a fixed price and a fixed ... This page is more a basic overview of what is going on when talking about binary options. Begin Trading Binary Options Online Tutorial; Trading Binary Options For Dummies. Anyone can trade binary options. Even a dummy can win any given binary trade, too. It is one or the other choice, it is hard to get it that wrong all of the time. Avoid making beginner mistakes and learn how to start trading binary options with this short step-by-step tutorial. This binary options guide for beginners outlines the most important steps towards trading in a simple and straightforward manner. Our binary options for beginners section contains all the most important lessons you should know before entering the market. Sure, it may take a little time to go through them all, but it will be time well spent because your chances for success will improve dramatically. With this guide, you will learn how to choose the right broker, what drives the market, which trades to open and much, much ... Binary options tutorial. Binary options tutorial deals with the necessary steps you need to perform in order to place a bid and trade binary options. If you are interested in gaining more perspective on the process before you start to trade, you can use our “Binary options for beginners” guide. If you are interested in the advanced article [...] Continue Reading. Binary options tips ... "Beginners Binary Options Training Course" is about learning the fundamentals of binary options trading. I have created this course specifically for people who are new to binary options trading and/or the financial markets as a whole. The course has been structured so that each lesson builds from the previous, so I recommend starting from the first lesson and continue through from there. Who ... Binary Options Academy for Beginners – Binary Option Trading 101 . Education and experience are the main constituents that ensure success in any venture. The same holds true for binary options trading as well, unless you are among the lucky few individuals who can make a considerable amount of profits out of sheer fortune alone. However, don’t entrust your savings on blind luck, as even ...

[index] [5061] [10501] [18807] [22786] [22330] [29058] [1900] [19321] [29212] [478]

Binary options trading method Tutorial for beginners

I hope this video was useful for you! Have a good trades 😉 _____ 👨🏽‍💻 Broker I've been using for the last few years : 👉 https://pocketoption-... Binary options tutorial for beginners Please subscribe to the channel for daily binary options content : https://www.youtube.com/channel/UCD4i... Playlist of... BINARY OPTIONS TUTORIAL Trading tutorial for the beginners 2019 Hello Everyone, in this video I will show binary options tutorial. Trading tutorial for the beginners. Trusted spots blog https://trustedspots1.blogspot.com/?m=1 To register a free account on desktop or laptop, click here https://bit.ly/3ghvlt5 To register a f... 💰💲FULL BEGINNER? Join My PERSONAL TRAINING!💴💵 BLW Trading Academy: http://www.blwtradingacademy.com/ Live Trading Signals HERE!🔙💲💹Join My ... Binary options trading method Binary options tutorial for beginners. Category People & Blogs; ... Binary options trading strategy tutorial review signals - Duration: 8:52. MERC TRADING ... BINARY OPTIONS TRADING FOR BEGINNERS - How to Find Best Binary Options Strategy ️ TRADE ON DEMO http://iqopts.com/start ️ TRADE ON REAL MONEY http://iq... Hi! This is my binary options video blog. I will show you how I earn money on binary options with simple binary options strategy 100% payouts, no account required, no minimum deposits, trade using cryptocurrency. More info at: https://www.stixex.io/landing

http://binary-optiontrade.dirdupen.ml