Main commands and configs

Commands:

service redis-server start
service mongodb start
service ssh --full-restart
source /usr/local/rvm/scripts/rvm

Packages:

  • AngularJS
  • BracketHighlighter
  • LESS
  • Package Control
  • SFTP
  • ColorPicker

Config:

{
 "bold_folder_labels": true,
 "caret_style": "phase",
 "color_scheme": "Packages/Color Scheme - Default/Monokai.tmTheme",
 "enable_tab_scrolling": false,
 "fade_fold_buttons": false,
 "fallback_encoding": "Cyrillic (Windows 1251)",
 "font_size": 13,
 "highlight_line": true,
 "highlight_modified_tabs": true,
 "ignored_packages":
 [
 "RubyTest",
 "SublimeREPL",
 "Vintage"
 ],
 "line_padding_bottom": 1,
 "line_padding_top": 1,
 "show_encoding": true,
 "tab_size": 2,
 "translate_tabs_to_spaces": true,
 "update_check": false,
 "word_wrap": true
}
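The settings block above happens to be strict JSON (no comments, no trailing commas), so after hand-editing it can be sanity-checked with Python's bundled json.tool. A sketch; the /tmp path is made up for the demo, the real file lives in Sublime's Packages/User directory:

```shell
# Write a minimal settings fragment and validate it as JSON.
# (Demo path in /tmp; a real file would be Packages/User/Preferences.sublime-settings.)
cat > /tmp/Preferences.sublime-settings <<'EOF'
{
 "tab_size": 2,
 "translate_tabs_to_spaces": true,
 "word_wrap": true
}
EOF
# json.tool exits non-zero (and prints the error position) on invalid JSON.
python3 -m json.tool /tmp/Preferences.sublime-settings
```

Note that Sublime itself is more lenient than strict JSON, so this check can only prove a file valid, not invalid for Sublime.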

XnView on Ubuntu

sudo apt-get update
sudo apt-get install gdebi
sudo apt-get install gstreamer-tools
sudo apt-get install ubuntu-restricted-extras
sudo apt-get install libgstreamer-plugins-base0.10-0

Afterwards, install XnView for Linux from its .deb package (e.g. with gdebi).

service mongodb in Ubuntu 16

link

Ubuntu 16.04's MongoDB package didn't ship a service file, so I had to write one from scratch. To create one of your own, follow these steps:

  1. Switch to root using
    sudo su

(or use sudo for all the following steps).

  2. Create a service script (in this example the service is named mongodb):
    nano /lib/systemd/system/mongodb.service

  3. The file content should be:
    [Unit]
    Description=MongoDB Database Service
    Wants=network.target
    After=network.target
    
    [Service]
    ExecStart=/usr/bin/mongod --config /etc/mongod.conf
    ExecReload=/bin/kill -HUP $MAINPID
    Restart=always
    User=mongodb
    Group=mongodb
    StandardOutput=syslog
    StandardError=syslog
    
    [Install]
    WantedBy=multi-user.target
    

You can also download the file from here: mongodb.service
Here is a quick description of the important fields:
ExecStart - the command to run. MongoDB installs itself under /usr/bin and its configuration file is written to /etc.
User - the uid the mongod process runs as.
Group - the gid the mongod process runs as. Note that the user and group are created by the installation.

Now to start mongodb:

sudo systemctl start mongodb

To stop mongodb service use:

sudo systemctl stop mongodb

To enable mongodb on startup

sudo systemctl enable mongodb.service

If you need to refresh the services use:

sudo systemctl daemon-reload
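The unit file above can also be written non-interactively with a heredoc instead of nano. A minimal sketch; it writes to /tmp purely for illustration (the real target is /lib/systemd/system/mongodb.service and needs root), and the syslog redirection lines are omitted for brevity:

```shell
# Write the MongoDB unit file via a heredoc (demo path; real path needs root).
cat > /tmp/mongodb.service <<'EOF'
[Unit]
Description=MongoDB Database Service
Wants=network.target
After=network.target

[Service]
ExecStart=/usr/bin/mongod --config /etc/mongod.conf
ExecReload=/bin/kill -HUP $MAINPID
Restart=always
User=mongodb
Group=mongodb

[Install]
WantedBy=multi-user.target
EOF
# Quick check: both Exec* directives made it into the file.
grep -c '^Exec' /tmp/mongodb.service
```

With the quoted 'EOF' delimiter the shell leaves $MAINPID alone, which is what systemd expects to see in the file.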

 

Vagrantfile

https://github.com/winnfsd/vagrant-winnfsd

# -*- mode: ruby -*-
# vi: set ft=ruby :

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure(2) do |config|
  # The most common configuration options are documented and commented below.
  # For a complete reference, please see the online documentation at
  # https://docs.vagrantup.com.

  # Every Vagrant development environment requires a box. You can search for
  # boxes at https://atlas.hashicorp.com/search.
  config.vm.box = "base"

  # Disable automatic box update checking. If you disable this, then
  # boxes will only be checked for updates when the user runs
  # `vagrant box outdated`. This is not recommended.
  # config.vm.box_check_update = false

  config.ssh.forward_agent = true
  config.ssh.username = "vagrant"
  config.ssh.password = "vagrant"

  # Create a forwarded port mapping which allows access to a specific port
  # within the machine from a port on the host machine. In the example below,
  # accessing "localhost:8080" will access port 80 on the guest machine.
  # config.vm.network "forwarded_port", guest: 80, host: 8080
  config.vm.network "forwarded_port", guest: 3003, host: 3003

  # Create a private network, which allows host-only access to the machine
  # using a specific IP.
  # config.vm.network "private_network", ip: "192.168.33.10"

  # Create a public network, which generally matches bridged networking.
  # Bridged networks make the machine appear as another physical device on
  # your network.
  # config.vm.network "public_network"
  config.vm.network "private_network", type: "dhcp"

  # Share an additional folder to the guest VM. The first argument is
  # the path on the host to the actual folder. The second argument is
  # the path on the guest to mount the folder. And the optional third
  # argument is a set of non-required options.
  # Before using the NFS shares you must run: vagrant plugin install vagrant-winnfsd
  config.vm.synced_folder "../../yd/sites", "/sites", type: "nfs"
  config.vm.synced_folder "../../soft/vagrant_folder", "/vagrant_folder", type: "nfs"

  # Provider-specific configuration so you can fine-tune various
  # backing providers for Vagrant. These expose provider-specific options.
  # Example for VirtualBox:

  config.vm.provider "virtualbox" do |vb|
    # Display the VirtualBox GUI when booting the machine
    # vb.gui = true

    # Customize the amount of memory on the VM (in MB):
    vb.memory = "10024"

    vb.cpus = 4
  end

  # View the documentation for the provider you are using for more
  # information on available options.

  # Define a Vagrant Push strategy for pushing to Atlas. Other push strategies
  # such as FTP and Heroku are also available. See the documentation at
  # https://docs.vagrantup.com/v2/push/atlas.html for more information.
  # config.push.define "atlas" do |push|
  #   push.app = "YOUR_ATLAS_USERNAME/YOUR_APPLICATION_NAME"
  # end

  # Enable provisioning with a shell script. Additional provisioners such as
  # Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the
  # documentation for more information about their specific syntax and use.
  # config.vm.provision "shell", inline: <<-SHELL
  #   sudo apt-get update
  #   sudo apt-get install -y apache2
  # SHELL
end

plink proxy

plink -ssh ssh_server_address -C -N -l login -pw password -D 127.0.0.1:8081

Then configure the clients:

Browsers: set the SOCKS host to 127.0.0.1, port 8081, and tick "SOCKS5".

Programs: FreeCap 

SSD vs SAS

link

We have two test subjects: 2x SAS in hardware RAID 1 and 2x SSD in hardware RAID 1. The machines are identical:
CPU: 2x AMD Opteron 6164 HE 12-Core
RAM: 16 GB
RAID controller: HP Smart Array P410
Barebone: Hewlett-Packard DL165 G7

SAS
Vendor: SEAGATE 
Product: ST3300657SS
User Capacity: 300 GB
Logical block size: 512 bytes

SSD
Product: Samsung 840 Pro
User Capacity: 128 GB
Logical block size: 512 bytes

Test 1: sequential write

Code:
dd if=/dev/zero of=/tmp/testfile.bin bs=256k count=2048

SAS: 717 MB/s
SSD: 682 MB/s

Test 2: sequential read

Code:
dd of=/dev/null if=/tmp/testfile.bin bs=512k count=1024

SAS: 1.7 GB/s
SSD: 2.1 GB/s

Test 3: hdparm

Code:
hdparm -tT /dev/sda

Timing cached reads: SAS 2438.26 MB/s; SSD 2484.01 MB/s
Timing buffered disk reads: SAS 2293.67 MB/s; SSD 219.26 MB/s

Test 4: fio

Code:
[global]
bs=4k
size=256M
filename=test.file
direct=1
buffered=0
ioengine=libaio
iodepth=16

[seq-read]
rw=read
stonewall
name=Sequential reads

[rand-read]
rw=randread
stonewall
name=Random reads

[seq-write]
rw=write
stonewall
name=Sequential writes

[rand-write]
rw=randwrite
stonewall
name=Random writes

Sequential reads: SAS 161.82 MB/s 41426 iops; SSD 120.3 MB/s 30796 iops
Random reads: SAS 245.68 MB/s 62894 iops; SSD 125.8 MB/s 32204 iops
Sequential writes: SAS 225.75 MB/s 57791 iops; SSD 202.85 MB/s 51930 iops
Random writes: SAS 218.43 MB/s 55918 iops; SSD 187.82 MB/s 48082 iops

Test 5: sequential write (with different parameters)

Code:
dd if=/dev/zero of=testfile bs=64k count=16k conv=fdatasync

SAS: 205 MB/s
SSD: 61.8 MB/s

Test 6: sequential read after dropping caches

Code:
echo 3 > /proc/sys/vm/drop_caches
dd if=testfile of=/dev/null bs=64k

SAS: 389 MB/s
SSD: 192 MB/s

Test 7: Unixbench
File Copy 1024 bufsize 20000 maxblocks: SAS 452 MB/s; SSD 477.71 MB/s
File Copy 256 bufsize 500 maxblocks: SAS 130.52 MB/s; SSD 136.72 MB/s
File Copy 4096 bufsize 8000 maxblocks: SAS 804.79 MB/s; SSD 686.22 MB/s
System Benchmarks Index Score: SAS 645.7; SSD 637.3

Test 8: my own tooling
SAS: 628.14 MB/s, 40200.95 requests/s
SSD: 552.33 MB/s, 35348.85 requests/s

The last two tests are included only for comparison. Your own disk-subsystem tests are welcome, but only clear, open tests, not opaque vendor "scores".

I sincerely ask you to share your tests, that is, the actual commands. Theory does not interest me at all, only practical, understandable tests. And, of course, your opinion on the matter. If you think the tests above are incorrect, explain why and give your own variant.

Please don't litter the thread with posts like "SSD is better" or "but the Bitrix scores show otherwise".

I think the thread, and especially its end result, will be useful to everyone.

Next I will continue the tests with different fio parameters.

Single-threaded fio test, SAS
Sequential reads: bw=7196.3KB/s, iops=14392
Random reads: bw=7009.4KB/s, iops=14018
Sequential writes: bw=6800.2KB/s, iops=13601
Random writes: bw=6209.0KB/s, iops=12418

Single-threaded fio test, SSD
Sequential reads: bw=3045.1KB/s, iops=6091
Random reads: bw=1604.2KB/s, iops=3209
Sequential writes: bw=3059.5KB/s, iops=6118
Random writes: bw=3067.2KB/s, iops=6134

SAS is ahead of the SSD by a factor of two.

===============================

24-thread fio test, SAS
Sequential reads: bw=31256KB/s, iops=62511
Random reads: bw=29228KB/s, iops=58455
Sequential writes: bw=28823KB/s, iops=57645
Random writes: bw=27450KB/s, iops=54899

24-thread fio test, SSD
Sequential reads: bw=25881KB/s, iops=51761
Random reads: bw=23734KB/s, iops=47468
Sequential writes: bw=26600KB/s, iops=53200
Random writes: bw=25136KB/s, iops=50272

It turns out that as the thread count grows, SAS and SSD almost even out.

Single-threaded fio test, SAS, 10,000 files totalling 255 MB
Sequential reads: bw=1901.4KB/s, iops=3802
Random reads: bw=3915.8KB/s, iops=7830
Sequential writes: bw=4358.4KB/s, iops=8716
Random writes: bw=575671B/s, iops=1124

Single-threaded fio test, SSD, 10,000 files totalling 255 MB
Sequential reads: bw=1189.9KB/s, iops=2379
Random reads: bw=2080.8KB/s, iops=4160
Sequential writes: bw=2494.1KB/s, iops=4988
Random writes: bw=1893.8KB/s, iops=3787

======================================================

24-thread fio test, SAS, 10,000 files totalling 255 MB
Sequential reads: bw=2466.5KB/s, iops=4932
Random reads: bw=5280.9KB/s, iops=10560
Sequential writes: bw=18156KB/s, iops=36311
Random writes: bw=768665B/s, iops=1501

24-thread fio test, SSD, 10,000 files totalling 255 MB
Sequential reads: bw=15546KB/s, iops=31092
Random reads: bw=18836KB/s, iops=37672
Sequential writes: bw=17351KB/s, iops=34702
Random writes: bw=11904KB/s, iops=23807

This is where the SAS started to struggle.

Single-threaded fio test with mixed block sizes (512 B to 16 KB), SAS
Sequential reads: bw=7196.3KB/s, iops=14392
Random reads: bw=7009.4KB/s, iops=14018
Sequential writes: bw=6800.2KB/s, iops=13601
Random writes: bw=6209.0KB/s, iops=12418

Single-threaded fio test with mixed block sizes (512 B to 16 KB), SSD
Sequential reads: bw=43603KB/s, iops=5298
Random reads: bw=15858KB/s, iops=2690
Sequential writes: bw=42736KB/s, iops=5193
Random writes: bw=31301KB/s, iops=5293

Does the SSD dislike varying block sizes? How likely is that?

=================================================

24-thread fio test with mixed block sizes (512 B to 16 KB), SAS
Sequential reads: bw=458294KB/s, iops=55692
Random reads: bw=322044KB/s, iops=54643
Sequential writes: bw=410884KB/s, iops=49931
Random writes: bw=286496KB/s, iops=48451

24-thread fio test with mixed block sizes (512 B to 16 KB), SSD
Sequential reads: bw=395391KB/s, iops=48048
Random reads: bw=268590KB/s, iops=45573
Sequential writes: bw=408324KB/s, iops=49619
Random writes: bw=241830KB/s, iops=40897

=================================================

Based on all the tests, I conclude that the SSD has an advantage only when the system handles a huge number of files simultaneously. I see no other advantages.

---------- Added 05.10.2013 at 09:46 ----------

Now let's gradually get to MySQL

Prepare an InnoDB table with 500,000 rows and run a 128-thread test executing 100,000 requests.

Code:
sysbench --test=oltp --mysql-table-engine=innodb --oltp-table-size=500000 --db-driver=mysql prepare
sysbench --test=oltp --num-threads=128 --max-requests=100000 --oltp-table-size=500000 --db-driver=mysql run

SAS

Quote:
transactions: 100046 (3718.79 per sec.)
deadlocks: 1 (0.04 per sec.)
read/write requests: 1900890 (70657.54 per sec.)
other operations: 200093 (7437.61 per sec.)
total time: 26.9029s

SSD

Quote:
transactions: 100032 (2709.75 per sec.)
deadlocks: 0 (0.00 per sec.)
read/write requests: 1900608 (51485.16 per sec.)
other operations: 200064 (5419.49 per sec.)
total time: 36.9156s

The SSD brings up the rear.

==============================================
An analogous test for MyISAM, but with simpler table parameters:
SAS

Quote:
transactions: 1000 (126.60 per sec.)
read/write requests: 19000 (2405.31 per sec.)
other operations: 2000 (253.19 per sec.)
total time: 7.8992s

SSD

Quote:
transactions: 1000 (123.59 per sec.)
read/write requests: 19000 (2348.17 per sec.)
other operations: 2000 (247.18 per sec.)
total time: 8.0914s

Well, try to convince me that the theory is right. So far only one test has confirmed it, and even then the workload would have to be some kind of file hosting or similar.

block in skype

link

Step 1. In Skype settings.

Step 2. Block the sites in IE:

https://apps.skype.com
https://rad.msn.com
https://api.skype.com
https://static.skypeassets.com
https://adriver.ru

Step 3. Block the hosts in C:\Windows\System32\drivers\etc\hosts:
127.0.0.1 rad.msn.com
127.0.0.1 apps.skype.com
127.0.0.1 api.skype.com
127.0.0.1 static.skypeassets.com
127.0.0.1 adriver.ru
127.0.0.1 devads.skypeassets.net
127.0.0.1 devapps.skype.net
127.0.0.1 qawww.skypeassets.net
127.0.0.1 qaapi.skype.net
127.0.0.1 preads.skypeassets.net
127.0.0.1 preapps.skype.net
127.0.0.1 serving.plexop.net
127.0.0.1 preg.bforex.com
127.0.0.1 ads1.msads.net
127.0.0.1 flex.msn.com
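On Linux the same trick works against /etc/hosts, and appending the entries can be scripted idempotently. A sketch; the /tmp demo file and the shortened host list are made up for illustration (the real file is the Windows hosts file above, or /etc/hosts on Linux):

```shell
# Append 127.0.0.1 entries to a hosts file, skipping ones already present.
HOSTS=/tmp/hosts.demo
: > "$HOSTS"    # start from an empty demo file; never do this to the real hosts file
for h in rad.msn.com apps.skype.com api.skype.com static.skypeassets.com adriver.ru; do
  grep -qF "127.0.0.1 $h" "$HOSTS" || printf '127.0.0.1 %s\n' "$h" >> "$HOSTS"
done
wc -l < "$HOSTS"
```

Running the loop twice adds nothing the second time, which is the point of the grep guard.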

find files between two dates

link

It’s two steps but I like to do it this way:

First create a file with a particular date/time; in this case, the file's timestamp is 2008-10-01 at midnight:

touch -t 0810010000 /tmp/t

Now we can find all files that are newer or older than that file, going by modification time (use -anewer for access time and -cnewer for status-change time):

find / -newer /tmp/t
find / -not -newer /tmp/t

You can also look at files between certain dates by creating two boundary files with touch:

touch -t 0810010000 /tmp/t1
touch -t 0810011000 /tmp/t2

This will find files between the two dates and times:

find / -newer /tmp/t1 -and -not -newer /tmp/t2
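The boundary-file technique can be tried end-to-end in a scratch directory; all paths and timestamps below are made up for the sketch:

```shell
# Create three files straddling the boundary timestamps.
mkdir -p /tmp/range_demo
touch -t 0809150000 /tmp/range_demo/too_old    # 2008-09-15
touch -t 0810010500 /tmp/range_demo/in_range   # 2008-10-01 05:00
touch -t 0810020000 /tmp/range_demo/too_new    # 2008-10-02

# Boundary files, exactly as in the recipe above.
touch -t 0810010000 /tmp/t1   # lower bound: 2008-10-01 00:00
touch -t 0810011000 /tmp/t2   # upper bound: 2008-10-01 10:00

# Only the file between the two boundaries is printed.
find /tmp/range_demo -type f -newer /tmp/t1 -not -newer /tmp/t2
```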

Other Example

link

The syntax is as follows:

ls -l | grep 'yyyy-mm-dd'
ls -l | grep --color=auto '2006-01-05'

Where,

  • 2006 – Year
  • 01 – Month
  • 05 – Day

To match by access time (rather than modification time), use:

ls -lu | grep --color=auto '2006-01-05'

find Command Example

If you need a specific date range many days ago, then consider using the find command. This example finds files modified between Jan/1/2007 and Jan/1/2008 in the /data/images directory:

touch --date "2007-01-01" /tmp/start
touch --date "2008-01-01" /tmp/end
find /data/images -type f -newer /tmp/start -not -newer /tmp/end

You can save list to a text file called output.txt as follows:

find /data/images -type f -newer /tmp/start -not -newer /tmp/end > output.txt

List all *.c files accessed within the last 30 days

Type the following command:

find /home/you -iname "*.c" -atime -30 -type f

See also:

You need to use the find command. Each file has three time stamps, which record the last time that certain operations were performed on the file:

 

[a] access (read the file’s contents) – atime

[b] change the status (modify the file or its attributes) – ctime

[c] modify (change the file’s contents) – mtime

You can search for files whose time stamps are within a certain age range, or compare them to other time stamps.

You can use the -mtime option. It matches files whose contents were last modified N*24 hours ago. For example, to find files older than 2 months (60 days), use the -mtime +60 option.

  • -mtime +60 means you are looking for files modified more than 60 days ago.
  • -mtime -60 means less than 60 days ago.
  • -mtime 60 (if you skip + or -) means exactly 60 days ago.

So to find text files that were modified within the last 60 days, use:
$ find /home/you -iname "*.txt" -mtime -60 -print

To display on screen the contents of files modified within the last 60 days, use:
$ find /home/you -iname "*.txt" -mtime -60 -exec cat {} \;

Count the total number of files using the wc command:
$ find /home/you -iname "*.txt" -mtime -60 | wc -l

You can also use access time to find pdf files. The following command will print the list of all pdf files that were accessed in the last 60 days:
$ find /home/you -iname "*.pdf" -atime -60 -type f

List all mp3s that were accessed exactly 10 days ago:
$ find /home/you -iname "*.mp3" -atime 10 -type f

There is also an option called -daystart. It measures times from the beginning of today rather than from 24 hours ago. So, to list all mp3s in your home directory that were modified yesterday, type the command
$ find /home/you -iname "*.mp3" -daystart -type f -mtime 1

Where,

  • -type f – Only search for files and not directories

-daystart option

The -daystart option is used to measure time from the beginning of the current day instead of 24 hours ago. To find all Perl (*.pl) files modified yesterday, enter:

find /nas/projects/mgmt/scripts/perl -mtime 1 -daystart -iname "*.pl"

You can also list Perl files that were modified 8 to 10 days ago:

find /nas/projects/mgmt/scripts/perl -daystart -mtime +7 -mtime -11 -iname "*.pl"

-newer option

To find files in the /nas/images directory tree that are newer than the file /tmp/foo, enter:

find /nas/images -newer /tmp/foo

You can use the touch command to set the date timestamp you would like to search for, and then use the -newer option as follows:

touch --date "2010-01-05" /tmp/foo
# Find files newer than 2010/Jan/05, in /data/images
find /data/images -newer /tmp/foo

Read the man page of the find command for more information:
man find
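If your find is GNU findutils (4.3.3 or newer), -newermt accepts a date string directly, so the helper files created with touch are unnecessary. A sketch with made-up paths:

```shell
# Create two files on either side of the target year (GNU touch -d assumed).
mkdir -p /tmp/newermt_demo
touch -d "2007-06-15" /tmp/newermt_demo/in_range.img
touch -d "2009-03-01" /tmp/newermt_demo/too_new.img

# Files modified in 2007, with no boundary files needed.
find /tmp/newermt_demo -type f -newermt "2007-01-01" ! -newermt "2008-01-01"
```

The same date-string forms exist for the other timestamps: -neweramt for access time and -newercmt for status-change time.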

 

kill process with grep

kill $(ps -e | grep dmn | awk '{print $1}')

ps -efw | grep dmn | grep -v grep | awk '{print $2}'| xargs kill
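An alternative worth knowing: pgrep/pkill from procps match the pattern and kill in one step, and unlike plain `grep dmn` they never match their own process, so no `grep -v grep` is needed. In this sketch `sleep 300` stands in for the real dmn process:

```shell
# Stand-in daemon for the demo; substitute the real process pattern.
sleep 300 &
pgrep -f "sleep 300"    # list matching PIDs (like ps | grep | awk)
pkill -f "sleep 300"    # send SIGTERM to every match (like ... | xargs kill)
```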