Disclaimer (免責聲明)
To continue reading means you accept the above disclaimer.

April 29, 2015 (Wednesday)

mysql phpmyadmin export ERD to pdf


ERD : entity relationship diagram

//=== phpmyadmin 4.3.13
* first, select the target database on the left-side panel

* on the top horizontal menu bar --> rightmost "More" --> Designer
then
on the 2nd horizontal menu bar --> rightmost icon --> Export schema



//=== older version of phpmyadmin
* first, select the target database on the left-side panel
* on the top horizontal menu bar --> "Operations"
then
click the link "Edit or export relational schema" at the bottom of the page

April 27, 2015 (Monday)

mysql query cache


* mysql query cache

//=== https://dev.mysql.com/doc/refman/5.1/en/query-cache.html

"""...
The query cache stores the text of a SELECT statement together with the corresponding result that was sent to the client. If an identical statement is received later, the server retrieves the results from the query cache rather than parsing and executing the statement again.

...
The query cache does not return stale data. When tables are modified, any relevant entries in the query cache are flushed.

...
To verify that enabling the query cache is beneficial, test the operation of your MySQL server with the cache enabled and disabled

..."""


//====
http://www.percona.com/blog/2015/01/02/the-mysql-query-cache-how-it-works-and-workload-impacts-both-good-and-bad/

roughly speaking, query cache === (query, result) dictionary

The MySQL query cache is a global cache shared among all sessions.
It caches the SELECT query text along with its result set.


mysql> SHOW STATUS LIKE "qcache%";
+-------------------------+---------+
| Variable_name           | Value   |
+-------------------------+---------+
| Qcache_free_blocks      | 1       |
| Qcache_free_memory      | 1031320 |
| Qcache_hits             | 0       |
| Qcache_inserts          | 0       |


when you issue a SELECT query,
if it is available in the cache, the server fetches the result from there;
otherwise
it treats the query as new and passes it to the parser.
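The (query, result) dictionary behavior above can be sketched in a few lines (a simplified illustration of the semantics, not the actual server implementation; byte-identical query text is the cache key, and any write to a referenced table flushes the relevant entries):

```python
# Simplified sketch of the MySQL query cache: a dict keyed by the exact
# SELECT text, flushed whenever a referenced table is modified.
class QueryCacheSketch:
    def __init__(self):
        self.cache = {}      # query text -> (result, tables used)
        self.hits = 0
        self.inserts = 0

    def select(self, query, tables, run_query):
        if query in self.cache:          # byte-identical statement -> hit
            self.hits += 1
            return self.cache[query][0]
        result = run_query()             # miss: parse and execute
        self.cache[query] = (result, set(tables))
        self.inserts += 1
        return result

    def modify_table(self, table):
        # invalidation: flush every cached entry referring to the table,
        # so the cache never returns stale data
        stale = [q for q, (_, ts) in self.cache.items() if table in ts]
        for q in stale:
            del self.cache[q]
```

Note how a repeated identical SELECT never reaches `run_query` (the "parser/executor"), while any `modify_table` call forces the next SELECT to re-execute.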



//=== MySQL query cache has its own downsides

* Anytime you have a "frequently updated table", you're probably not going to get any benefit from the query cache.

* Where there is little chance of identical SELECTs being repeated, it is important to set the query_cache_size and query_cache_type variables to zero.

* In a highly concurrent environment, the query cache "mutex" may become the source of a bottleneck.
Setting query_cache_type to zero avoids the query cache mutex.

* Contention often makes the query cache a bottleneck instead of a help when you have "many CPU cores".

* Generally, the query cache should be off unless proven useful for your workload.

* Do not set the query cache to a relatively high value such as 256 MB, as "Qcache invalidation" becomes costly.

mysql> show variables like 'query_cache_size';
+------------------+---------+

"""...
Qcache_lowmem_prunes is the status variable which indicates how many times MySQL had to clear/prune some data from the Qcache to make space for the outputs of other queries.
..."""







//=== https://dev.mysql.com/doc/refman/5.1/en/query-cache-configuration.html

mysql> SHOW VARIABLES LIKE 'have_query_cache';
mysql> SET GLOBAL query_cache_size = 40000;
mysql> SHOW WARNINGS\G
mysql> SHOW VARIABLES LIKE 'query_cache_size';


//=== https://dev.mysql.com/doc/refman/5.1/en/query-cache-status-and-maintenance.html

mysql> SHOW STATUS LIKE 'Qcache%';

* Com_select === Qcache_inserts + Qcache_not_cached + queries with errors found during the column-privileges check

* total number of SELECT queries === Com_select + Qcache_hits + queries with errors found by the parser
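Those two identities give a quick way to estimate the cache hit rate from SHOW STATUS counters (the numbers below are hypothetical, and parser/privilege errors are ignored):

```python
# Hypothetical counter values read from SHOW GLOBAL STATUS.
qcache_hits = 3000
com_select = 1000   # misses: Qcache_inserts + Qcache_not_cached + priv errors

# total SELECTs ~= Com_select + Qcache_hits (ignoring parser errors)
total_selects = com_select + qcache_hits
hit_rate = qcache_hits / total_selects
print(round(hit_rate, 2))  # fraction of SELECTs served from the cache
```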

* FLUSH QUERY CACHE defragments the query cache to better utilize its memory.
* After FLUSH QUERY CACHE, only a single free block remains.
[ but it does not remove any queries from the cache ]

* The RESET QUERY CACHE statement removes all query results from the query cache.
The FLUSH TABLES statement also does this.


"""...
Qcache_lowmem_prunes status variable can help you tune the query cache size.
It counts the number of queries that have been removed from the cache to free up memory for caching new queries.
..."""





//=== https://dev.mysql.com/doc/refman/5.1/en/query-cache-operation.html

Query cache is "not" used for the following types:

* Queries that are a subquery of an outer query
* Queries executed within the body of a stored function, trigger, or event
* referring to user-defined functions (UDFs) or stored functions.
* referring to user variables or local stored program variables.
* referring to tables in the mysql or INFORMATION_SCHEMA system database
* referring to any partitioned tables.
* referring to TEMPORARY tables. ?

* SELECT ... LOCK IN SHARE MODE
[Statements within transactions that use SERIALIZABLE isolation level]
* SELECT ... FOR UPDATE
* SELECT ... INTO OUTFILE ...
* SELECT ... INTO DUMPFILE ...
* SELECT * FROM ... WHERE autoincrement_col IS NULL

* queries that use no tables
* queries that generate warnings
* queries where the server uses a column-level privilege for any of the involved tables

* containing any of the functions shown below:
[time/date, user/database/connection, lock, rand, encrypt, sleep, ...]
BENCHMARK(), CONNECTION_ID(),
CONVERT_TZ(), CURDATE(), CURRENT_DATE(),
CURRENT_TIME(), CURRENT_TIMESTAMP(), CURTIME(),
DATABASE(), ENCRYPT() with one parameter,
FOUND_ROWS(), GET_LOCK()
LAST_INSERT_ID(), LOAD_FILE(), MASTER_POS_WAIT()
NOW(), RAND(), RELEASE_LOCK(),
SLEEP(), SYSDATE(), UNIX_TIMESTAMP() with no parameters
USER(), UUID(), UUID_SHORT()



//=== https://dev.mysql.com/doc/refman/5.1/en/query-cache-in-select.html
SELECT SQL_CACHE id, name FROM customer;
[ cache the result, if the query cache is operating in DEMAND mode ]
SELECT SQL_NO_CACHE id, name FROM customer;
[ do not cache the result ]


mysql buffer pool (data cache)


//=== https://dev.mysql.com/doc/refman/5.5/en/innodb-buffer-pool.html
* for the MySQL InnoDB storage engine
* buffer pool: a storage area (memory space) for caching data and indexes
* the larger the buffer pool, the more InnoDB acts like an "in-memory database"
* the buffer pool not only caches read data but also "write data" (from INSERT and UPDATE operations), so that disk writes can be grouped together for better performance



* Under 64-bit systems with large memory sizes, you can split the buffer pool into multiple parts [multiple buffer pool instance] to minimize contention among concurrent operations.


//===

* LRU-variant algorithm [LRU: least recently used algorithm ]
* “midpoint insertion strategy” treats the list as two sublists:
* head sublist for new blocks;
* tail sublist for old blocks
* midpoint is the boundary where head sublist meets tail sublist
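The midpoint-insertion behavior can be sketched as two lists (a simplified illustration; real InnoDB also applies innodb_old_blocks_time before promoting a page, which is omitted here):

```python
# Sketch of InnoDB's midpoint-insertion LRU: newly read pages enter the
# head of the "old" (tail) sublist; only a later access promotes them to
# the head of the "new" sublist. A one-off scan therefore cannot flush
# the hot pages out of the new sublist.
class MidpointLRU:
    def __init__(self, capacity, old_pct=37):   # 37 ~= 3/8 of the pool
        self.capacity = capacity
        self.old_target = max(1, capacity * old_pct // 100)
        self.new = []   # head sublist (hot pages), index 0 = most recent
        self.old = []   # tail sublist (cold pages), index 0 = sublist head

    def access(self, page):
        if page in self.new:
            self.new.remove(page)
            self.new.insert(0, page)
        elif page in self.old:               # second touch: promote
            self.old.remove(page)
            self.new.insert(0, page)
        else:                                # first read: midpoint insertion
            self.old.insert(0, page)
        # evict from the old tail when the pool is full
        while len(self.new) + len(self.old) > self.capacity:
            (self.old or self.new).pop()
        # let pages age across the midpoint so ~old_pct% stay "old"
        while len(self.old) < self.old_target and len(self.new) > 1:
            self.old.insert(0, self.new.pop())
```

Scanning many pages once only churns the old sublist; a page reaches the new sublist only when it is touched again.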




//=== system variables control the size of the buffer pool

innodb_buffer_pool_size

innodb_buffer_pool_instances
[ innodb_buffer_pool_size at least 1G ]
(Divides the buffer pool into a user-specified number of separate regions, each with its own LRU list and related data structures, to reduce contention during concurrent memory read and write operations.)
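The contention-reduction idea is that each page belongs to exactly one instance, so threads touching different pages usually lock different instances. A toy sketch (the hash function here is illustrative only, not InnoDB's actual page-to-instance assignment):

```python
# With N buffer pool instances, a page's (space_id, page_no) deterministically
# selects one instance, each with its own LRU list and mutex.
NUM_INSTANCES = 8   # innodb_buffer_pool_instances

def instance_for_page(space_id, page_no, n=NUM_INSTANCES):
    # deterministic toy hash, NOT InnoDB's real scheme
    return (space_id * 1_000_003 + page_no) % n
```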


innodb_old_blocks_pct
[ The default value is 37 (that is, 3/8 of the pool). ]

innodb_old_blocks_time
[ the time in ms a block must stay in the old sublist after its first access before it can move to the new sublist; helps protect the buffer pool against one-off table scans ]


//=== https://dev.mysql.com/doc/refman/5.6/en/innodb-information-schema-buffer-pool-tables.html
mysql> SHOW ENGINE INNODB STATUS


mysql> show variables like '%INNODB_BUFFER_POOL%';
+-------------------------------------+----------------+
| Variable_name                       | Value          |
+-------------------------------------+----------------+
| innodb_buffer_pool_dump_at_shutdown | OFF            |
| innodb_buffer_pool_dump_now         | OFF            |
| innodb_buffer_pool_filename         | ib_buffer_pool |
| innodb_buffer_pool_instances        | 8              |
| innodb_buffer_pool_load_abort       | OFF            |
| innodb_buffer_pool_load_at_startup  | OFF            |
| innodb_buffer_pool_load_now         | OFF            |
| innodb_buffer_pool_size             | 627048448      |
+-------------------------------------+----------------+



mysql> SHOW TABLES FROM INFORMATION_SCHEMA LIKE 'INNODB_BUFFER%';
+-----------------------------------------------+
| Tables_in_INFORMATION_SCHEMA (INNODB_BUFFER%) |
+-----------------------------------------------+
| INNODB_BUFFER_PAGE_LRU                        |
| INNODB_BUFFER_PAGE                            |
| INNODB_BUFFER_POOL_STATS                      |
+-----------------------------------------------+
3 rows in set (0.00 sec)



mysql> SELECT DISTINCT PAGE_TYPE FROM INFORMATION_SCHEMA.INNODB_BUFFER_PAGE
WHERE TABLE_NAME IS NULL OR (INSTR(TABLE_NAME, '/') = 0 AND INSTR(TABLE_NAME, '.') = 0);

+-------------------+
| PAGE_TYPE         |
+-------------------+
| SYSTEM            |
| IBUF_BITMAP       |
| INODE             |
| IBUF_INDEX        |
| INDEX             |
| TRX_SYSTEM        |
| UNDO_LOG          |
| FILE_SPACE_HEADER |
| UNKNOWN           |
+-------------------+
9 rows in set (0.20 sec)






mysql binlog_checksum


[Q] how to disable binlog_checksum on mysql master server?

http://dev.mysql.com/doc/refman/5.6/en/replication-options-binary-log.html
*** binlog_checksum has been introduced since mysql 5.6.2 (not available on mysql 5.5)

mysql> show variables like '%checksum';
+---------------------------+-------+
| Variable_name             | Value |
+---------------------------+-------+
| binlog_checksum           | NONE  |
| master_verify_checksum    | OFF   |
| slave_sql_verify_checksum | ON    |
+---------------------------+-------+
3 rows in set (0.01 sec)



***
on ubuntu --> /etc/mysql/my.cnf
on RDS --> DB parameter group


//=== http://mysqlmusings.blogspot.com/2011/04/replication-event-checksum.html

//=== binlog_checksum
binlog_checksum == NONE or CRC32 (CRC-32 checksum === ISO-3309)


"""...
Setting binlog_checksum to NONE means that no checksum is generated,
while setting it to CRC32 means that an ISO-3309 CRC-32 checksum is added to each binary log event.
..."""
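That ISO-3309 CRC-32 is the same algorithm exposed by zlib, so the checksum arithmetic is easy to reproduce (the payload below is a placeholder, not real binlog event framing):

```python
import zlib

# ISO-3309 CRC-32 (zlib's polynomial); binlog_checksum=CRC32 appends this
# checksum as a 4-byte trailer to every binary log event.
event_body = b"example binlog event bytes"   # placeholder payload
checksum = zlib.crc32(event_body) & 0xFFFFFFFF

# standard CRC-32 check value for the ASCII string "123456789"
assert zlib.crc32(b"123456789") & 0xFFFFFFFF == 0xCBF43926
```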




//=== master_verify_checksum
"""...
Setting this flag can be useful to verify that the event really written to the binary log is uncorrupted, but it is typically not needed in a replication setting since the slave should verify the event on reception.
..."""

master> SET GLOBAL MASTER_VERIFY_CHECKSUM=1;

//===
slave_sql_verify_checksum
"""...
indicates that the SQL thread should verify the checksum when reading it from the relay log on the slave. Note that this means that the I/O thread writes a checksum to the event written to the relay log, regardless of whether it received an event with a checksum or not.
..."""



//=== SHOW BINLOG EVENTS FROM pos;

"""...
master> SHOW BINLOG EVENTS FROM 261;
+-------------------+-----+------------+-----------+-------------+-------------+
| Log_name          | Pos | Event_type | Server_id | End_log_pos | Info        |
+-------------------+-----+------------+-----------+-------------+-------------+
| master-bin.000001 | 261 | Query      | 1         | 333         | BEGIN       |
| master-bin.000001 | 333 | Intvar     | 1         | 365         | INSERT_ID=1 |
..."""


//=== mysqlbinlog --verify-binlog-checksum
$ mysqlbinlog --verify-binlog-checksum master-bin.000001



April 23, 2015 (Thursday)

git checkout and branch



xxx git checkout is a local operation, nothing to do with remote repo xxx


//=== http://git-scm.com/book/en/v2/Git-Branching-Remote-Branches

* Remote branches are references (pointers) to the state of branches in your remote repos.


* Tracking Branches
Checking out a local branch from a remote branch "automatically creates"
a "tracking branch"
(the branch it tracks is sometimes called an "upstream branch").

Tracking branches are local branches that have a direct relationship to a remote branch.

Both git checkout -b and git checkout --track create "tracking branches".


$ git checkout -b [branch] [remotename]/[branch]
(-b implies --track)

$ git checkout --track origin/serverfix
Branch serverfix set up to track remote branch serverfix from origin.
Switched to a new branch 'serverfix'


* To set up a local branch with a different name than the remote branch,

$ git checkout -b mysrvfix origin/serverfix
Branch mysrvfix set up to track remote branch serverfix from origin.
Switched to a new branch 'mysrvfix'


* If you already have a local branch and want to set it to a remote branch,
or want to change the upstream branch you’re tracking, you can use the -u or --set-upstream-to

$ git branch -u origin/serverfix
Branch mylocalBranchCurrent set up to track remote branch serverfix from origin.


* ??? commit == check-in


//=== https://www.atlassian.com/git/tutorials/undoing-changes/git-checkout


//=== targets for git checkout
* files
* commits
* branches



"""...
Checking out a commit makes the entire working directory match that commit.
This can be used to view an old state of your project without altering your current state in any way.

Checking out a file lets you see an old version of that particular file,
leaving the rest of your working directory untouched.

...
unlike checking out a commit[the whole files],
this[check out a particular file] does affect the current state of your project.

The old file revision will show up as a “Change to be committed,”
giving you the opportunity to revert back to the previous version of the file



..."""



# checkout(or return to) the master branch
$ git checkout master

# Check out a particular file from a certain commit.
$ git checkout commit_id filename

# Check out all files from a certain commit.
$ git checkout commit_id



//=== http://git-scm.com/docs/git-checkout
# create a new branch
$ git checkout -b newbranch

$ git checkout -B newbranch [startpoint]
newbranch is created if it doesn't exist;
otherwise, it is reset.

This is a "transactional" equivalent of
$ git branch -f newbranch [startpoint]
$ git checkout newbranch
(the branch is not reset/created unless "git checkout" is successful.)



//=== http://gitready.com/intermediate/2009/01/09/checkout-remote-tracked-branch.html

$ git checkout --track -b branch_name origin/branch_name
or a simpler version:
$ git checkout -t origin/branch_name


??? always set the "branch.autosetupmerge" configuration flag to true,
so that branch tracking is done automatically when I do a "git checkout -b foo".



Creating a local branch and pushing it to remote is easy:

$ git branch localbranchname
$ git push origin localbranchname

but
there's a complication when you create a local branch / push it to remote
and want that local branch to track the remote branch.


To make the existing local branch track the remote branch, force-reset it
to the remote-tracking branch. This can be done with one line:

$ git branch -f localbranchname origin/remotebranchname

You must be on another branch for this to work since you can't delete the branch that you are on.






//=== git clean
https://www.atlassian.com/git/tutorials/undoing-changes/git-clean

"""...
git clean command removes untracked files from your working directory.
This is really more of a convenience command, since it’s trivial to see which files are untracked with git status and remove them manually.

Like an ordinary rm command, git clean is not undoable,
so make sure you really want to delete the untracked files before you run it.

The git clean command is often executed in conjunction with git reset --hard....

..."""



//=== http://git-scm.com/book/en/v2/Git-Branching-Remote-Branches
"""...

If you don’t want to type it every single time you push, you can set up a “credential cache”. The simplest is just to keep it in memory for a few minutes, which you can easily set up by running git config --global credential.helper cache.


..."""



no check-in for git?


[Q] no check-in for git?

DVCS : distributed version control system

//=== http://superuser.com/questions/436347/git-process-could-anyone-please-explain-the-check-in-check-out-process

"""...
There really isn't a 'Check in Check out' process in Git. Those references are for Centralized VCS not Distributed.

... to physically 'see changes' on the remote side after a commit ... You need a hook to broadcast to the remote working directory to pull from the updated repo ...

... Git is an extremely powerful and robust DVCS ... However, you need to fully understand how it works to effectively use it.


..."""

*** Git magic:
http://www-cs-students.stanford.edu/~blynn/gitmagic/index.html

*** Git for website management:
http://toroid.org/ams/git-website-howto



//===
Initially, nothing on local machine:
$ git clone git@github.com:username/reponame.git

if a copy already exists [check for the .git subfolder],
no need to clone again, just pull:
$ git pull origin master


//=== ??? to avoid cluttering the history with useless merges
$ git pull --rebase
or
$ git merge branchname --no-ff



//=== https://gittf.codeplex.com/
"""...
Git-TF is a set of cross-platform, command line tools that facilitate sharing of changes between Microsoft Team Foundation Server, Visual Studio Online and Git.

These tools make it easy to clone sources from TFS, fetch updates from TFS, and update TFS with changes committed locally in Git.
..."""

April 21, 2015 (Tuesday)

command line to upload file to dropbox


//=== http://stackoverflow.com/questions/14000111/curl-upload-command-using-bash-terminal
[Bash code]
#!/bin/bash
CURL_BIN="/usr/bin/curl"
#Note: This option explicitly allows curl to perform "insecure" SSL connections and transfers.
#CURL_ACCEPT_CERTIFICATES="-k"
CURL_PARAMETERS="--progress-bar"
APPKEY="zrwv8z3bycfk3m8"
OAUTH_ACCESS_TOKEN="aaaaaaaa"
APPSECRET="aaaaaaaaaa"
OAUTH_ACCESS_TOKEN_SECRET="aaaaaaaaa"
ACCESS_LEVEL="dropbox"
API_UPLOAD_URL="https://api-content.dropbox.com/1/files_put"
RESPONSE_FILE="temp2.txt"
FILE_SRC="temp.txt"

$CURL_BIN $CURL_ACCEPT_CERTIFICATES $CURL_PARAMETERS -v -i -o "$RESPONSE_FILE" --upload-file "$FILE_SRC" "$API_UPLOAD_URL/$ACCESS_LEVEL/$FILE_DST?oauth_consumer_key=$APPKEY&oauth_token=$OAUTH_ACCESS_TOKEN&oauth_signature_method=PLAINTEXT&oauth_signature=$APPSECRET%26$OAUTH_ACCESS_TOKEN_SECRET"





[Manual code]

xxx curl --insecure --progress-bar -v -i -o temp2.txt --upload-file temp.txt \
https://api-content.dropbox.com/1/files_put/dropbox/attachments/temp.txt?oauth_consumer_key=aaaaaaaaaa&oauth_token=aaaaaaaaa&oauth_signature_method=PLAINTEXT&oauth_signature=aaaaaaaaa%26aaaaaaaaaa

--> the URL must be quoted; otherwise the shell treats each '&' as a background operator:
curl --insecure --progress-bar -v -i -o temp2.txt --upload-file temp.txt \
"https://api-content.dropbox.com/1/files_put/dropbox/attachments/temp.txt?oauth_consumer_key=aaaaaaaaaa&oauth_token=aaaaaaaaa&oauth_signature_method=PLAINTEXT&oauth_signature=aaaaaaaaa%26aaaaaaaaaa"
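The `%26` in that query string is simply a percent-encoded `&`: the PLAINTEXT OAuth 1.0 signature is the app secret and token secret joined by `&`. A quick sketch with the same dummy secrets:

```python
from urllib.parse import quote

# PLAINTEXT OAuth 1.0 signature: "<app secret>&<token secret>",
# percent-encoded when placed in the query string (dummy values).
app_secret = "aaaaaaaaaa"
token_secret = "aaaaaaaaa"
signature = quote(app_secret + "&" + token_secret, safe="")
print(signature)  # the '&' becomes '%26'
```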


command line to create repo


instead of using web browser,
use command line shell to create repo on remote git server
-- bitbucket or github


//=== bitbucket
http://stackoverflow.com/questions/13788485/create-new-repo-on-bitbucket-from-git-bash-terminal

$ curl --user username:passwd https://api.bitbucket.org/1.0/repositories/ \
--data name=REPO_NAME \
--data is_private='true' \
--data owner=team_name

$ git remote add bitbucket git@bitbucket.org:username/reponame.git
# git push -u bitbucket --all
# git push -u bitbucket --tags
$ git push -u bitbucket master


//=== github
http://stackoverflow.com/questions/2423777/is-it-possible-to-create-a-remote-repo-on-github-from-the-cli-without-ssh

curl -u 'user13' https://api.github.com/user/repos -d '{"name":"phonecat1"}'

[https git]
git remote add github https://github.com/user13/phonecat1.git

[ssh git]
git remote add github git@github.com:user13/phonecat1.git

git push -u github master


April 20, 2015 (Monday)

mysql Time_Format()


//=== http://www.techonthenet.com/mysql/functions/time_format.php
%f Microseconds (000000 to 999999)
%f is available starting in MySQL 4.1.1
%H Hour (00 to 23 generally, but can be higher)
%h Hour (01 to 12)
%I Hour (01 to 12)
%i Minutes (00 to 59)
%p AM or PM
%r Time in 12 hour AM or PM format (hh:mm:ss AM/PM)
%S Seconds (00 to 59)
%s Seconds (00 to 59)
%T Time in 24 hour format (hh:mm:ss)
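Most of these specifiers map onto strftime(); the sketch below is a rough convenience translation, not an exhaustive mapping (note MySQL uses %i for minutes, while strftime uses %M):

```python
from datetime import time

# Rough MySQL TIME_FORMAT -> Python strftime translation (sketch only).
MYSQL_TO_STRFTIME = {
    "%H": "%H",        # hour 00-23
    "%h": "%I",        # hour 01-12
    "%I": "%I",        # hour 01-12
    "%i": "%M",        # minutes 00-59 (MySQL %i, strftime %M)
    "%p": "%p",        # AM or PM
    "%S": "%S",        # seconds 00-59
    "%s": "%S",
    "%T": "%H:%M:%S",  # 24-hour hh:mm:ss
}

def mysql_time_format(t, fmt):
    for ms, py in MYSQL_TO_STRFTIME.items():
        fmt = fmt.replace(ms, py)
    return t.strftime(fmt)

print(mysql_time_format(time(14, 5, 9), "%H:%i:%s"))  # 14:05:09
```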

April 16, 2015 (Thursday)

js get filename/basename from a path


//=== http://stackoverflow.com/questions/3820381/need-a-basename-function-in-javascript

myutil.js

function getfilename(path) {
    return path.split(/[\\/]/).pop();
}

function getbasename(path) {
    var base = String(path).substring(path.lastIndexOf('/') + 1);
    var pos = base.lastIndexOf('.');
    if (pos != -1)
        base = base.substring(0, pos);
    return base;
}

module.exports = {
    filename: getfilename,
    basename: getbasename
};
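For comparison (an aside, not part of the original post), Python's standard library provides the same two helpers out of the box; posixpath is used so '/' handling is platform-independent:

```python
import posixpath

def getfilename(path):
    # everything after the last separator: "a/b/c.txt" -> "c.txt"
    return posixpath.basename(path)

def getbasename(path):
    # additionally drop the extension: "a/b/c.txt" -> "c"
    name = posixpath.basename(path)
    stem, _ext = posixpath.splitext(name)
    return stem
```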


curl post, get


//=== http://stackoverflow.com/questions/14978411/http-post-and-get-using-curl-in-linux


GET with JSON:
curl -i -H "Accept: application/json" -H "Content-Type: application/json" -X GET http://hostname/resource

GET with XML:
curl -H "Accept: application/xml" -H "Content-Type: application/xml" -X GET http://hostname/resource


POST for posting data:
curl --data "param1=value1&param2=value2" http://hostname/resource


POST for file upload:
curl --form "fileupload=@filename.txt" http://hostname/resource


RESTful HTTP Post:
curl -X POST -d @filename http://hostname/resource


For logging into a site (auth):
curl -d "username=admin&password=admin&submit=Login" --dump-header headers http://localhost/Login
curl -L -b headers http://localhost/



//===
GET approach with JSON result
curl -i -H "Accept: application/json" http://someHostName/someEndpoint | grep }| python -mjson.tool


POST approach with JSON result
curl -X POST -H "Accept: Application/json" -H "Content-Type: application/json" http://someHostName/someEndpoint -d '{"id":"IDVALUE","name":"Mike"}' | grep }| python -mjson.tool




//===
man curl:
-H/--header

(HTTP) Extra header to use when getting a web page. You may specify
any number of extra headers. Note that if you should add a custom
header that has the



//===
curl --header "X-MyHeader: 123" www.google.com


Git Bash font size


//=== http://stackoverflow.com/questions/2621281/unable-to-modify-git-bash-windows-shortcut
on win 8,
change the target of the shortcut for "Git Bash" from

"C:\Program Files (x86)\Git\bin\sh.exe" --login -i

to

"C:\Program Files (x86)\Git\Git Bash.vbs"




phantomjs user-agent setting

//=== http://phantomjs.org/api/webpage/property/settings.html

var webPage = require('webpage');
var page = webPage.create();

page.settings.userAgent =
    'Mozilla/5.0 (Windows NT 6.1; WOW64) ' +
    'AppleWebKit/537.36 (KHTML, like Gecko) ' +
    'Chrome/37.0.2062.120 Safari/537.36';


or

page.settings.userAgent = "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)";

js code re-format, beautify


//== http://stackoverflow.com/questions/18985/what-javascript-code-beautifier-can-i-use-via-the-command-line-on-windows-and-li


//=== http://jsbeautifier.org/
* jsbeautifier's source code is free and open, available on github under the MIT licence:
https://github.com/beautify-web/js-beautify

* jsbeautifier also has a command-line version (need V8?),
a python library, and a node package as well.




//=== uglify-js under nodejs

$ sudo npm install -g uglify-js

$ uglifyjs foo.js --beautify --output cutefoo.js

uglifyjs uses spaces for indentation by default;
to convert the 4-space indentation to tabs:
$ unexpand --tabs=4 cutefoo.js > cuterfoo.js



//=== uncrustify?
http://uncrustify.sourceforge.net/



April 15, 2015 (Wednesday)

phantomjs 2.x build from source


//=== http://phantomjs.org/build.html

* install build tools on ubuntu 14.04
$ sudo apt-get install build-essential g++ flex bison gperf ruby perl \
libsqlite3-dev libfontconfig1-dev libicu-dev libfreetype6 libssl-dev \
libpng-dev libjpeg-dev

??? It is also recommended to install the ttf-mscorefonts-installer package.

* download phantomjs-2.x source code or git clone phantomjs.git
$ wget https://bitbucket.org/ariya/phantomjs/downloads/phantomjs-2.0.0-source.zip
$ unzip phantomjs-2.0.0-source.zip -d workfolder

or

$ cd workfolder
workfolder$ git clone git://github.com/ariya/phantomjs.git


//===
workfolder$ cd phantomjs
workfolder/phantomjs$ git checkout 2.0
workfolder/phantomjs$ ./build.sh --jobs 1


*** build.sh by default launches parallel compile jobs based on the number of available CPU cores,
e.g. 4 jobs on a modern hyperthreaded dual-core processor.
but
when building on a virtual machine/server or other limited environment,
reduce the number of jobs by passing a number,
e.g. ./build.sh --jobs 1 to run only one compile job at a time.



RDS amazon 12-month free-tier database


//=== http://aws.amazon.com/rds/faqs/

* 750 hours per month (total instance hours summed across all instances)
* 20 GB of database storage, 10 million I/Os and 20 GB of backup storage for free per month.
* can run more than one Single-AZ Micro DB instance simultaneously

but

*** any use exceeding the free quota will be charged at standard Amazon RDS prices.
Single-AZ db.t2.micro $0.017/hour [ http://aws.amazon.com/rds/pricing/ ]




"""...
Free Tier
Q: What does the AWS Free Tier for Amazon RDS offer?

The AWS Free Tier for Amazon RDS offer provides free use of Single-AZ Micro DB instances running MySQL, PostgreSQL, Oracle ("Bring-Your-Own-License (BYOL)" licensing model) and SQL Server Express Edition. The free usage tier is capped at 750 instance hours per month. Customers also receive 20 GB of database storage, 10 million I/Os and 20 GB of backup storage for free per month.

Q: For what time period will the AWS Free Tier for Amazon RDS be available to me?

New AWS accounts receive 12 months of AWS Free Tier access. Please see the AWS Free Tier FAQs for more information.

Q: Can I run more than one DB instance under the AWS Free Usage Tier for Amazon RDS?

Yes. You can run more than one Single-AZ Micro DB instance simultaneously and be eligible for usage counted under the AWS Free Tier for Amazon RDS. However, any use exceeding 750 instance hours, across all Amazon RDS Single-AZ Micro DB instances, across all eligible database engines and regions, will be billed at standard Amazon RDS prices.

For example: if you run two Single-AZ Micro DB instances for 400 hours each in a single month, you will accumulate 800 instance hours of usage, of which 750 hours will be free. You will be billed for the remaining 50 hours at the standard Amazon RDS price.

Q: Do I have access to 750 instance hours each of the MySQL, PostgreSQL, Oracle and SQL Server Micro DB instances under the AWS Free Tier?

No. A customer with access to the AWS Free Tier can use up to 750 instance hours of Micro instances running either MySQL, PostgreSQL, Oracle or SQL Server Express Edition. Any use exceeding 750 instance hours, across all Amazon RDS Single-AZ Micro DB instances, across all eligible database engines and regions, will be billed at standard Amazon RDS prices.

Q: Is the AWS Free Tier for Amazon RDS available in all AWS Regions?

The AWS Free Tier for Amazon RDS is available in all AWS Regions except GovCloud (US).

Q: How am I billed when my instance-hour usage exceeds the Free Tier benefit?

You are billed at standard Amazon RDS prices for instance hours beyond what the Free Tier provides. See the Amazon RDS pricing page for details.
..."""
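The FAQ's billing example is simple arithmetic; with the db.t2.micro price quoted above ($0.017/hour), the overage cost works out as:

```python
# Two Single-AZ micro instances x 400 hours each, per the FAQ example.
FREE_HOURS = 750
PRICE_PER_HOUR = 0.017          # Single-AZ db.t2.micro, per the pricing page

used_hours = 2 * 400            # 800 instance hours accumulated
billable_hours = max(0, used_hours - FREE_HOURS)
cost = billable_hours * PRICE_PER_HOUR
print(billable_hours, round(cost, 2))   # 50 hours -> $0.85
```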


April 13, 2015 (Monday)

docker on ubuntu on hyper-V


//=== On ubuntu_x64 on hyperV,
when using the binary docker downloaded by
$ wget https://get.docker.com/builds/Linux/x86_64/docker-latest -O docker
$ chmod +x docker
[ http://it.3hd.me/2015/03/docker-install-by-binary.html ]


$ ~/docker ps
FATA[0000] Get http:///var/run/docker.sock/v1.17/containers/json:
dial unix /var/run/docker.sock: no such file or directory.
Are you trying to connect to a TLS-enabled daemon without TLS?

-->
$ sudo ~/docker -d &
$ ~/docker ps
$ ~/docker pull username/img:tag



//=== http://stackoverflow.com/questions/27528337/am-i-trying-to-connect-to-a-tls-enabled-daemon-without-tls

* docker acts both as server and client.
$ docker version
is command internal to the docker executable and not to the daemon/service running.

$ docker images
$ docker ps
$ docker pull centos
are commands which send queries to the docker daemon/service running.


* Docker by default supports TLS connection to its daemon/service.

[Q] when is TLS connectivity not required?
Only if the user you are logged in as is part of user group docker
or you have used sudo before the command
e.g. $ sudo docker images does not require TLS connectivity.



//=== http://itproctology.blogspot.com/2015/02/docker-on-ubuntu-on-hyper-v-2012-r2.html

*** steps to install docker on ubuntu on hyperv :
[use the lxc-docker package for ubuntu from docker.io]

$ sudo apt-get update

Upgrade the components (aka patch the OS)
$ sudo apt-get upgrade -y

Add the Docker gpg key (that is 'qO', not 'qZero')
$ sudo sh -c "wget -qO- https://get.docker.io/gpg | apt-key add -"

Add the docker repository to the apt sources list
$ sudo sh -c "echo deb http://get.docker.io/ubuntu docker main >> /etc/apt/sources.list.d/docker.list"

Update the local apt repository after adding the docker reference
$ sudo apt-get update

Install (latest) Docker (on 12/15/14 this is 1.4.0)
$ sudo apt-get install lxc-docker -y

$ docker version
$ docker pull username/imgname:tagname

$ ps aux| grep docker

--> service name is docker rather than "docker.io"
$ sudo service docker stop
$ sudo service docker start



//=== http://www.petri.com/microsoft-announces-hyper-v-containers.htm


//=== boot_to_docker iso for hyperV
http://blogs.msdn.com/b/scicoria/archive/2014/10/09/getting-docker-running-on-hyper-v-8-1-2012-r2.aspx




docker commit a running container to a new image


http://stackoverflow.com/questions/25211198/docker-how-to-change-repository-name-or-rename-image

# mycontainer1 is the container name
# myimg1 is the image name
$ docker run --name="mycontainer1" myimg1
...
... make some changes to container
...

# commit changes and save to the new image name (image tag)
$ docker commit mycontainer1 username/new_img2:new_tag

$ docker rm -f mycontainer1


# push image to dockerHub
$ docker push username/new_img2:new_tag

April 9, 2015 (Thursday)

mysql rm user


//=== https://dev.mysql.com/doc/refman/5.0/en/drop-user.html
* revoke privileges first
* then drop user ...

under mysql>

REVOKE INSERT ON *.* FROM 'jeffrey'@'localhost';

REVOKE all ON *.* FROM 'myrepl001'@'*';
drop user 'myrepl001'@'*';

REVOKE all ON *.* FROM 'myrepl7'@'172.31.*';
drop user 'myrepl7'@'172.31.*';

REVOKE all ON *.* FROM 'root'@'::1';
drop user 'root'@'::1';

REVOKE all ON *.* FROM 'root'@'e80eb84c57ae';
drop user 'root'@'e80eb84c57ae';

REVOKE all ON *.* FROM 'root'@'127.0.0.1';
drop user 'root'@'127.0.0.1';


//===
* change password for the current login user
mysql> set password= password('plaintext');

* change password for a certain user
(login as admin or root)

mysql> set password for 'u13'@'host26' = password('plaintext');


April 7, 2015 (Tuesday)

mysql , change master to, RDS limitation


//=== https://dev.mysql.com/doc/refman/5.0/en/change-master-to.html
//=== http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/mysql_rds_set_external_master.html


*** "change master to" command is run on slave side

*** "change master to" is not allowed on RDS,
--> ERROR 1227 (42000): Access denied; you need (at least one of) the SUPER privilege(s) for this operation
--> use "CALL mysql.rds_set_external_master" instead
[""" Warning
Do not use mysql.rds_set_external_master to manage replication between two Amazon RDS DB instances.
Use it only when replicating with an instance of MySQL running external to RDS..."""]


mysql> STOP SLAVE; -- if replication was running
RDS mysql> CALL mysql.rds_stop_replication;

mysql> CHANGE MASTER TO
MASTER_HOST='master-ip',
MASTER_PORT=3306,
MASTER_USER='u13_repl',
MASTER_PASSWORD='u13_passwd',
MASTER_LOG_FILE='binlog.0000x',
MASTER_LOG_POS=pos;

RDS mysql> CALL mysql.rds_set_external_master ('master-ip', 3306, 'u13_repl', 'u13_passwd', binlog-fname, binlog-pos, 0);


mysql> START SLAVE; -- if you want to restart replication
RDS mysql> CALL mysql.rds_start_replication;



RDS mysql> CALL mysql.rds_reset_external_master;
[clear config for master host on RDS slave]
+----------------------+
| message |
+----------------------+
| Slave has been reset |
+----------------------+
1 row in set (0.06 sec)

Query OK, 0 rows affected (0.06 sec)



April 6, 2015 (Monday)

mysql replication set up


mysql replication : the slave (read replica) reads data from the master (source instance)
[ user@slave connects to the master to read data,
instead of
user@master connecting to the slave to write data ]


//=== https://dev.mysql.com/doc/refman/5.5/en/replication-howto.html

0. On the master, you must enable binary logging and configure a unique server ID.
[ /etc/mysql/my.cnf
bind-address = 172.31.42.230
server-id = 1
#log_bin = /var/log/mysql/mysql-bin.log
log_bin= myrds-bin # /var/lib/mysql/myrds-bin.000x
]


1. On each slave that will connect to the master, configure a unique server ID

2. create a dedicated user that your slaves will use to connect to the master
and read the binary log for replication.


[ex]
on the master, to set up a new user, repl, that can connect for replication "from" any host in the myslaves.com domain
--> @'%.myslaves.com'

on the master
mysql> CREATE USER 'repl'@'%.myslaves.com' IDENTIFIED BY 'slavepass';
mysql> GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%.myslaves.com';
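When several replication accounts need provisioning, the pair of statements above can be generated from a helper; a minimal sketch (`repl_user_sql` is a hypothetical name):

```python
# Sketch: generate the two statements that provision a replication user.
# The account/host values below are the examples from above.

def repl_user_sql(user, host_pattern, password):
    account = f"'{user}'@'{host_pattern}'"
    return [
        f"CREATE USER {account} IDENTIFIED BY '{password}';",
        f"GRANT REPLICATION SLAVE ON *.* TO {account};",
    ]

for stmt in repl_user_sql('repl', '%.myslaves.com', 'slavepass'):
    print(stmt)
```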



//=== Replication with a MySQL Instance Running External to Amazon RDS
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MySQL.Procedural.Importing.External.Repl.html

master instance : source instance, mysql instance external to RDS
slave instance : read replica, RDS mysql instance

0. master -
add binlog_do_db=target_db_name to /etc/mysql/my.cnf
$ sudo service mysql restart

1. master - make the source instance read-only:
mysql> FLUSH TABLES WITH READ LOCK;
mysql> SET GLOBAL read_only= ON;

2. master - to determine the binlog location.
mysql> SHOW MASTER STATUS;

File Position
------------------------------------
myrds-bin.000031 107
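The File/Position pair shown above is exactly what CHANGE MASTER TO (or rds_set_external_master) needs. A sketch that parses it out of batch-mode output (`mysql -B -e "SHOW MASTER STATUS"`), assuming tab-separated columns as printed in that mode:

```python
# Sketch: pull the binlog file name and position out of the tab-separated
# output produced by `mysql -B -e "SHOW MASTER STATUS"`.

def parse_master_status(output):
    lines = [l for l in output.strip().splitlines() if l.strip()]
    header = lines[0].split('\t')   # File, Position, Binlog_Do_DB, ...
    values = lines[1].split('\t')
    row = dict(zip(header, values))
    return row['File'], int(row['Position'])

raw = ("File\tPosition\tBinlog_Do_DB\tBinlog_Ignore_DB\n"
       "myrds-bin.000031\t107\t\t\n")
print(parse_master_status(raw))   # ('myrds-bin.000031', 107)
```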


3. master - mysqldump to export; slave - mysql to import
mysqldump --databases target_db_name --single-transaction --compress --order-by-primary \
-u local_user -plocal_password | mysql --host=slave --port=3306 -u slave_user_name -pslave_password
[ why --single-transaction ? it dumps a consistent InnoDB snapshot without holding table locks for the whole export ]

4. master - make the source instance writeable again:
mysql> SET GLOBAL read_only = OFF;
mysql> UNLOCK TABLES;


5. slave - To find the IP address of the slave instance ( RDS MySQL instance)
host RDS_MySQL_DB_host_name

master- allow the slave IP to connect to master instance(external mysql instance) for replication read.

6. master- create user 'repl_user' for replication
mysql> CREATE USER 'repl_user'@'%.myslaves.com' IDENTIFIED BY 'slavepass';
mysql> GRANT REPLICATION CLIENT,REPLICATION SLAVE ON *.* TO 'repl_user'@'%.myslaves.com';



7. slave -
mysql> STOP SLAVE;
mysql> CHANGE MASTER TO ... [set up master info so slave can connect to master properly]
mysql> START SLAVE;


*** since RDS not allow stop slave/start slave/change master to
-->
mysql> CALL mysql.rds_stop_replication;

mysql> CALL mysql.rds_set_external_master ('master.masters.com', 3306,
'repl_user', 'password', 'myrds-bin.000031', 107, 0);

mysql> CALL mysql.rds_start_replication





//=== http://dba.stackexchange.com/questions/76194/add-database-to-existing-replication-which-uses-binlog-do-db
[ex]
to copy a new database, database4, onto a slave that already has 3 databases replicating,
the problem then is that replication will need to be paused whilst you do this.


1) MASTER - restart the master with new binlog-do-db ( remove all binlog-do-db in order to start populating the binary log with database4's data)

2) SLAVE - Check and wait for the slave Database's replication up to date (SHOW SLAVE STATUS > Seconds_Behind_master).

3) MASTER - mysql> FLUSH TABLES WITH READ LOCK; (to prevent new data from entering the master)

USE database4;
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;
[write down the binlog filename and position]

4) SLAVE - Check replication is fully caught up and no data is replicating
(read_master_log_pos and exec_master_log_pos should be the same and not changing in "show slave status;")
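The catch-up check in step 4 can be scripted; a sketch that reads the text of `SHOW SLAVE STATUS\G` and compares the two log positions (field names as printed by MySQL; the helper itself is illustrative):

```python
# Sketch: decide whether a slave is fully caught up by comparing
# Read_Master_Log_Pos and Exec_Master_Log_Pos in `SHOW SLAVE STATUS\G` output.

def slave_caught_up(status_text):
    fields = {}
    for line in status_text.splitlines():
        if ':' in line:
            key, _, value = line.partition(':')
            fields[key.strip()] = value.strip()
    return (fields.get('Read_Master_Log_Pos') == fields.get('Exec_Master_Log_Pos')
            and fields.get('Seconds_Behind_Master') == '0')

sample = """
             Read_Master_Log_Pos: 107
             Exec_Master_Log_Pos: 107
           Seconds_Behind_Master: 0
"""
print(slave_caught_up(sample))   # True
```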

5) SLAVE - STOP SLAVE; (to stop any data replicating into slave)

6) Master - Run MySQLDUMP with --single-transaction option

7) Master - Once MySQLDUMP has started running unlock the master Database by
mysql> UNLOCK TABLES; ( hopefully keeping downtime to a minimum )

* MySQLDUMP may lock the database4 schema while it works depending on your system

8) Slave - Once MySQLDUMP has completed, import it into the slave Database

9) Slave - check the newly imported database4 looks correct

10) Slave - add replicate-do-db=database4 to the my.cnf file and restart the slave instance.



//===
[Q] how to set replicate_do_db on rds mysql slave?
--> seems not supported yet by RDS.

http://stackoverflow.com/questions/18370746/can-you-replicate-a-specific-database-or-table-using-amazons-rds

"""...
By default RDS simply just replicates the entire master's databases over to the slave's.
But we only want to do specific tables.
...

These settings exists in MySQL, I do not see them on the custom parameter settings for RDS ...

--replicate-ignore-db=db_name
--replicate-ignore-table=db_name.tbl_name
..."""



//=== reference links
https://dev.mysql.com/doc/refman/5.5/en/replication-howto.html

http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MySQL.Procedural.Importing.External.Repl.html

http://dba.stackexchange.com/questions/76194/add-database-to-existing-replication-which-uses-binlog-do-db

http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/mysql_rds_set_external_master.html



mysql read lock, read_only


[Q] which variable in mysql corresponds to flush table read lock

mysql> show variables like '%read_only%';
mysql> show variables like '%lock%';

innodb_print_all_deadlocks | OFF |
| innodb_table_locks | ON |
| key_cache_block_size | 1024 |
| lock_wait_timeout | 31536000 |
| locked_in_memory | OFF |
| max_write_lock_count | 18446744073709551615


//=== http://mysqlha.blogspot.com/2008/07/what-exactly-does-flush-tables-with.html
FLUSH TABLES WITH READ LOCK

"""...
The manual describes it with this:
Closes all open tables and locks all tables for all databases with a read lock until you explicitly release the lock by executing UNLOCK TABLES. This is very convenient way to get backups if you have a filesystem such as Veritas that can take snapshots in time.
FLUSH TABLES WITH READ LOCK acquires a global read lock and not table locks, so it is not subject to the same behavior as LOCK TABLES and UNLOCK TABLES with respect to table locking and implicit commits:

...
If long-running statements were started prior to step 2, then the FLUSH TABLES WITH READ LOCK command will block until they complete. This is a bad state to get stuck in as the server is in read-only mode at this point and this statement is frequently run on a primary.
It might be a good idea to kill long-running statements prior to running the FLUSH command or when it gets stuck.

..."""



//=== pattern for lock and unlock
put your whole database to read only mode by this commands:

mysql> FLUSH TABLES WITH READ LOCK;
mysql> SET GLOBAL read_only = 1;

backup data,

then restore to normal mode with:

mysql> SET GLOBAL read_only = 0;
mysql> UNLOCK TABLES;
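One way to keep the ordering straight is to encode the pattern as a list, so the unlock steps always mirror the lock steps in reverse; a minimal sketch:

```python
# Sketch: the read-only backup pattern above as an ordered statement list.
# The dump step placeholder stands for whatever backup tool is used.

def backup_statements(dump_step="-- run mysqldump here --"):
    return [
        "FLUSH TABLES WITH READ LOCK;",
        "SET GLOBAL read_only = 1;",
        dump_step,
        "SET GLOBAL read_only = 0;",   # undo in reverse order
        "UNLOCK TABLES;",
    ]

for stmt in backup_statements():
    print(stmt)
```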



//=== 1,0 --> ON, OFF

mysql> FLUSH TABLES WITH READ LOCK;
mysql> SET GLOBAL read_only= ON;

mysqldump to export data ...

mysql> SET GLOBAL read_only = OFF;
mysql> UNLOCK TABLES;




rds-modify-db-instance options


//=== http://docs.aws.amazon.com/AmazonRDS/latest/CommandLineReference/CLIReference-cmd-ModifyDBInstance.html

rds-modify-db-instance options

...
[-r (--backup-retention-period) value ]
[-w (--preferred-maintenance-window) value ]
[-b (--preferred-backup-window) value ]
[-tca (--tde-credential-arn) value ]
[-tcp (--tde-credential-password) value ]
[ -cert (--certificate-identifier) value ]
...

-b
Constraints: Must be in the format hh24:mi-hh24:mi.
Must be at least 30 minutes.
Times should be 24-hour Universal Time Coordinated (UTC).


-r
Constraints:
Must be a value from 0 to 35.
Cannot be set to 0 if the DB instance is a source to Read Replicas


[ex] set the backup window to 03:00 to 03:30
PROMPT> rds-modify-db-instance mydbinstance --preferred-backup-window 03:00-03:30


[ex] set the "weekly" preferred maintenance window
PROMPT> rds-modify-db-instance mydbinstance --preferred-maintenance-window Tue:04:00-Tue:04:30

[ex] set the retention period to 14 days
PROMPT> rds-modify-db-instance mydbinstance --backup-retention-period 14 --apply-immediately
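The -b and -r constraints above can be checked locally before calling rds-modify-db-instance; a sketch (helper names are made up; the no-overlap-with-maintenance-window rule is not checked):

```python
# Sketch: validate a backup window (hh24:mi-hh24:mi, at least 30 minutes)
# and a backup retention period (0..35 days) against the listed constraints.

import re

def window_minutes(window):
    """Return the length of a backup window in minutes, or raise ValueError."""
    m = re.fullmatch(r'(\d{2}):(\d{2})-(\d{2}):(\d{2})', window)
    if not m:
        raise ValueError('window must be in hh24:mi-hh24:mi format')
    h1, m1, h2, m2 = (int(g) for g in m.groups())
    start, end = h1 * 60 + m1, h2 * 60 + m2
    return (end - start) % (24 * 60)   # the window may wrap past midnight

def valid_backup_settings(window, retention_days):
    return window_minutes(window) >= 30 and 0 <= retention_days <= 35

print(valid_backup_settings('03:00-03:30', 14))   # True
print(valid_backup_settings('03:00-03:15', 14))   # False: shorter than 30 min
```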



rds mysql auto backup


//=== http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.BackingUpAndRestoringAmazonRDSInstances.html

* RDS automatic backup for the MySQL DB engine only support InnoDB storage engine;
for MyISAM storage engine, restoring from backups may lead to unreliable behavior;
you can attempt to manually repair tables after a crash by REPAIR command
--> http://dev.mysql.com/doc/refman/5.5/en/repair-table.html

* daily automated backup + real-time db change logs(transaction logs) archive
--> point-in-time recovery up to the LatestRestorableTime (typically up to the last five minutes)


*** backup window, retention period
A "daily" automated backup occurs during a configurable period of time, "the preferred backup window".
Backups created during the backup window are retained for a configurable number of days "the backup retention period".

During the backup window, storage I/O may be suspended while your data is being backed up and you may experience longer latency.

This I/O suspension typically lasts for the duration of the snapshot. This period of I/O suspension is shorter for Multi-AZ DB deployments, since the backup is taken from the standby.

RDS assigns a default 30-minute backup window which is selected at random from an 8-hour block of time per region.

* The backup window cannot overlap with the weekly maintenance window for the DB instance.

[ex] set the backup window to 03:00 to 03:30
PROMPT> rds-modify-db-instance mydbinstance --preferred-backup-window 03:00-03:30




//===
If you don't set the backup retention period, RDS uses a default retention period of one day.
Valid values are 0 (for no backup retention) to a maximum of 35 days.

//=== http://aws.amazon.com/about-aws/whats-new/2012/03/19/amazon-rds-increases-backup-retention-period/

the maximum retention period for automated backups has been increased from eight days to thirty five days. This new limit will enable you to store more than a month of backups.


//=== disable auto backup
* to temporarily disable automated backups in certain situations; for example, while loading large amounts of data.
by setting the backup retention period to 0.

PROMPT> rds-modify-db-instance mydbinstance --backup-retention-period 0 --apply-immediately

PROMPT> rds-describe-db-instances mydbinstance --headers


//=== enable auto backup
* enable auto backup by setting the backup retention period to a non-zero value.

[ex] Set the backup retention period to 3 days.
PROMPT> rds-modify-db-instance mydbinstance --backup-retention-period 3 --apply-immediately




rds mysql stored procedures


http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.MySQL.SQLRef.html

"""...
The following system stored procedures are supported for Amazon RDS DB instances running MySQL to manage replication.

mysql.rds_set_external_master

mysql.rds_reset_external_master

mysql.rds_start_replication

mysql.rds_stop_replication

mysql.rds_skip_repl_error

mysql.rds_next_master_log

mysql.rds_innodb_buffer_pool_dump_now

mysql.rds_innodb_buffer_pool_load_now

mysql.rds_innodb_buffer_pool_load_abort
..."""


mysql binlog_do_db, replicate_do_db troubles


* master-side filtering:
binlog-do-db, binlog-ignore-db,

* slave-side filtering:
replicate-do-db, replicate-ignore-db
replicate-do-table, replicate-wild-do-table



//=== https://dev.mysql.com/doc/refman/5.5/en/replication-options-binary-log.html

binlog-do-db + Statement-based logging
binlog-do-db + Row-based logging

binlog-format : statement-based, row-based, mixed

"""...
...
--binlog-do-db=db_name

This option affects binary logging in a manner similar to the way that --replicate-do-db affects replication.

The effects of this option depend on whether the statement-based or row-based logging format is in use, in the same way that the effects of --replicate-do-db depend on whether statement-based or row-based replication is in use

...
the format used to log a given statement may not necessarily be the same as that indicated by the value of binlog_format. For example, DDL statements such as CREATE TABLE and ALTER TABLE are always logged as statements,



...
Statement-based logging. Only those statements are written to the binary log where the default database (that is, the one selected by USE) is db_name. To specify more than one database, use this option multiple times, once for each database; however, doing so does not cause cross-database statements such as UPDATE some_db.some_table SET foo='bar' to be logged while a different database (or no database) is selected.

Warning
To specify multiple databases you must use multiple instances of this option. Because database names can contain commas, the list will be treated as the name of a single database if you supply a comma-separated list.

...
Row-based logging. Logging is restricted to database db_name. Only changes to tables belonging to db_name are logged; the default database has no effect on this. Suppose that the server is started with --binlog-do-db=sales and row-based logging is in effect, and then the following statements are executed:

USE prices;
UPDATE sales.february SET amount=amount+100;
The changes to the february table in the sales database are logged in accordance with the UPDATE statement; this occurs whether or not the USE statement was issued.



..."""
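The quoted rules reduce to a small decision function; the example below reproduces the USE prices / UPDATE sales.february case (a simplification that ignores MIXED format and the DDL special cases):

```python
# Sketch of the binlog-do-db filtering rule quoted above:
# statement-based logging keys on the *default* database (USE ...),
# row-based logging keys on the database the changed table belongs to.

def is_logged(binlog_format, do_db, default_db, table_db):
    if binlog_format == 'STATEMENT':
        return default_db == do_db    # only the USE database matters
    if binlog_format == 'ROW':
        return table_db == do_db      # only the table's own database matters
    raise ValueError('this illustration covers STATEMENT and ROW only')

# USE prices; UPDATE sales.february SET amount=amount+100;
# with --binlog-do-db=sales
print(is_logged('STATEMENT', 'sales', 'prices', 'sales'))  # False: not logged
print(is_logged('ROW', 'sales', 'prices', 'sales'))        # True: logged
```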




//===
http://www.percona.com/blog/2009/05/14/why-mysqls-binlog-do-db-option-is-dangerous/

"""...
... filtering is not based on the contents of the query — it is based on what database you USE.

The safer alternative is to configure filters on the slave, with options that actually operate on the tables mentioned in the query itself. These are replicate-wild-* options. For example, the safer way to avoid replicating data in the garbage database is to configure replicate-wild-ignore-table=garbage.%.


... there are flow charts at http://dev.mysql.com/doc/refman/5.1/en/replication-rules-db-options.html


..."""

--> http://dev.mysql.com/doc/refman/5.5/en/replication-rules-db-options.html

2015年4月5日 星期日

rds mysql export data



*** For more information about Amazon RDS data transfer pricing, go to Amazon Relational Database Service Pricing.


http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/MySQL.Procedural.Exporting.NonRDSRepl.html

3 mysql instances are involved:
* source instance [master]
* middle instance [slave, read replica as proxy]
* dest instance [slave of slave, also a read replica]

why is the middle instance needed ?


//=== export data to external mysql db
* source instance: The RDS MySQL DB instance must be running version 5.6.13 or later.
* dest instance: The MySQL instance external to Amazon RDS must be running the same version as the Amazon RDS instance, or higher.


* source rds instance is configured as master ["replication source"]

* the external mysql db instance is configured as slave ["read replica"]
[Replication Slave Configuration]

Configure ingress(inbound) rules on the system running the "source instance" to allow the "dest instance" to connect during replication.


After replication on the dest instance has caught up with the source instance,
run the mysql> STOP SLAVE; command to terminate replication.



*** Use the mysql.rds_set_configuration stored procedure on the "source instance"
to set the binary log retention period long enough that the binary logs are not purged during the export.
For more information, see Accessing MySQL 5.6 Binary Logs.

*** To further ensure that the binary logs of the source instance are not purged,
create an Amazon RDS Read Replica ["the middle instance"] from the source instance. For more information, see Creating a Read Replica.

[???]
After the Amazon RDS Read Replica ["the middle instance"] has been created,
call the "mysql.rds_stop_replication" stored procedure to stop the replication process.
The source instance will no longer purge its binary log files,
so they will be available for the replication process.




//=== create a read replica
http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html#USER_ReadRepl.Create

You can create a Read Replica from an existing MySQL or PostgreSQL DB instance using the
* AWS Management Console,
* CLI, or
* API


* You create a Read Replica by specifying the SourceDBInstanceIdentifier

* when you initiate the creation of a Read Replica,
Amazon RDS takes a DB snapshot of your source DB instance and begins replication.
--> you will experience a brief I/O suspension on your source DB instance as the DB snapshot occurs.
The I/O suspension typically lasts about one minute and can be avoided
if the source DB instance is a Multi-AZ deployment


An active, long-running transaction can slow the process of creating the Read Replica,
so wait for long-running transactions to complete before creating a Read Replica.


*** When creating a Read Replica, there are a few things to consider.

* First, you must enable automatic backups on the "source instance" by setting
the backup retention period to a value other than 0.

This requirement also applies to a Read Replica that is the source DB instance for another Read Replica.

For MySQL DB instances, automatic backups are supported only for Read Replicas running MySQL 5.6
but not for MySQL versions 5.1 or 5.5.

??? To enable automatic backups on the "Read Replica" ( the middle instance? ),
first create the Read Replica, then modify the Read Replica to enable automatic backups.


* To create a Read Replica from a source MySQL instance
Use the rds-create-db-instance-read-replica command.

-s === --source-db-instance-identifier

http://docs.aws.amazon.com/AmazonRDS/latest/CommandLineReference//CLIReference-cmd-CreateDBInstanceReadReplica.html
[ex]
PROMPT> rds-create-db-instance-read-replica SimCoProd01Replica01 -s SimcoProd01

DBINSTANCE simcoprod01replica01 db.m1.large mysql 10 master creating
us-east-1b 0 n 5.1.50 simcoprod01
SECGROUP default active
PARAMGRP default.mysql5.1 in-sync



[ex]
PROMPT> rds-create-db-instance-read-replica SimCoProd01Replica02 -s SimCoProd01 -z us-east-1a

DBINSTANCE simcoprod01replica02 db.m1.large mysql 10 master creating
us-east-1a 0 n 5.1.50 simcoprod01
SECGROUP default active
PARAMGRP default.mysql5.1 in-sync



//===
You can create a MySQL Read Replica in a different region than the source DB instance to improve your
disaster recovery capabilities, scale read operations into a region closer to end users,
or make it easier to migrate from a data center in one region to a data center in another region


*** run the create Read Replica command in the region where you want the Read Replica,
and specify the Amazon Resource Name (ARN) of the source DB instance.



//=== Use the mysqldump utility to create a snapshot,
which copies the data from Amazon RDS to your local client computer.
Then run another utility to load the data into the MySQL instance running external to RDS.

*** to run mysqldump on a client, and then pipe the dump into the mysql client utility, which loads the data into the external MySQL instance.

mysqldump -h <RDS instance endpoint> -u <user> -p<password> --port=3306 --single-transaction --routines --triggers --databases <database> <database2> --compress --compact | mysql -h <MySQL host> -u <master user> -p<password> --port 3306






//=== create read replica for MyISAM instance

If your MySQL DB instance uses a non-transactional engine such as MyISAM,
you will need to perform the following steps to successfully set up your Read Replica.

These steps are required to ensure that the Read Replica has a consistent copy of your data.
Note that these steps are not required if all of your tables use a transactional engine such as InnoDB.

* Stop all DML and DDL operations on non-transactional tables in the source DB instance and
wait for them to complete. SELECT statements can continue running.

* Flush and lock the tables in the source DB instance.

* Create the Read Replica using one of the methods in the following sections.

* Check the progress of the Read Replica creation using, for example, the DescribeDBInstances API operation.
Once the Read Replica is available, unlock the tables of the source DB instance and resume normal database operations.






2015年4月4日 星期六

mysql sql-mode


//=== http://dev.mysql.com/doc/refman/5.1/en/replication-features-sql-mode.html

* For replication, you should always use the same server SQL mode on the master and on the slave.

* The default SQL mode is empty (no modes set).

* to set the SQL-mode when starting mysqld,
use the --sql-mode="mode1, mode2, ..." option on the command line, or
sql-mode="mode1, mode2, mode3, ..." in an option file such as my.cnf (Unix operating systems) or my.ini (Windows).

* To clear the SQL-mode explicitly, set it to an empty string using --sql-mode="" on the command line, or
sql-mode="" in an option file.


* To change the SQL mode at runtime, set the global or session sql_mode

SET GLOBAL sql_mode = 'modes';
SET SESSION sql_mode = 'modes';


* To determine the current global or session sql_mode value,

SELECT @@GLOBAL.sql_mode;
SELECT @@SESSION.sql_mode;


***
* Ansi
* STRICT_TRANS_TABLES
* Traditional


“strict mode”
--> means a mode with either or both STRICT_TRANS_TABLES or STRICT_ALL_TABLES enabled.
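Applying this definition to a sql_mode value as returned by SELECT @@GLOBAL.sql_mode (a comma-separated list) is a small set test; a sketch:

```python
# Sketch: parse a sql_mode value and apply the "strict mode" definition
# above: either STRICT_TRANS_TABLES or STRICT_ALL_TABLES enabled.

def parse_sql_mode(value):
    return {m.strip().upper() for m in value.split(',') if m.strip()}

def is_strict(value):
    return bool(parse_sql_mode(value) & {'STRICT_TRANS_TABLES',
                                         'STRICT_ALL_TABLES'})

print(is_strict('STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION'))  # True
print(is_strict(''))   # False: the default SQL mode is empty, hence not strict
```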


//=== http://dev.mysql.com/doc/refman/5.1/en/sql-mode.html#sql-mode-setting
Full List of SQL Modes

...


mysql binlog modes (formats)



* 3 modes(formats) : row, statement, mixed
* statement-based or row-based replication or mixed


//=== http://dev.mysql.com/doc/refman/5.1/en/binary-log-setting.html

* Globally change binlog format
The logging format also can be switched at runtime. To specify the format globally for all clients,
set the global value of the binlog_format system variable:

mysql> SET GLOBAL binlog_format = 'STATEMENT';
mysql> SET GLOBAL binlog_format = 'ROW';
mysql> SET GLOBAL binlog_format = 'MIXED';


* per-session change binlog format
An individual client can control the logging format for its own statements
by setting the session value of binlog_format:

mysql> SET SESSION binlog_format = 'STATEMENT';
mysql> SET SESSION binlog_format = 'ROW';
mysql> SET SESSION binlog_format = 'MIXED';



*** Each MySQL Server can set its own and only its own binary logging format
(no matter whether binlog_format is set with global or session scope).
--> changing the logging format on a replication master does not cause a slave to change
its logging format to match

--> Error executing row event: 'Cannot execute statement: impossible to write to binary log since statement is in row format and BINLOG_FORMAT = STATEMENT.'


*** If you are using InnoDB tables and the transaction isolation level is READ COMMITTED or READ UNCOMMITTED,
only row-based logging can be used.


//=== There are exceptions when you cannot switch the replication format at runtime:

* From within a stored function or a trigger

* If the NDBCLUSTER storage engine is enabled

* If the session is currently in row-based replication mode and has open temporary tables




2015年4月2日 星期四

mysql comment symbols


mysql comment symbols
#
--
/* */


//=== https://dev.mysql.com/doc/refman/5.1/en/comments.html
"""...
9.6 Comment Syntax

...
From a “#” character to the end of the line.

From a “-- ” sequence to the end of the line. In MySQL,
the “-- ” (double-dash) comment style requires the second dash to be followed
by at least one whitespace or control character (such as a space, tab, newline, and so on).
This syntax differs slightly from standard SQL comment syntax, ...

From a /* sequence to the following */ sequence, as in the C programming language. ...

The following example demonstrates all three comment styles:

mysql> SELECT 1+1; # This comment continues to the end of line
mysql> SELECT 1+1; -- This comment continues to the end of line
mysql> SELECT 1 /* this is an in-line comment */ + 1;
mysql> SELECT 1+
/*
this is a
multiple-line comment
*/
1;
..."""
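A simplified sketch of the three styles as a comment stripper; it deliberately ignores string literals and quoted identifiers, so it illustrates the rules rather than replacing a real SQL tokenizer:

```python
# Sketch of the three MySQL comment styles: '#' to end of line,
# '-- ' (dash-dash plus whitespace) to end of line, and C-style /* ... */.
# String literals are NOT handled; this is a demonstration of the rules only.

import re

def strip_mysql_comments(sql):
    sql = re.sub(r'/\*.*?\*/', '', sql, flags=re.S)   # /* ... */ may span lines
    sql = re.sub(r'#[^\n]*', '', sql)                 # '#' to end of line
    sql = re.sub(r'--[ \t][^\n]*', '', sql)           # '--' needs a following space/tab
    return sql

print(strip_mysql_comments("SELECT 1+1; # trailing"))      # "SELECT 1+1; "
print(strip_mysql_comments("SELECT 1 /* inline */ + 1;"))  # "SELECT 1  + 1;"
```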



2015年4月1日 星期三

mysql installation on ubuntu


//=== https://www.linode.com/docs/databases/mysql/using-mysql-relational-databases-on-ubuntu-14-04-lts-trusty-tahr

$ sudo apt-get update
$ sudo apt-get install mysql-server
[both mysql client and server will be installed]


*** After installing MySQL, it’s recommended that you run mysql_secure_installation to help secure MySQL.

$ mysql_secure_installation

Change the root password? [Y/n] n
... skipping.

Remove anonymous users? [Y/n] Y
... Success!

Normally, root should only be allowed to connect from 'localhost'.
...
Disallow root login remotely? [Y/n] Y
... Success!

...
Remove test database and access to it? [Y/n] Y
- Dropping test database...
ERROR 1008 (HY000) at line 1: Can't drop database 'test'; database doesn't exist
... Failed! Not critical, keep moving...
- Removing privileges on test database...
... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] Y
... Success!

Cleaning up...

//=== to sum up what mysql_secure_installation has done
0. change root password [must if empty passwd is chosen during installation ]
1. remove anonymous users
2. disallow root login from non-localhost machine
3. remove test database
4. flush privileges

then restart the mysql server
$ sudo /etc/init.d/mysql restart


*** if you’ve forgotten your root password, use dpkg-reconfigure to change that password:
$ sudo dpkg-reconfigure mysql-server-5.5


chk mysql server version


//=== http://stackoverflow.com/questions/8987679/how-to-know-the-version-of-the-mysql-database

[Q] how to chk mysql server version on ubuntu?

$ mysql --version
$ dpkg -l 'mysql-server*'


//=== after login mysql server

mysql> SELECT VERSION();
mysql> SHOW VARIABLES LIKE "%version%";


* RDS
mysql> show variables like '%version%';

+-------------------------+------------------------------+
| Variable_name | Value |
+-------------------------+------------------------------+
| innodb_version | 5.6.19 |
| protocol_version | 10 |
| slave_type_conversions | |
| version | 5.6.19-log |
| version_comment | MySQL Community Server (GPL) |
| version_compile_machine | x86_64 |
| version_compile_os | Linux |
+-------------------------+------------------------------+
7 rows in set (0.00 sec)



*ec2
mysql> SHOW VARIABLES LIKE "%version%";

+-------------------------+-------------------------+
| Variable_name | Value |
+-------------------------+-------------------------+
| innodb_version | 5.5.41 |
| protocol_version | 10 |
| slave_type_conversions | |
| version | 5.5.41-0ubuntu0.14.04.1 |
| version_comment | (Ubuntu) |
| version_compile_machine | x86_64 |
| version_compile_os | debian-linux-gnu |
+-------------------------+-------------------------+
7 rows in set (0.00 sec)




mysql usage examples


$ mysql [exec mysql client]
$ mysql db_name

[ERROR 2002 (HY000)] Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock'
-->
$ mysql -uu13 -p [u13 is just an example username]
$ mysql -uu13 -p db_name

$ mysql -u u13 -pu13_passwd db_name
[no space allowed between -p and u13_passwd]

$ mysql --user=u13 --password=u13_passwd db_name

* chk if mysql is running
$ mysqladmin -u root -p status
$ mysqladmin -u root -p -h localhost status

*** if still error,
--> try using the local ip address (127.0.0.1) instead of 'localhost'

$ mysqladmin -u root -p -h 127.0.0.1 status

$ mysql -h 127.0.0.1 -uroot -p -t < employees.sql [import from file employees.sql to ?]

"""...
--table, -t
Display output in table format. This is the default for interactive use,
but can be used to produce table output in batch mode.
..."""

$ mysql -u u13 -pu13_passwd -h mysql_server_host db_name

[err] mysql -h xxx.amazonaws.com:3306
[correct] mysql -h xxx.amazonaws.com -P 3306


//=== after login mysql server
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| employees |
| mysql |
| performance_schema |
+--------------------+
4 rows in set (0.00 sec)

mysql> use employees
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> show tables;
+---------------------+
| Tables_in_employees |
+---------------------+
| departments |
| dept_emp |
| dept_manager |
| employees |
| salaries |
| titles |
+---------------------+
6 rows in set (0.00 sec)

mysql> exit
~$