MySQL error 1449: The user specified as a definer does not exist

I faced this error after exporting a database from one server and importing it into another, where the definer user didn't exist. So I changed the incorrect definer to the right one, as shown below.

Reference: http://stackoverflow.com/questions/10169960/mysql-error-1449-the-user-specified-as-a-definer-does-not-exist
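Before rewriting anything, you can check which definer each view currently has. A quick check against information_schema (your-database-name is the same placeholder used below):

SELECT table_name, definer
FROM information_schema.views
WHERE table_schema='your-database-name';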

Execute this query to generate the list of ALTER statements to run:


SELECT CONCAT("ALTER DEFINER=`youruser`@`host` VIEW ",
table_name, " AS ", view_definition, ";")
FROM information_schema.views
WHERE table_schema='your-database-name';

It produces a list of statements like the ones below.

ALTER DEFINER='jessica'@'%' VIEW vw_audit_log AS select `a`.`ID` AS `id`,`u`.`USER_NAME` AS `user_name`,`a`.`LOG_TYPE` AS `log_type`,`a`.`LOG_TIME` AS `log_time`,`a`.`MESSAGE` AS `message`,`a`.`STATUS` AS `status` from (`your-database-name`.`user_info` `u` join `your-database-name`.`audit_log` `a`) where (`u`.`ID` = `a`.`USER_ID`) order by `a`.`ID` desc;

ALTER DEFINER='jessica'@'%' VIEW vw_user_role AS select `ur`.`NAME` AS `ROLE_NAME`,`ur`.`EMAIL_PERMISSION` AS `EMAIL_PERMISSION`,`urm`.`user_id` AS `USER_ID`,`urm`.`role_id` AS `ROLE_ID` from (`your-database-name`.`user_role` `ur` join `your-database-name`.`user_role_mapping` `urm`) where (`ur`.`ID` = `urm`.`role_id`);

ALTER DEFINER='jessica'@'%' VIEW vw_user_role_mapping AS select `ur`.`ROLE_NAME` AS `ROLE_NAME`,`ur`.`EMAIL_PERMISSION` AS `EMAIL_PERMISSION`,`ur`.`USER_ID` AS `USER_ID`,`ur`.`ROLE_ID` AS `ROLE_ID`,`ui`.`USER_NAME` AS `USER_NAME`,`ui`.`PASSWORD` AS `PASSWORD`,`ui`.`ENABLED` AS `ENABLED` from (`your-database-name`.`vw_user_role` `ur` join `your-database-name`.`user_info` `ui`) where (`ur`.`USER_ID` = `ui`.`ID`);

After executing these queries, the problem was resolved.
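An alternative fix, if you would rather keep the original definer than rewrite the views, is to create the missing user on the new server. A minimal sketch, assuming the missing definer was 'jessica'@'%' (the password and grant level here are placeholders to adjust):

CREATE USER 'jessica'@'%' IDENTIFIED BY 'some-password';
GRANT ALL ON `your-database-name`.* TO 'jessica'@'%';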


Export and Import with mysqldump

Here is the syntax.

To export the database –


C:\xampp7\mysql\bin>mysqldump.exe --databases mydatabase --user myuser --password >mydatabase.dump.sql
Enter password: ************
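Note that mysqldump does not include stored procedures, functions, or scheduled events by default (triggers it does). If the database has them, the standard --routines and --events options add them to the same dump:

C:\xampp7\mysql\bin>mysqldump.exe --databases mydatabase --routines --events --user myuser --password >mydatabase.dump.sql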

To import the dump into the database –

First, create the database on the other server:


MariaDB [(none)]> create database mydatabase;
Query OK, 1 row affected (0.00 sec)
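If the source database used a specific character set, the new database can instead be created with that character set declared, so the imported data matches. A sketch, with utf8mb4 as an assumed example:

MariaDB [(none)]> create database mydatabase character set utf8mb4;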

From the command prompt, let's import the dump –


D:\Softwares\xampp\mysql\bin>mysql -uroot -p mydatabase <D:\gandhari\documents\projects\jessica\mydatabase.dump.sql
Enter password:
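Alternatively, the dump can be loaded from inside the mysql client with the source command, using the same file path as above:

MariaDB [(none)]> use mydatabase;
MariaDB [mydatabase]> source D:\gandhari\documents\projects\jessica\mydatabase.dump.sql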


Automation & Analysis of RSS & ATOM newsfeeds using the Hadoop Ecosystem

This project was carried out to extract, analyse, and display RSS and ATOM news feeds.

The final goals of this project are given below.
1. Provide an automated workflow for feed extraction and analysis
2. Provide a browser-based user interface for analysis and reporting

Along with these main goals, the framework has been designed to be scalable, so that additional feed mechanisms (such as social media streaming) and machine learning analytics can be opted in later.

The report can be downloaded from the location given below.

automation-analysis-of-rss-atom-newsfeeds-ramaiah-murugapandian


Apache Access Log analysis with Apache Pig

So far I have documented some of the key functions in Apache Pig. Today, let's write a simple Pig Latin script to parse and analyse Apache's access log. Here you go.

$ cat accessLogETL.pig
-- demo script javashine.wordpress.com
-- Load the access log, splitting each line on spaces. Splitting this way breaks
-- the timestamp "[22/Jan/2017:17:51:34 +0000]" into two fields (reqTime and
-- reqTimeZone) and the user-agent string into the several userAgent* fields.
access = LOAD '/user/cloudera/pig/access.log' USING PigStorage(' ') AS (hostIp:chararray, clientId:chararray, userId:chararray, reqTime:chararray, reqTimeZone:chararray, reqMethod:chararray, reqLine:chararray, reqProt:chararray, statusCode:chararray, respLength:int, referrer:chararray, userAgentMozilla:chararray, userAgentPf1:chararray, userAgentPf2:chararray, userAgentPf3:chararray, userAgentRender:chararray, userAgentBrowser:chararray);

-- Output 1: distinct client IPs that accessed the server
hostIpList = FOREACH access GENERATE hostIp;
hostIpList = DISTINCT hostIpList;
STORE hostIpList INTO '/user/cloudera/pig/hostIpList' USING PigStorage('\t');

-- Output 2: distinct (IP, time, timezone, URL) access tuples
hostIpUrlList = FOREACH access GENERATE (hostIp,reqTime,reqTimeZone,reqLine);
hostIpUrlList = DISTINCT hostIpUrlList;
STORE hostIpUrlList INTO '/user/cloudera/pig/hostIpUrlList' USING PigStorage('\t');

-- Output 3: total response bytes per client IP
hostIpBandwidthList = FOREACH access GENERATE hostIp, respLength;
groupByIp = GROUP hostIpBandwidthList BY hostIp;
bandwidthByIp = FOREACH groupByIp GENERATE hostIpBandwidthList.hostIp, SUM(hostIpBandwidthList.respLength);
STORE bandwidthByIp INTO '/user/cloudera/pig/bandwidthByIp' USING PigStorage('\t');

$ pig -x mapreduce accessLogETL.pig
Job Stats (time in seconds):
JobId   Maps    Reduces MaxMapTime      MinMapTIme      AvgMapTime      MedianMapTime   MaxReduceTime   MinReduceTime   AvgReduceTime   MedianReducetime        Alias   Feature Outputs
job_1485688219066_0027  1       1       12      12      12      12      8       8       8       8       access,hostIpList,hostIpUrlList DISTINCT,MULTI_QUERY    /user/cloudera/pig/hostIpList,/user/cloudera/pig/hostIpUrlList,
job_1485688219066_0028  1       1       7       7       7       7       9       9       9       9       bandwidthByIp,groupByIp,hostIpBandwidthList     GROUP_BY        /user/cloudera/pig/bandwidthByIp,

Input(s):
Successfully read 470749 records (59020755 bytes) from: "/user/cloudera/pig/access.log"

Output(s):
Successfully stored 877 records (12342 bytes) in: "/user/cloudera/pig/hostIpList"
Successfully stored 449192 records (31075055 bytes) in: "/user/cloudera/pig/hostIpUrlList"
Successfully stored 877 records (6729928 bytes) in: "/user/cloudera/pig/bandwidthByIp"

Counters:
Total records written : 450946
Total bytes written : 37817325
Spillable Memory Manager spill count : 7
Total bags proactively spilled: 3
Total records proactively spilled: 213378
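A side note: while developing a script like this, Pig's local mode runs it without a Hadoop cluster; the load and store paths then resolve against the local filesystem instead of HDFS.

$ pig -x local accessLogETL.pig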

Let’s see our results now.

The above script yields three outputs.
The first is the list of client IPs that accessed the web server.

$ hadoop fs -ls /user/cloudera/pig/hostIpList
Found 2 items
-rw-r--r--   1 cloudera cloudera          0 2017-01-31 12:43 /user/cloudera/pig/hostIpList/_SUCCESS
-rw-r--r--   1 cloudera cloudera      12342 2017-01-31 12:43 /user/cloudera/pig/hostIpList/part-r-00000
[cloudera@quickstart pig]$ hadoop fs -cat /user/cloudera/pig/hostIpList/part-r-00000
::1
10.1.1.5
107.21.1.8
14.134.7.6
37.48.94.6
46.4.90.68
46.4.90.86

The second output gives us the list of client IPs, their access time, and the URL accessed.

$ hadoop fs -ls /user/cloudera/pig/hostIpUrlList
Found 2 items
-rw-r--r--   1 cloudera cloudera          0 2017-01-31 12:43 /user/cloudera/pig/hostIpUrlList/_SUCCESS
-rw-r--r--   1 cloudera cloudera   31075055 2017-01-31 12:43 /user/cloudera/pig/hostIpUrlList/part-r-00000
[cloudera@quickstart pig]$ hadoop fs -cat /user/cloudera/pig/hostIpUrlList/part-r-00000
(10.1.1.5,[22/Jan/2017:17:51:34,+0000],/egcrm)
(10.1.1.5,[22/Jan/2017:17:51:34,+0000],/egcrm2/)
(10.1.1.5,[22/Jan/2017:17:51:34,+0000],/egcrm/helloWorld.action)

And finally, the bandwidth consumed by each client IP.

$ hadoop fs -ls /user/cloudera/pig/bandwidthByIp
Found 2 items
-rw-r--r--   1 cloudera cloudera          0 2017-01-31 12:44 /user/cloudera/pig/bandwidthByIp/_SUCCESS
-rw-r--r--   1 cloudera cloudera    6729928 2017-01-31 12:44 /user/cloudera/pig/bandwidthByIp/part-r-00000
$ hadoop fs -cat /user/cloudera/pig/bandwidthByIp/part-r-00000
{(193.138.219.245)}     1313
{(193.138.219.250),(193.138.219.250),(193.138.219.250)} 3939
{(195.154.181.113)}     496
{(195.154.181.168)}     1026
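Each IP in this output is wrapped in a bag, because the script projects hostIpBandwidthList.hostIp out of the grouped relation. If a plain IP per row is preferred, a small variant projects the group key instead (totalBytes is just an illustrative alias):

bandwidthByIp = FOREACH groupByIp GENERATE group AS hostIp, SUM(hostIpBandwidthList.respLength) AS totalBytes;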


Oozie job failure – Error: E0501 : E0501: Could not perform authorization operation, User: hadoop is not allowed to impersonate hadoop

Hi hadoopers,

I'm sorry for pausing the tutorials; I need to complete a project first. Tutorial posts will resume after that.

Today, I tried to form a workflow with Oozie. Here is how I executed it.


hadoop@gandhari:/opt/hadoop-2.6.4/workspace/oozie$ ../../oozie/bin/oozie job --oozie http://gandhari:11000/oozie/ -Doozie.wf.application.path=hdfs://gandhari:9000/user/hadoop/feed/myflow.xml -dryrun

Unfortunately it failed with the following error.


Error: E0501 : E0501: Could not perform authorization operation, User: hadoop is not allowed to impersonate hadoop


hadoop is my OS user, and it is also the user running the Oozie daemon. core-site.xml should contain the following entries inside its <configuration> element, allowing this user to impersonate other users.


<property>
  <name>hadoop.proxyuser.hadoop.groups</name>
  <value>*</value>
</property>

<property>
  <name>hadoop.proxyuser.hadoop.hosts</name>
  <value>gandhari</value>
</property>

hadoop – OS user name

gandhari – hostname
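After editing core-site.xml, the NameNode has to pick up the new proxyuser settings. A full restart works; on a running cluster the proxyuser configuration can also be refreshed in place with the standard dfsadmin command:

hadoop@gandhari:/opt/hadoop-2.6.4$ bin/hdfs dfsadmin -refreshSuperUserGroupsConfiguration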