MySQL error 1449: The user specified as a definer does not exist

I faced this error after exporting a database from one server to another, because the user specified as the definer of the views did not exist on the target server. So I changed the incorrect definer to the right one, as given below.

With reference to http://stackoverflow.com/questions/10169960/mysql-error-1449-the-user-specified-as-a-definer-does-not-exist

Execute this query to generate the list of ALTER statements to run.


SELECT CONCAT("ALTER DEFINER=`youruser`@`host` VIEW ",
table_name, " AS ", view_definition, ";")
FROM information_schema.views
WHERE table_schema='your-database-name';

It gives a list of statements like the ones below.

ALTER DEFINER='jessica'@'%' VIEW vw_audit_log AS select `a`.`ID` AS `id`,`u`.`USER_NAME` AS `user_name`,`a`.`LOG_TYPE` AS `log_type`,`a`.`LOG_TIME` AS `log_time`,`a`.`MESSAGE` AS `message`,`a`.`STATUS` AS `status` from (`your-database-name`.`user_info` `u` join `your-database-name`.`audit_log` `a`) where (`u`.`ID` = `a`.`USER_ID`) order by `a`.`ID` desc;

ALTER DEFINER='jessica'@'%' VIEW vw_user_role AS select `ur`.`NAME` AS `ROLE_NAME`,`ur`.`EMAIL_PERMISSION` AS `EMAIL_PERMISSION`,`urm`.`user_id` AS `USER_ID`,`urm`.`role_id` AS `ROLE_ID` from (`your-database-name`.`user_role` `ur` join `your-database-name`.`user_role_mapping` `urm`) where (`ur`.`ID` = `urm`.`role_id`);

ALTER DEFINER='jessica'@'%' VIEW vw_user_role_mapping AS select `ur`.`ROLE_NAME` AS `ROLE_NAME`,`ur`.`EMAIL_PERMISSION` AS `EMAIL_PERMISSION`,`ur`.`USER_ID` AS `USER_ID`,`ur`.`ROLE_ID` AS `ROLE_ID`,`ui`.`USER_NAME` AS `USER_NAME`,`ui`.`PASSWORD` AS `PASSWORD`,`ui`.`ENABLED` AS `ENABLED` from (`your-database-name`.`vw_user_role` `ur` join `your-database-name`.`user_info` `ui`) where (`ur`.`USER_ID` = `ui`.`ID`);

After executing these queries, the problem was resolved.
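Alternatively, if you want to keep the original definer, you can create the missing user on the new server instead. A sketch of that approach, where 'jessica'@'%' matches the definer above and the password and GRANT are placeholders you should adapt:

-- Recreate the definer that exists on the old server but not on the new one
CREATE USER 'jessica'@'%' IDENTIFIED BY 'choose-a-password';

-- Grant whatever privileges the views actually need (placeholder grant)
GRANT ALL PRIVILEGES ON `your-database-name`.* TO 'jessica'@'%';
FLUSH PRIVILEGES;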


Oozie job failure – E0501: Could not perform authorization operation, User: hadoop is not allowed to impersonate hadoop

Hi hadoopers,

I’m sorry for pausing the tutorials; I need to complete a project first. Tutorial posts will resume after this.

Today, I tried to run a workflow with Oozie. Here is how I executed it.


hadoop@gandhari:/opt/hadoop-2.6.4/workspace/oozie$ ../../oozie/bin/oozie job --oozie http://gandhari:11000/oozie/ -Doozie.wf.application.path=hdfs://gandhari:9000/user/hadoop/feed/myflow.xml -dryrun

Unfortunately, it failed with the following error.


Error: E0501 : E0501: Could not perform authorization operation, User: hadoop is not allowed to impersonate hadoop


hadoop is my OS user, and it is also the user running the Oozie daemon. core-site.xml should contain the following entries, inside the <configuration> element, so that this user is allowed to impersonate others.


<property>
<name>hadoop.proxyuser.hadoop.groups</name>
<value>*</value>
</property>

<property>
<name>hadoop.proxyuser.hadoop.hosts</name>
<value>gandhari</value>
</property>

hadoop – OS user name

gandhari – hostname
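Note that the running daemons will not pick up the change to core-site.xml on their own: restart the HDFS and YARN daemons, or refresh the proxy-user settings on a running cluster. A sketch, assuming the hdfs and yarn commands are on the PATH:

# reload the hadoop.proxyuser.* settings without a full restart
hdfs dfsadmin -refreshSuperUserGroupsAndProxyUsers
yarn rmadmin -refreshSuperUserGroupsAndProxyUsers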


java.io.IOException: Filesystem closed

Hi hadoopers,

Here is the exception that ruined my Saturday night and failed my Mapper task.

  • The Mapper reads the input lines one by one and tokenizes them.
  • The last token contains the path of a file in HDFS.
  • I need to open that file and read its contents.

For the above task, this is the flow I followed in the Mapper: get a FileSystem handle, open the file, read the contents, and close everything at the end.


My Mapper failed with the following exception.

org.apache.hadoop.mapred.MapTask: Ignoring exception during close for org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader@1cb3ec38
java.io.IOException: Filesystem closed
at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:689)
at org.apache.hadoop.hdfs.DFSInputStream.close(DFSInputStream.java:617)

The FileSystem object is shared: FileSystem.get() returns a cached instance, and closing it also closes the Mapper's input, which breaks the whole flow. So I closed only the file stream and did not close the FileSystem explicitly, which resolved the problem.
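In code, the working pattern looks roughly like this. It is a minimal sketch, not my exact Mapper: the class name, the tab delimiter, and the output record are made up for illustration.

import java.io.IOException;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class SideFileMapper extends Mapper<LongWritable, Text, Text, Text> {

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Tokenize the input line; the last token is an HDFS path
        String[] tokens = value.toString().split("\t");
        Path sideFile = new Path(tokens[tokens.length - 1]);

        // FileSystem.get() returns a shared, cached instance; the framework
        // reads the Mapper's input through the same instance, so never call
        // fs.close() here -- that is what causes "Filesystem closed".
        FileSystem fs = FileSystem.get(context.getConfiguration());

        FSDataInputStream in = null;
        try {
            in = fs.open(sideFile);
            // ... read the contents of the side file here ...
        } finally {
            // Close only the stream, not the FileSystem
            IOUtils.closeStream(in);
        }

        // Placeholder output record
        context.write(new Text(tokens[0]), value);
    }
}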

Ref: https://github.com/linkedin/gobblin/issues/1219


ERROR: Can’t get master address from ZooKeeper; znode data == null

I was setting up HBase and ZooKeeper for a lab exercise. After configuring them, I ran the status command in the HBase shell and ended up with the error given below.

ERROR: Can't get master address from ZooKeeper; znode data == null

The HDFS and YARN daemons must be running for the HBase shell to return valid output. The error was resolved after starting them.
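For reference, the steps were roughly the following. This is a sketch assuming the standard Hadoop sbin scripts and the hbase command are on the PATH:

# start the HDFS and YARN daemons first
start-dfs.sh
start-yarn.sh

# then retry from the HBase shell
hbase shell
# hbase(main):001:0> status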


Error while starting Hive – Relative path in absolute URI: ${system:java.io.tmpdir}/${system:user.name}

I got this error while starting Hive for the first time. With reference to "java.net.URISyntaxException when starting HIVE" and "AdminManual Configuration", I made the following changes to hive-site.xml, replacing the ${system:java.io.tmpdir}/${system:user.name} placeholders with plain paths, to get it working.


<property>
<name>hive.exec.local.scratchdir</name>
<value>/tmp</value>
</property>
<property>
<name>hive.downloaded.resources.dir</name>
<value>/tmp</value>
</property>
<property>
<name>hive.querylog.location</name>
<value>/tmp</value>
</property>
<property>
<name>hive.server2.logging.operation.log.location</name>
<value>/tmp/operation_logs</value>
</property>
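Alternatively, a commonly suggested workaround is to keep the template values and instead define the system: variables themselves near the top of hive-site.xml so the placeholders can resolve. A sketch, where /tmp/hive/java is only an example location:

<property>
<name>system:java.io.tmpdir</name>
<value>/tmp/hive/java</value>
</property>
<property>
<name>system:user.name</name>
<value>${user.name}</value>
</property>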

Error while Starting Hive – DatastoreDriverNotFoundException

Here is a scary exception that was thrown when I started Hive.

org.datanucleus.store.rdbms.connectionpool.DatastoreDriverNotFoundException: The specified datastore driver ("com.mysql.jdbc.Driver") was not found in the CLASSPATH. Please check your CLASSPATH specification, and the name of the driver.

I had forgotten to copy the MySQL JDBC driver into Hive's lib folder. Here is the command to copy it.

cp /usr/share/java/mysql-connector-java-5.1.38.jar /opt/hadoop/hive/lib/


VirtualBox: The image file is inaccessible and is being ignored

The image file ‘E:\Downloads\ubuntu-16.04-desktop-i386.iso’ is inaccessible and is being ignored. Please select a different image file for the virtual DVD drive.

I had mounted my Ubuntu .iso file on the host. When I started the new, blank VirtualBox VM, I selected the mounted drive as the start-up disk, and booting crashed with the error given above.

To resolve the issue, I unmounted the .iso, removed the virtual optical drive in the VM settings, and then attached the .iso file directly as the DVD drive.
