Search Results

Search found 22413 results on 897 pages for 'train main'.

  • HTML5 or Javascript game engine to develop a browser game

    - by Jack Duluoz
    I would like to start developing an MMO browser game, like Travian or Ogame, probably also involving some more sophisticated graphical features, such as players interacting in real time with a 2D map. My main doubt is what kind of development tools I should use: I have good experience with PHP and MySQL on the server side and with JavaScript (and jQuery) on the client side. Coding everything from scratch would of course be really painful, so I was wondering whether I should use a JavaScript game engine. Are there (possibly free) game engines you would recommend? Are they good enough to develop a big game? Also, I have seen a lot of HTML5 games popping up lately, but I'm not sure whether using HTML5 is a good idea. Would you recommend it? What are the pros and cons of using HTML5? If you'd recommend it, do you have any good links on game development with HTML5? (PS: I know that HTML5 and a JavaScript engine are not mutually exclusive; I just didn't know how to formulate a proper title, since English is not my main language. So please answer addressing the pros and cons of HTML5 and of a game engine separately.)

  • How to set Alpha value from pixel shader in SlimDX Direct3d9

    - by Yashwinder
    I am trying to set the alpha value of a color as color.a = 0.5f in my pixel shader, but it keeps throwing an exception. I can set color.r, color.g and color.b, but setting color.a throws D3DERR_INVALIDCALL: Invalid call (-2005530516). I have just created a Direct3D9 device and assigned my pixel shader to it. My pixel shader code is as below:

        sampler2D ourImage : register(s0);

        float4 main(float2 locationInSource : TEXCOORD) : COLOR
        {
            float4 color = tex2D(ourImage, locationInSource.xy);
            color.a = 0.2;
            return color;
        }

    I am creating my pixel shader as:

        byte[] byteCode = GiveFxFile(transitionEffect.PixelShaderFileName);
        var shaderBytecode = ShaderBytecode.Compile(byteCode, "main", "ps_2_0", ShaderFlags.None);
        var pixelShader = new PixelShader(device, shaderBytecode);
        _device.PixelShader = pixelShader;

    I have initialized my device as:

        var _presentParams = new PresentParameters
        {
            Windowed = _isWindowedMode,
            BackBufferWidth = (int)SystemParameters.PrimaryScreenWidth,
            BackBufferHeight = (int)SystemParameters.PrimaryScreenHeight,
            // Enable Z-buffer. This is not really needed in this sample,
            // but real applications generally use it.
            EnableAutoDepthStencil = true,
            AutoDepthStencilFormat = Format.D16,
            // How to swap the back buffer in front, and how many per screen refresh
            BackBufferCount = 1,
            SwapEffect = SwapEffect.Copy,
            BackBufferFormat = _direct3D.Adapters[0].CurrentDisplayMode.Format,
            PresentationInterval = PresentInterval.Immediate,
            DeviceWindowHandle = _windowHandle
        };
        _device = new Device(_direct3D, 0, DeviceType.Hardware, _windowHandle,
            deviceFlags | CreateFlags.Multithreaded, _presentParams);
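
    Not from the question itself, just a hedged aside: an alpha value written by the shader only has a visible effect once alpha blending is enabled on the device, so the render states are worth checking alongside the shader. A minimal sketch using SlimDX's Direct3D9 render-state API, reusing the question's _device:

        // Assumption: standard "source over" blending; names are from SlimDX.Direct3D9.
        _device.SetRenderState(RenderState.AlphaBlendEnable, true);
        _device.SetRenderState(RenderState.SourceBlend, Blend.SourceAlpha);
        _device.SetRenderState(RenderState.DestinationBlend, Blend.InverseSourceAlpha);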

  • forward all mail on a specified domain to script

    - by David
    Hey all! I run a disposable e-mail service that accepts all incoming mail and forwards it to a PHP script, which stores it in a database for people to view. Until now I have been on shared hosting with cPanel, which makes it easy to pipe e-mails to a script. Now, however, I have my own VPS, and it doesn't have cPanel. How do I pipe e-mails to a script? Further, how do I pipe e-mails sent to any address on certain specified domains to my script? You see, aside from the main domain, there are several alternate domains that people can use if the main domain is blocked, and on each domain I want any address to be usable (xyz@domain, abc@domain, anythingelse@domain). The VPS has Ubuntu 9.04 installed, and I have been experimenting with Postfix, though I can switch to Exim or Sendmail if that is easier.
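
    The question is open-ended, so here is a minimal sketch of the usual Postfix approach, with placeholder names (catchall, domain1.com, /path/to/store.php) standing in for the real ones:

        # /etc/postfix/main.cf -- accept the extra domains as virtual alias domains
        virtual_alias_domains = domain1.com, domain2.com
        virtual_alias_maps = hash:/etc/postfix/virtual

        # /etc/postfix/virtual -- catch-all: any address on the domain goes to one local alias
        @domain1.com  catchall
        @domain2.com  catchall

        # /etc/aliases -- pipe that alias into the PHP script
        catchall: "|/usr/bin/php /path/to/store.php"

        # rebuild the maps and reload
        postmap /etc/postfix/virtual && newaliases && postfix reload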

  • Ext3 partition doesn't mount on Snow Leopard using MacFUSE

    - by Fez
    I'm dual-booting OS X and Ubuntu on a Macbook 4,1. I'm trying to mount my Linux partition in OS X. I installed MacFUSE 2.0.3,2 and fuse-ext2-0.0.7 on Snow Leopard 10.6.5. I created the directory /Volumes/Ubuntu and tried to mount the disk there using the command:

        fuse-ext2 /dev/disk0s4 /Volumes/Ubuntu/

    This is the output I get:

        fuse-ext2: version:'0.0.7', fuse_version:'27' [main (../../fuse-ext2/fuse-ext2.c:324)]
        fuse-ext2: enter [do_probe (../../fuse-ext2/do_probe.c:30)]
        fuse-ext2: Error while trying to open /dev/disk0s4 (rc=13) [do_probe (../../fuse-ext2/do_probe.c:34)]
        fuse-ext2: Probe failed [main (../../fuse-ext2/fuse-ext2.c:340)]

    Any clue what's going wrong? Thanks!
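
    A hedged observation rather than an answer from the thread: rc=13 matches errno 13 (EACCES, permission denied), so the probe is failing before the ext2 filesystem is even touched. The obvious first check is whether the user can read the raw device at all, e.g.:

        # assumption: the failure is a plain permissions problem on /dev/disk0s4
        sudo fuse-ext2 /dev/disk0s4 /Volumes/Ubuntu/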

  • Bind the windows key to Lubuntu start menu

    - by abel
    I am running Lubuntu 11.10. By default the main menu is bound to Alt+F1 (A-F1), which works. Here is the relevant code from ~/.config/openbox/lubuntu-rc.xml:

        <keybind key="A-F1">
          <action name="Execute">
            <command>lxpanelctl menu</command>
          </action>
        </keybind>

    This works: when I hit Alt+F1, I can see the start menu. If I change the keys to "Windows key + M" (W-m), I can pull up the start menu using Win+M:

        <keybind key="W-m">
          <action name="Execute">
            <command>lxpanelctl menu</command>
          </action>
        </keybind>

    However, I cannot bind the start menu to the Windows key alone. If I replace "W-m" with "W", the letter key "W" gets bound to the start menu. If I try "W-", nothing happens. I have tried the "Super" option too, but to no avail. How can I bind the Lubuntu main menu to the Windows key? I have been through some relevant Lubuntu questions, like this one, which tries to do the opposite: How do I unbind Super key from menu in Lubuntu

  • One domain, dedicated SSL IP on whm

    - by Vanja D.
    It's long, but please read carefully. I am trying to install an SSL certificate on my dedicated server with WHM/cPanel, and I have a dedicated IP to use with the SSL certificate. My main domain is example.com (NOT www.example.com), and I already have an account and website running on it. I bought the certificate for the main domain (example.com, without www) and installed it successfully, using the example.com domain, the dedicated IP and the same cPanel user which owns the non-SSL example.com. I double-checked ConfigServer for port 443 being open. RESULT: https://example.com won't open, and an SSL check tool returns an "SSL is not configured on this port (443)" error. I have three questions: where did I go wrong, what did I miss? Is it possible to have one domain on two IPs (one for HTTP, one for HTTPS)? Is it possible to have an SSL host with the same user as the regular one?

  • URL Redirection in Multisite wordpress

    - by Toqeer
    We have a multisite WordPress install containing more than 50 blogs/sub-sites. The base URL of our WordPress site is www.example.com/base-site/, and the sub-sites live under it, like www.example.com/base-site/site1, site2, etc. My question is how to redirect the main site to one of the sub-sites; a simple Redirect 301 is not working. I tried some mod_rewrite solutions, but they don't work either for redirecting the main site to a sub-site. A solution is required to redirect www.example.com/base-site/ to www.example.com/base-site/site1. Solutions used so far, but not working for me: solution1, solution2.
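
    For reference, a minimal sketch of the kind of rule being asked about, assuming Apache with mod_rewrite, an .htaccess at the web root, and that the rule sits before WordPress's own rewrite block (paths are the question's placeholders):

        RewriteEngine On
        # redirect exactly /base-site/ (and nothing deeper) to /base-site/site1/
        RewriteRule ^base-site/?$ /base-site/site1/ [R=301,L]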

  • How do I get vmbuilder to progress?

    - by Avery Chan
    I've used the following command to create my VM:

        vmbuilder kvm ubuntu --verbose --suite=precise --flavour=virtual --arch=amd64 -o \
          --libvirt=qemu:///system --tmpfs=- --ip=192.168.2.1 \
          --part=/home/shared/vm1/vmbuilder.partition --templates=/home/shared/vm1/templates \
          --user=vadmin --name=VM-Administrator --pass=vpass \
          --addpkg=vim-nox --addpkg=unattended-upgrades --addpkg=acpid \
          --firstboot=/home/shared/vm1/boot.sh --mem=256 --hostname=chameleon --bridge=br0

    I've been trying to follow the directions here. My system just outputs this and hangs at the last line:

        2012-06-26 18:08:29,225 INFO : Mounting tmpfs under /tmp/tmpJbf1dZtmpfs
        2012-06-26 18:08:29,234 INFO : Calling hook: preflight_check
        2012-06-26 18:08:29,243 INFO : Calling hook: set_defaults
        2012-06-26 18:08:29,244 INFO : Calling hook: bootstrap

    How can I get vmbuilder to continue the process instead of dying right here? I'm running 12.04.

    EDIT: Adding some additional output details. When I ^C to get out of the hang I see this:

        ^C2012-06-26 18:19:29,622 INFO : Unmounting tmpfs from /tmp/tmpJbf1dZtmpfs
        Traceback (most recent call last):
          File "/usr/bin/vmbuilder", line 24, in <module>
            cli.main()
          File "/usr/lib/python2.7/dist-packages/VMBuilder/contrib/cli.py", line 216, in main
            distro.build_chroot()
          File "/usr/lib/python2.7/dist-packages/VMBuilder/distro.py", line 83, in build_chroot
            self.call_hooks('bootstrap')
          File "/usr/lib/python2.7/dist-packages/VMBuilder/distro.py", line 67, in call_hooks
            call_hooks(self, *args, **kwargs)
          File "/usr/lib/python2.7/dist-packages/VMBuilder/util.py", line 165, in call_hooks
            getattr(context, func, log_no_such_method)(*args, **kwargs)
          File "/usr/lib/python2.7/dist-packages/VMBuilder/plugins/ubuntu/distro.py", line 136, in bootstrap
            self.suite.debootstrap()
          File "/usr/lib/python2.7/dist-packages/VMBuilder/plugins/ubuntu/dapper.py", line 269, in debootstrap
            run_cmd(*cmd, **kwargs)
          File "/usr/lib/python2.7/dist-packages/VMBuilder/util.py", line 113, in run_cmd
            fds = select.select([x.file for x in [mystdout, mystderr] if not x.closed], [], [])[0]

  • Nginx, logrotate and empty files

    - by user37887
    Hi. I have a problem with nginx/logrotate. nginx is logging access to two files (main and data). I have the following crontab setting:

        0 * * * * /usr/sbin/logrotate -f /home/orwell/orwell-setup/bin/logrotate-nginx

    And the file logrotate-nginx has the following content:

        /tmp/data.log {
            rotate 90
            daily
            missingok
            notifempty
            size 1
            sharedscripts
            postrotate
                [ ! -f /tmp/nginx.pid ] || kill -USR1 `cat /tmp/nginx.pid`
                MORE THINGS
            endscript
        }

        /tmp/main.log {
            rotate 90
            daily
            missingok
            notifempty
            size 1
            sharedscripts
            postrotate
                [ ! -f /tmp/nginx.pid ] || kill -USR1 `cat /tmp/nginx.pid`
                MORE THINGS
            endscript
        }

    The rotation is done on the two files, but the problem is that nginx stops logging into them afterwards. Both files are created, but they are empty. Any ideas why nginx stops logging info to both files?
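
    A hedged aside rather than an answer: the postrotate line silently does nothing when /tmp/nginx.pid is missing, so one quick sanity check is whether the pid file really lives there and whether the reopen signal works at all:

        # does the pid file exist where the logrotate script expects it?
        ls -l /tmp/nginx.pid
        # send the reopen signal by hand and watch whether logging resumes
        kill -USR1 `cat /tmp/nginx.pid`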

  • XFCE: active panel buttons when mouse is on screen edge

    - by Dave Vogt
    I'm using XFCE 4.6.1 (Xubuntu distribution) on my laptop and on my main computer; the settings are approximately the same. So far for the intro. What I'm experiencing is that on the laptop, when I move the mouse to the screen edge over the taskbar, the button under the pointer is active. On the main machine, however, with the mouse on the screen edge, the button below it doesn't react at all! Only if I move the pointer a bit towards the center does the hover highlight start and the button become clickable. I've tried changing the panel size, desktop theme and a few other settings, but none seems to cure the problem. Is there something that causes this? (Googling also seems to give no results.)

  • Object model design: collections on classes

    - by Luke Puplett
    Hi all, consider Train.Passengers: what type would you use for Passengers, where passengers are not supposed to be added or removed by the consuming code? I'm using the .NET Framework, so this discussion would suit .NET, but it could apply to a number of modern languages/frameworks. In the .NET Framework, List<T> is not supposed to be publicly exposed. There are Collection<T> and ICollection<T>, and the guidance, which I tend to agree with, is to return the closest concrete type down the inheritance tree, so that would be Collection<T>, since it is already an ICollection<T>. But Collection<T> has read/write semantics, so possibly it should be a ReadOnlyCollection<T>; then again, it's arguably common sense not to alter the contents of a collection that you don't have intimate knowledge about, so is that necessary? And it requires extra work internally and can be a pain with (de)serialization. At the extreme ends I could just return Person[] (since LINQ now provides much of the benefit that previously would have been afforded by a more specified collection) or even build a strongly-typed PersonCollection or ReadOnlyPersonCollection! What do you do? Thanks for your time. Luke
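
    As a point of comparison, a minimal sketch of the ReadOnlyCollection<T> option being weighed above (Train and Person are the question's example types; the field name is invented):

        // using System.Collections.Generic; using System.Collections.ObjectModel;
        public class Train
        {
            private readonly List<Person> _passengers = new List<Person>();

            // Callers can enumerate and index, but cannot add or remove.
            public ReadOnlyCollection<Person> Passengers
            {
                get { return _passengers.AsReadOnly(); }
            }

            // Mutation stays inside the class.
            public void Board(Person p) { _passengers.Add(p); }
        }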

  • VisualSVN Server won't work with AD, will with local accounts

    - by frustrato
    I decided recently to switch VisualSVN from local users to AD users, so we could easily add other employees. I added myself, gave Read/Write privileges across the whole repo, and then tried to log in. Whether I'm using TortoiseSVN or the web client, I get a 403 Forbidden error: "You don't have permission to access /svn/main/ on this server." I Googled a bit, but only found mention of phantom groups in the authz file, and I don't have any of those. Any ideas? It works just fine with local accounts. EDIT: Don't know why I didn't try this earlier, but adding the domain before the username makes it work, i.e. MAIN/Bob. This normally only works when there are conflicting usernames (one local, one in AD), but for whatever reason it works here too. Kinda silly, but I can live with it.

  • Formatting HTML lists using CSS

    - by pwaring
    I'm trying to recreate a list in HTML which has clauses and subclauses like this:

        1. Main Clause
           (a) Sub clause
           (b) Sub clause
        2. Another main clause
           (a) Sub clause

    The problems I'm running into are: if I use the existing HTML elements (ol and li), there doesn't seem to be a list style for (a); I can have a. b. c. or A. B. C., but not (a) (b) (c). If I don't use the existing HTML elements and start using span tags, then if a subclause runs beyond the end of the line, it appears underneath the clause number rather than being indented, like so:

        (a) Very long subclause which goes over
        one line

    when what I really want is the behaviour from lists, which is:

        (a) Very long subclause which goes over
            one line

    Is there any way to get round these two problems at the same time? I'd prefer to use semantic HTML and CSS for styling, but having the clauses spaced correctly is more important than doing things 'the right way'. I may need subsubclauses at some point (i.e. (i), (ii) etc.), so I can't assume that (a) will be the maximum clause depth.
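
    Both problems have a stock CSS answer, so here is a hedged sketch using CSS counters (the class name is invented for the example): generated content supplies the "(a)" marker, and the padding/text-indent pair gives wrapped lines a hanging indent:

        ol.sub {
            counter-reset: sub;
            list-style: none;
        }
        ol.sub > li {
            counter-increment: sub;
            padding-left: 2em;   /* hanging indent for wrapped lines */
            text-indent: -2em;
        }
        ol.sub > li:before {
            content: "(" counter(sub, lower-alpha) ") ";
        }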

  • OpenVPN, Great on Windows, VERY slow on Mac...

    - by Phsion
    Hello, I'm not really an IT pro, but this seemed like the best place to ask this question... I have set up VPN networks in the past, for fun, and everything was great, but now I've set one up for my boss, and while my computers all work great, his Mac machines are almost too slow to work with. It's pretty much vanilla configs all around; anyone have any ideas? It's a TUN routing setup over UDP.

    Back story: my boss travels a lot and wants to be able to access all his files from the road, and is also pretty paranoid about security (even though he knows almost nothing about computers). So I figured a VPN would be the answer. I went with OpenVPN, but there are some other issues. The only ISP we can get in our area besides dial-up is a crappy satellite provider that doesn't offer public IPs unless you're willing to pay, so while the computers and VPN setup are pretty vanilla, the routing and structure are strange to get around this limitation.

    Specs: it's OpenVPN 2, and there are six machines using it (only three actually use it; the rest are my test machines): one Windows 7 laptop, two XP desktops, one OS X 10.5 desktop, one 10.6 desktop and one 10.6 laptop. One XP desktop sits at my house and acts as the server (6Mbs/2Mbs FIOS connection). One XP desktop sits at the office and hosts a webpage that will wake up the main Mac desktop from sleep, and also ping all the machines on the VPN and show their status. The main office Mac (10.6) stays in sleep mode until it gets the wake-on-LAN packet from the office XP machine, and then it auto-connects to the VPN and opens itself up. The reason for all this is that the satellite private-IP crap means I can't directly access the office machines outside the LAN, so everyone connects to my house first and then they talk to each other from there. The wake-on-LAN weirdness is because my boss doesn't want to leave the main Mac on all the time, and making a quick and dirty webpage was the easiest way to send a magic packet from inside the LAN without confusing my boss.

    The VPN uses client config files to give the clients static IPs. The only thing I found in Google was some changes to the VPN MTU settings (down to 1400), but no real help. Oh, and I forgot: all the Windows machines just have OpenVPN start as a service. The Mac laptop uses Tunnelblick (an OpenVPN GUI) and the Mac desktops use OpenVPN in normal command-line mode.

    Server config:

        tun-mtu 1500
        fragment 1450
        mssfix 1450
        management localhost ####
        port ####
        proto udp
        dev tun
        ca #######
        cert #######
        key ######
        dh ######
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        client-config-dir ccd
        route 10.8.0.0 255.255.255.252
        client-to-client
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        status openvpn-status
        log

    Client configs (all are simple variations on this):

        tun-mtu 1500
        fragment 1450
        mssfix 1450
        client
        dev tun
        proto udp
        remote ######## ####
        resolv-retry infinite
        nobind
        persist-key
        presist-tun
        ca #####
        cert #####
        key #####
        ns-cert-type server
        comp-lzo
        verb 3

  • Problem running Qreator on Xubuntu-14.04

    - by Seyed Mohammad
    I installed Qreator using apt-get on Xubuntu-14.04:

        $ sudo apt-get install qreator

    But the application fails to start! When I try to run it via Terminal, the following error messages are printed and the program aborts:

        $ qreator
        ** (qreator:3859): WARNING **: Couldn't connect to accessibility bus: Failed to connect to socket /tmp/dbus-Gh2FPHrMr2: Connection refused
        No handlers could be found for logger "qreator_lib"
        Traceback (most recent call last):
          File "/usr/bin/qreator", line 47, in <module>
            qreator.main()
          File "/usr/lib/python2.7/dist-packages/qreator/__init__.py", line 63, in main
            window = QreatorWindow.QreatorWindow()
          File "/usr/lib/python2.7/dist-packages/qreator_lib/Window.py", line 48, in __new__
            new_object.finish_initializing(builder)
          File "/usr/lib/python2.7/dist-packages/qreator/QreatorWindow.py", line 79, in finish_initializing
            self.init_qr_types()
          File "/usr/lib/python2.7/dist-packages/qreator/QreatorWindow.py", line 135, in init_qr_types
            self.qr_types = [d(self.update_qr_code) for d in QRCodeType.dataformats]
          File "/usr/lib/python2.7/dist-packages/qreator/qrcodes/QRCodeType.py", line 71, in __init__
            self.create_widget()  # pylint: disable=E1101
          File "/usr/lib/python2.7/dist-packages/qreator/qrcodes/QRCodeLocation.py", line 29, in create_widget
            self.widget = QRCodeLocationGtk(self.qr_code_update_func)
          File "/usr/lib/python2.7/dist-packages/qreator/qrcodes/QRCodeLocationGtk.py", line 49, in __init__
            latitude, longitude = get_current_location()
          File "/usr/lib/python2.7/dist-packages/qreator/qrcodes/QRCodeLocationGtk.py", line 109, in get_current_location
            '/org/freedesktop/Geoclue/Providers/Hostip')
          File "/usr/lib/python2.7/dist-packages/dbus/bus.py", line 241, in get_object
            follow_name_owner_changes=follow_name_owner_changes)
          File "/usr/lib/python2.7/dist-packages/dbus/proxies.py", line 248, in __init__
            self._named_service = conn.activate_name_owner(bus_name)
          File "/usr/lib/python2.7/dist-packages/dbus/bus.py", line 180, in activate_name_owner
            self.start_service_by_name(bus_name)
          File "/usr/lib/python2.7/dist-packages/dbus/bus.py", line 278, in start_service_by_name
            'su', (bus_name, flags)))
          File "/usr/lib/python2.7/dist-packages/dbus/connection.py", line 651, in call_blocking
            message, timeout)
        dbus.exceptions.DBusException: org.freedesktop.DBus.Error.ServiceUnknown: The name org.freedesktop.Geoclue.Providers.Hostip was not provided by any .service files

    How can I fix this?
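
    The final line says D-Bus could not activate the Geoclue Hostip provider that Qreator's location QR type asks for. A hedged guess, assuming the provider package still exists under this name in the 14.04 archives:

        sudo apt-get install geoclue-hostip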

  • C string question

    - by user208454
    I am writing a simple C program which reverses a string, taking the string from argv[1]. Here is the code:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        char* flip_string(char *string){
            int i = strlen(string);
            int j = 0;
            // Doesn't really matter, all I wanted was the same size string for temp.
            char* temp = string;
            puts("This is the original string");
            puts(string);
            puts("This is the \"temp\" string");
            puts(temp);
            for(i; i>=0; i--){
                temp[j] = string[i];
                if (j <= strlen(string)) {
                    j++;
                }
            }
            return(temp);
        }

        int main(int argc, char *argv[]){
            puts(flip_string(argv[1]));
            printf("This is the end of the program\n");
        }

    That's basically it. The program compiles and everything, but does not print the temp string at the end (just blank space). In the beginning it prints temp fine, when it's equal to string. Furthermore, if I do a character-by-character printf of temp in the for loop, the correct temp string is printed, i.e. the string reversed. Just when I try to print it to standard out (after the for loop, or in main) nothing happens; only blank space is printed.
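
    Two things in the snippet explain the blank output, so here is a hedged corrected sketch rather than anything from the thread: temp = string makes temp an alias of string, not a copy, and the loop's first iteration copies string[strlen(string)], the terminating '\0', into temp[0], so the result prints as an empty string. A version that allocates a real buffer and skips the terminator:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        char* flip_string(const char *string) {
            size_t len = strlen(string);
            char *temp = malloc(len + 1);       /* a real copy, not an alias */
            if (temp == NULL) return NULL;
            for (size_t j = 0; j < len; j++) {
                temp[j] = string[len - 1 - j];  /* start at the last real char, not '\0' */
            }
            temp[len] = '\0';
            return temp;                        /* caller frees */
        }

        int main(int argc, char *argv[]) {
            if (argc > 1) {
                char *flipped = flip_string(argv[1]);
                if (flipped) {
                    puts(flipped);
                    free(flipped);
                }
            }
            return 0;
        }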

  • Oracle Access Manager 11g - useful links

    - by Dmitry Nefedkin
    The main idea of this post is to collect in a single place the links to the most useful resources for everybody who is interested in Oracle Access Manager 11g. If you have something valuable to add to this list, just let me know.

    Official documentation (Oracle Fusion Middleware 11.1.1.5):

      • Administrator's Guide for Oracle Access Manager with Oracle Security Token Service - the main guide for the OAM 11g administrator/consultant
      • Integration Guide for Oracle Access Manager - if you're in charge of setting up OAM integration with OIM, OAAM or OIF, that's the guide for you; it also has a chapter on WNA integration
      • Developer's Guide for Oracle Access Manager and Oracle Security Token Service - learn how to use the Java Access JDK and develop custom authentication plugins
      • Oracle Fusion Middleware High Availability Guide, paragraph 8.8 Oracle Access Manager High Availability - set up HA for your OAM installation
      • Oracle Fusion Middleware Enterprise Deployment Guide for Oracle Identity Management - learn the best practices of real-world enterprise deployments

    Useful Oracle Support documents (go to support.oracle.com to retrieve the contents of the documents):

      • OAM Bundle Patch Release History [ID 736372.1]
      • Install and Configure Advisor: Oracle Fusion Middleware (FMW) Identity Access Management (OAM, OIM) 11g [ID 340.1]
      • Procedure to Upgrade OAM 11.1.1.3.0 to OAM 11.1.1.5.0 [ID 1318524.1]
      • OAM 11g: How to Enable Oracle Access Manager 11g Server Trace / Debug Logging [ID 1298296.1]
      • OAM 11g: How To Create and Configure Policies For Application Resources Without Using OAM Console UI [ID 1393918.1]
      • How To Configure X509 Authentication On Oracle Access Manager (OAM) 11g [ID 1368211.1]
      • OAM 11g WNA Step by Step Setup Guide [ID 1416860.1]

    Blogs:

      • Oracle Access Manager Academy from the Fusion Security Blog
      • OAM Product management blog
      • Oracle IDM blog

    Books:

      • Oracle Identity and Access Manager 11g for Administrators

  • Nginx phpmyadmin redirecting to / instead of /phpmyadmin upon login

    - by Frederik Nielsen
    I am having issues with phpMyAdmin on my nginx install. When I go to <ServerIP>/phpmyadmin and log in, I get redirected to <ServerIP>/index.php?<tokenstuff> instead of <ServerIP>/phpmyadmin/index.php?<tokenstuff>.

    nginx config file:

        user nginx;
        worker_processes 5;

        error_log /var/log/nginx/error.log warn;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';

            access_log /var/log/nginx/access.log main;

            sendfile on;
            #tcp_nopush on;

            keepalive_timeout 2;

            #gzip on;

            include /etc/nginx/conf.d/*.conf;
        }

    Default.conf:

        server {
            listen 80;
            server_name _;

            #charset koi8-r;
            #access_log /var/log/nginx/log/host.access.log main;

            location / {
                root /usr/share/nginx/html;
                index index.php index.html index.htm;
            }

            #error_page 404 /404.html;

            # redirect server error pages to the static page /50x.html
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root /usr/share/nginx/html;
            }

            # proxy the PHP scripts to Apache listening on 127.0.0.1:80
            #location ~ \.php$ {
            #    proxy_pass http://127.0.0.1;
            #}

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            location ~ \.php$ {
                root /usr/share/nginx/html;
                try_files $uri =404;
                fastcgi_pass unix:/tmp/php5-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }

            # deny access to .htaccess files, if Apache's document root
            # concurs with nginx's one
            location ~ /\.ht {
                deny all;
            }

            location /phpmyadmin {
                root /usr/share/;
                index index.php index.html index.htm;

                location ~ ^/phpmyadmin/(.+\.php)$ {
                    try_files $uri =404;
                    root /usr/share/;
                    fastcgi_pass unix:/tmp/php5-fpm.sock;
                    fastcgi_index index.php;
                    fastcgi_param SCRIPT_FILENAME $request_filename;
                    include fastcgi_params;
                    fastcgi_param PATH_INFO $fastcgi_script_name;
                }

                location ~* ^/phpmyadmin/(.+\.(jpg|jpeg|gif|css|png|js|ico|html|xml|txt))$ {
                    root /usr/share/;
                }
            }
        }

    (Any general tips on tidying up those config files are accepted too.)

  • Building Simple Workflows in Oozie

    - by dan.mcclary
    Introduction

    More often than not, data doesn't come packaged exactly as we'd like it for analysis. Transformation, match-merge operations, and a host of data munging tasks are usually needed before we can extract insights from our Big Data sources. Few people find data munging exciting, but it has to be done. Once we've suffered that boredom, we should take steps to automate the process. We want to codify our work into repeatable units and create workflows which we can leverage over and over again without having to write new code. In this article, we'll look at how to use Oozie to create a workflow for the parallel machine learning task I described on Cloudera's site.

    Hive Actions: Prepping for Pig

    In my parallel machine learning article, I use data from the National Climatic Data Center to build weather models on a state-by-state basis. NCDC makes the data freely available as gzipped files of day-over-day observations stretching from the 1930s to today. In reading that post, one might get the impression that the data came in handy, ready-to-model files with convenient delimiters. The truth of it is that I need to perform some parsing and projection on the dataset before it can be modeled. If I get more observations, I'll want to retrain and test those models, which will require more parsing and projection. This is a good opportunity to start building up a workflow with Oozie.

    I store the data from the NCDC in HDFS and create an external Hive table partitioned by year. This gives me the flexibility of Hive's query language when I want it, but lets me put the dataset in a directory of my choosing in case I want to treat the same data with Pig or MapReduce code.

        CREATE EXTERNAL TABLE IF NOT EXISTS historic_weather(column1, column2)
        PARTITIONED BY (yr string)
        STORED AS ...
        LOCATION '/user/oracle/weather/historic';

    As new weather data comes in from NCDC, I'll need to add partitions to my table. That's an action I should put in the workflow. Similarly, the weather data requires parsing in order to be useful as a set of columns. Because of their long history, the weather data is broken up into fields of specific byte lengths: x bytes for the station ID, y bytes for the dew point, and so on. The delimiting is consistent from year to year, so writing a SerDe or a parser for transformation is simple. Once that's done, I want to select columns on which to train, classify certain features, and place the training data in an HDFS directory for my Pig script to access.

        ALTER TABLE historic_weather
        ADD IF NOT EXISTS PARTITION (yr='2010')
        LOCATION '/user/oracle/weather/historic/yr=2011';

        INSERT OVERWRITE DIRECTORY '/user/oracle/weather/cleaned_history'
        SELECT w.stn, w.wban, w.weather_year, w.weather_month,
               w.weather_day, w.temp, w.dewp, w.weather
        FROM (
          FROM historic_weather
          SELECT TRANSFORM(...)
          USING '/path/to/hive/filters/ncdc_parser.py'
          AS stn, wban, weather_year, weather_month, weather_day, temp, dewp, weather
        ) w;

    Since I'm going to prepare training directories with at least the same frequency that I add partitions, I should also add that to my workflow. Oozie is going to invoke these Hive actions using what's somewhat obviously referred to as a Hive action. Hive actions amount to Oozie running a script file containing our query language statements, so we can place them in a file called weather_train.hql.

    Starting Our Workflow

    Oozie offers two types of jobs: workflows and coordinator jobs. Workflows are straightforward: they define a set of actions to perform as a sequence or directed acyclic graph. Coordinator jobs can take all the same actions of workflow jobs, but they can be automatically started either periodically or when new data arrives in a specified location. To keep things simple we'll make a workflow job; coordinator jobs simply require another XML file for scheduling. The bare minimum for workflow XML defines a name, a starting point, and an end point:

        <workflow-app name="WeatherMan" xmlns="uri:oozie:workflow:0.1">
          <start to="ParseNCDCData"/>
          <end name="end"/>
        </workflow-app>

    To this we need to add an action, and within that we'll specify the Hive parameters. Also, keep in mind that actions require <ok> and <error> tags to direct the next action on success or failure.

        <action name="ParseNCDCData">
          <hive xmlns="uri:oozie:hive-action:0.2">
            <job-tracker>localhost:8021</job-tracker>
            <name-node>localhost:8020</name-node>
            <configuration>
              <property>
                <name>oozie.hive.defaults</name>
                <value>/user/oracle/weather_ooze/hive-default.xml</value>
              </property>
            </configuration>
            <script>ncdc_parse.hql</script>
          </hive>
          <ok to="WeatherMan"/>
          <error to="end"/>
        </action>

    There are a couple of things to note here:

      • I have to give the FQDN (or IP) and port of my JobTracker and NameNode.
      • I have to include a hive-default.xml file.
      • I have to include a script file.
      • The hive-default.xml and script file must be stored in HDFS.

    That last point is particularly important. Oozie doesn't make assumptions about where a given workflow is being run. You might submit workflows against different clusters, or have different hive-default.xml files on different clusters (e.g. MySQL- or Postgres-backed metastores). A quick way to ensure that all the assets end up in the right place in HDFS is just to make a working directory locally, build your workflow.xml in it, and copy the assets you'll need to it as you add actions to workflow.xml. At this point, our local directory should contain:

      • workflow.xml
      • hive-default.xml (make sure this file contains your metastore connection data)
      • ncdc_parse.hql

    Adding Pig to the Ooze

    Adding our Pig script as an action is slightly simpler from an XML standpoint. All we do is add an action to workflow.xml as follows:

        <action name="WeatherMan">
          <pig>
            <job-tracker>localhost:8021</job-tracker>
            <name-node>localhost:8020</name-node>
            <script>weather_train.pig</script>
          </pig>
          <ok to="end"/>
          <error to="end"/>
        </action>

    Once we've done this, we'll copy weather_train.pig to our working directory. However, there's a bit of a "gotcha" here. My Pig script registers the Weka jar and a chunk of Jython. If those aren't also in HDFS, our action will fail from the outset; but where do we put them? The Jython script goes into the working directory at the same level as the Pig script, because Pig attempts to load Jython files in the directory from which the script executes. However, that's not where our Weka jar goes.

    While Oozie doesn't assume much, it does make an assumption about the Pig classpath. Anything under working_directory/lib gets automatically added to the Pig classpath and no longer requires a REGISTER statement in the script. Anything that uses a REGISTER statement cannot be in the working_directory/lib directory. Instead, it needs to be in a different HDFS directory and attached to the Pig action with an <archive> tag. Yes, that's as confusing as you think it is. You can get the exact rules for adding jars to the distributed cache from Oozie's Pig Cookbook.

    Making the Workflow Work

    We've got a workflow defined and have collected all the components we'll need to run it. But we can't run anything yet, because we still have to define some properties about the job and submit it to Oozie. We need to start with the job properties, as this is essentially the "request" we'll submit to the Oozie server. In the same working directory, we'll make a file called job.properties as follows:

        nameNode=hdfs://localhost:8020
        jobTracker=localhost:8021
        queueName=default
        weatherRoot=weather_ooze
        mapreduce.jobtracker.kerberos.principal=foo
        dfs.namenode.kerberos.principal=foo
        oozie.libpath=${nameNode}/user/oozie/share/lib
        oozie.wf.application.path=${nameNode}/user/${user.name}/${weatherRoot}
        outputDir=weather-ooze

    While some of the pieces of the properties file are familiar (e.g. the JobTracker address), others take a bit of explaining. The first is weatherRoot: this is essentially an environment variable for the script (as are jobTracker and queueName). We're simply using them to simplify the directives for the Oozie job. The oozie.libpath piece is extremely important. This is a directory in HDFS which holds Oozie's shared libraries: a collection of jars necessary for invoking Hive, Pig, and other actions. It's a good idea to make sure this has been installed and copied up to HDFS. The last two lines are straightforward: run the application defined by workflow.xml at the application path listed and write the output to the output directory.

    We're finally ready to submit our job! After all that work we only need to do a few more things:

      1. Validate our workflow.xml
      2. Copy our working directory to HDFS
      3. Submit our job to the Oozie server
      4. Run our workflow

    Let's do them in order. First validate the workflow:

        oozie validate workflow.xml

    Next, copy the working directory up to HDFS:

        hadoop fs -put working_dir /user/oracle/working_dir

    Now we submit the job to the Oozie server. We need to ensure that we've got the correct URL for the Oozie server, and we need to specify our job.properties file as an argument.

        oozie job -oozie http://url.to.oozie.server:port_number/ -config /path/to/working_dir/job.properties -submit

    We've submitted the job, but we don't see any activity on the JobTracker? All I got was this funny bit of output:

        14-20120525161321-oozie-oracle

    This is because submitting a job to Oozie creates an entry for the job and places it in PREP status. What we got back, in essence, is a ticket for our workflow to ride the Oozie train. We're responsible for redeeming our ticket and running the job.

        oozie job -oozie http://url.to.oozie.server:port_number/ -start 14-20120525161321-oozie-oracle

    Of course, if we really want to run the job from the outset, we can change the "-submit" argument above to "-run". This will prep and run the workflow immediately.

    Takeaway

    So, there you have it: the somewhat laborious process of building an Oozie workflow. It's a bit tedious the first time out, but it does present a pair of real benefits to those of us who spend a great deal of time data munging. First, when new data arrives that requires the same processing, we already have the workflow defined and ready to run. Second, as we build up a set of useful action definitions over time, creating new workflows becomes quicker and quicker.
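
    The article notes that coordinator jobs "simply require another XML file for scheduling" without showing one, so here is a hedged sketch of what that file might look like for this weather workflow (the name and dates are invented; the schema shown is the uri:oozie:coordinator:0.1 one from this era of Oozie):

        <coordinator-app name="WeatherCoord" frequency="${coord:days(1)}"
                         start="2012-06-01T00:00Z" end="2013-06-01T00:00Z" timezone="UTC"
                         xmlns="uri:oozie:coordinator:0.1">
          <action>
            <workflow>
              <!-- points at the same application path used in job.properties -->
              <app-path>${nameNode}/user/${user.name}/${weatherRoot}</app-path>
            </workflow>
          </action>
        </coordinator-app>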

  • Installing mysql-server on 10.04LTS gives "404 Not Found" error

    - by bc1
    Hi, I am trying to install MySQL on Ubuntu 10.04 LTS (Lucid Lynx) and I am getting this error. Is this a server-side issue - is the server up? I am running this from the command line on a remote server:

        sudo apt-get install mysql-server
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following extra packages will be installed:
          libdbd-mysql-perl libdbi-perl libhtml-template-perl libmysqlclient16
          libnet-daemon-perl libplrpc-perl mysql-client-5.1 mysql-client-core-5.1
          mysql-common mysql-server-5.1 mysql-server-core-5.1 psmisc
        Suggested packages:
          dbishell libipc-sharedcache-perl tinyca mailx
        The following NEW packages will be installed:
          libdbd-mysql-perl libdbi-perl libhtml-template-perl libmysqlclient16
          libnet-daemon-perl libplrpc-perl mysql-client-5.1 mysql-client-core-5.1
          mysql-common mysql-server mysql-server-5.1 mysql-server-core-5.1 psmisc
        0 upgraded, 13 newly installed, 0 to remove and 85 not upgraded.
        Need to get 23.2MB/24.3MB of archives.
        After this operation, 61.7MB of additional disk space will be used.
        Do you want to continue [Y/n]? Y
        Err http://archive.ubuntu.com/ubuntu/ lucid-updates/main mysql-common 5.1.62-0ubuntu0.10.04.1
          404 Not Found [IP: 91.189.92.192 80]
        <more of the same error messages here>
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/m/mysql-dfsg-5.1/mysql-common_5.1.62-0ubuntu0.10.04.1_all.deb 404 Not Found [IP: 91.189.92.166 80]
        <more of the same error messages here>
        E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
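
    The last line of apt's output already names the usual cure: a 404 on an archive URL most often means the local package index is stale and points at package versions that have since been superseded on the mirror. A hedged first step, then, is simply:

        sudo apt-get update
        sudo apt-get install mysql-server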

  • Data Quality Through Data Governance

    Data Quality Governance

    Data quality is very important to every organization; bad data costs an organization time, money, and resources that could be saved if the proper governance were put into place.

    Data Governance Program Criteria:

      • Support from executive management and all business units
      • Data stewardship program
      • Cross-functional team of data stewards
      • Data governance committee
      • Quality structured data

    It should go without saying, but any successful project in today's business world must get buy-in from executive management and all stakeholders involved with the project. If management does not fully support a project, because they do not see that it is in their and the company's best interest, then they will remove or eliminate the funding, resources and time allocated to work on the project. In essence, they can render a project dead until it is officially killed by the business. In addition, buy-in from stakeholders is also very important, because they can cause delays and increased spending of time, money and resources when they do not support a project.

    Data stewardship programs are administered by a data steward manager whose primary focus is to support, train and manage a cross-functional team of data stewards. A cross-functional team of data stewards is pulled from various departments and acts to ensure that all systems work toward achieving an organization's goals. Typically, data stewards are subject matter experts who act as mediators between their respective departments and IT.

    Data Quality Procedures

    Data governance committees are composed of data stewards, upper management, IT leadership and various subject matter experts, depending on the company. The primary goal of this committee is to define strategic goals, coordinate activities, set data standards and offer data guidelines for the business.

    Data Quality Policies

    In 1997, Claudia Imhoff defined a data steward's responsibility as to approve business naming standards, develop consistent data definitions, determine data aliases, develop standard calculations and derivations, document the business rules of the corporation, monitor the quality of the data in the data warehouse, define security requirements, and so forth. She further explains that data stewards are responsible for creating and enforcing policies on the following (but not limited to these) issues:

      • Resolving data integration issues
      • Determining data security
      • Documenting data definitions, calculations, summarizations, etc.
      • Maintaining/updating business rules
      • Analyzing and improving data quality

  • Windows Azure Learning Plan - Application Fabric

    - by BuckWoody
    This is one in a series of posts on a Windows Azure Learning Plan. You can find the main post here. This one deals with the Application Fabric for Windows Azure. It serves three main purposes: Access Control, Caching, and as a Service Bus.

    Overview and Training - overview and general information about the Azure Application Fabric: what it is, how it works, and where you can learn more.

      • General Introduction and Overview: http://msdn.microsoft.com/en-us/library/ee922714.aspx
      • Access Control Service Overview: http://msdn.microsoft.com/en-us/magazine/gg490345.aspx
      • Microsoft Documentation: http://msdn.microsoft.com/en-gb/windowsazure/netservices.aspx

    Learning and Examples - sources for online and other Azure Application Fabric training.

      • Application Fabric SDK: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=39856a03-1490-4283-908f-c8bf0bfad8a5&displaylang=en
      • Application Fabric Caching Service Primer: http://blogs.msdn.com/b/appfabriccat/archive/2010/11/29/azure-appfabric-caching-service-soup-to-nuts-primer.aspx?wa=wsignin1.0
      • Hands-On Lab: Building Windows Azure Applications with the Caching Service: http://www.wadewegner.com/2010/11/hands-on-lab-building-windows-azure-applications-with-the-caching-service/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+WadeWegner+%28Wade+Wegner+-+Technical%29

    Architecture - Azure Application Fabric internals and architectures for scale-out and other use cases.

      • Azure Application Fabric Architecture Guide: http://blogs.msdn.com/b/yasserabdelkader/archive/2010/09/12/release-of-windows-server-appfabric-architecture-guide.aspx
      • Windows Azure AppFabric Service Bus - A Deep Dive (video): http://www.msteched.com/2010/Europe/ASI410
      • Access Control Service (ACS) High Level Architecture: http://blogs.msdn.com/b/alikl/archive/2010/09/28/azure-appfabric-access-control-service-acs-v-2-0-high-level-architecture-web-application-scenario.aspx

    Applications and Programming - programming patterns and architectures for SQL Azure systems.

      • Various examples from PDC 2010 on using Azure Application Fabric as a Service Bus: http://tinyurl.com/2dcnt8o
      • Creating a Distributed Cache using the Application Fabric: http://blog.structuretoobig.com/post/2010/08/31/Creating-a-Poor-Mane28099s-Distributed-Cache-in-Azure.aspx
      • Azure Application Fabric Java SDK: http://jdotnetservices.com/

  • Why doesn't Gradle include transitive dependencies in compile / runtime classpath?

    - by Francis Toth
    I'm learning how Gradle works, and I can't understand how it resolves a project's transitive dependencies. For now, I have two projects:

      • projectA: which has a couple of dependencies on external libraries
      • projectB: which has only one dependency, on projectA

    No matter how I try, when I build projectB, Gradle doesn't include any of projectA's dependencies (X and Y) in projectB's compile or runtime classpath. I've only managed to make it work by including projectA's dependencies in projectB's build script, which, in my opinion, does not make any sense. These dependencies should be attached to projectB automatically. I'm pretty sure I'm missing something, but I can't figure out what. I've read about "lib dependencies", but it seems to apply only to local projects, like described here, not to external dependencies. Here is the build.gradle I use in the root project (the one that contains both projectA and projectB):

        buildscript {
            repositories {
                mavenCentral()
            }
            dependencies {
                classpath 'com.android.tools.build:gradle:0.3'
            }
        }

        subprojects {
            apply plugin: 'java'
            apply plugin: 'idea'

            group = 'com.company'

            repositories {
                mavenCentral()
                add(new org.apache.ivy.plugins.resolver.SshResolver()) {
                    name = 'customRepo'
                    addIvyPattern "ssh://.../repository/[organization]/[module]/[revision]/[module].xml"
                    addArtifactPattern "ssh://.../[organization]/[module]/[revision]/[module](-[classifier]).[ext]"
                }
            }

            sourceSets {
                main {
                    java {
                        srcDir 'src/'
                    }
                }
            }

            idea.module {
                downloadSources = true
            }

            // task that creates a sources jar
            task sourceJar(type: Jar) {
                from sourceSets.main.java
                classifier 'sources'
            }

            // Publishing configuration
            uploadArchives {
                repositories {
                    add project.repositories.customRepo
                }
            }

            artifacts {
                archives(sourceJar) {
                    name "$name-sources"
                    type 'source'
                    builtBy sourceJar
                }
            }
        }

    This one concerns projectA only:

        version = '1.0'

        dependencies {
            compile 'com.company:X:1.0'
            compile 'com.company:B:1.0'
        }

    And this is the one used by projectB:

        version = '1.0'

        dependencies {
            compile ('com.company:projectA:1.0') {
                transitive = true
            }
        }

    Thank you in advance for any help, and please excuse my bad English.

  • handling various frame layouts in android

    - by vaibhav
    I'm new to game development and am trying to create a game like Contra or the old TMNT game (but a simple one) for Android. For the game I decided to divide my main screen into three parts: upper for stats, middle for the game and lower for controls. My main.xml is:

        <?xml version="1.0" encoding="utf-8"?>
        <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent"
            android:orientation="vertical" >

            <FrameLayout
                android:id="@+id/upper_bar"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent"
                android:layout_weight="1" >
            </FrameLayout>

            <FrameLayout
                android:id="@+id/fl"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent"
                android:layout_weight="0.5" >
            </FrameLayout>

            <FrameLayout
                android:id="@+id/low_bar"
                android:layout_width="fill_parent"
                android:layout_height="fill_parent"
                android:layout_weight="0.85" >
            </FrameLayout>
        </LinearLayout>

    So I have created the game view and game loop thread classes for the middle surface (which is pretty standard). My problem is: how do I draw in the upper and lower frame layouts? Should I make new view and thread classes for each layout, should I do all this in the game view class itself, or is there a better way to implement this?
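
    For what it's worth, a hedged sketch of the simplest route: the stats and controls bars rarely need a game loop of their own, so they can be plain custom Views added into the two outer FrameLayouts and invalidated only when their state changes (StatsView and ControlsView are invented names; GameView is assumed to be the question's surface class taking a Context):

        // in the Activity's onCreate, after setContentView(R.layout.main)
        FrameLayout upperBar = (FrameLayout) findViewById(R.id.upper_bar);
        FrameLayout lowBar = (FrameLayout) findViewById(R.id.low_bar);

        upperBar.addView(new StatsView(this));   // draws in onDraw, call invalidate() on change
        lowBar.addView(new ControlsView(this));  // handles onTouchEvent for input

        // only the middle FrameLayout hosts the SurfaceView with its own thread
        FrameLayout gameArea = (FrameLayout) findViewById(R.id.fl);
        gameArea.addView(new GameView(this));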

  • How to obtain flow while pair programming in agile development?

    - by bizso09
    Flow is a concept introduced by Mihaly Csikszentmihalyi. In short, it is what most people mean by getting into the "zone": you feel immersed in the task you are doing, you are in deep focus and concentration, and the task's difficulty is just right for you, yet challenging at the same time. When people achieve flow, their productivity shoots up. Programming requires a great deal of mental focus, and programmers need to juggle several things in their mind at once. Many like to work in a quiet environment where they can direct their full attention to the task. If they are interrupted, it may take several minutes, sometimes hours, to get back into flow. I understand that an agile way of doing software development is pair programming, which is promoted in Extreme Programming too. It means you put the whole software development team in one room so that communication is seamless. You do programming with your pair because this way you get instant code reviews and fix bugs sooner. However, I have always had problems obtaining flow while doing pair programming, because of the constant stream of interruptions. I'm thinking deeply about an issue, then all of a sudden someone asks me a question from another pair. My train of thought is all lost. How can you obtain and keep flow while doing agile pair programming?
