Search Results

Search found 22298 results on 892 pages for 'default'.


  • Inverted textures

    - by brainydexter
    I'm trying to draw textures aligned with a physics body whose coordinate system has its origin at the center of the screen. (XNA) SpriteBatch has its default origin set to the top-left corner. I got the textures positioned correctly, but I noticed they are vertically inverted: an arrow texture pointing up renders pointing down. I'm not sure where the math goes wrong. My approach is to convert everything into the physics engine's meter units and draw accordingly.

        Matrix proj = Matrix.CreateOrthographic(scale * graphics.GraphicsDevice.Viewport.AspectRatio, scale, 0, 1);
        Matrix view = Matrix.Identity;
        effect.World = Matrix.Identity;
        effect.View = view;
        effect.Projection = proj;
        effect.TextureEnabled = true;
        effect.VertexColorEnabled = true;
        effect.Techniques[0].Passes[0].Apply();
        SpriteBatch.Begin(SpriteSortMode.BackToFront, BlendState.AlphaBlend, null, DepthStencilState.Default, RasterizerState.CullNone, effect);
        m_Paddles[1].Draw(gameTime);
        SpriteBatch.End();

    where Paddle::Draw looks like:

        SpriteBatch.Draw(paddleTexture, mBody.Position, null, Color.White, 0f,
            new Vector2(16f, 16f), // origin of the texture
            0.1875f,               // the box is 3*2 = 6 meters wide; the texture is 32 pixels wide, so 6/32 = 0.1875f
            SpriteEffects.None,
            0);

    The orthographic projection matrix seems fine to me, but I am obviously doing something wrong somewhere. Can someone please help me figure out what I'm doing wrong here? Thanks
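    A common explanation for this symptom (hedged, since only part of the rendering code is shown): Matrix.CreateOrthographic builds a Y-up projection, while SpriteBatch generates its sprite quads assuming Y grows downward, so sprites come out vertically mirrored. A minimal sketch of one remedy is to build the projection with the Y axis flipped; the width/height values below are assumed to match the ones already computed from scale:

        // Sketch: swap bottom/top so +Y runs downward, matching SpriteBatch's convention
        float w = scale * graphics.GraphicsDevice.Viewport.AspectRatio;
        float h = scale;
        Matrix proj = Matrix.CreateOrthographicOffCenter(-w / 2f, w / 2f, h / 2f, -h / 2f, 0, 1);

    Alternatively, keeping the Y-up projection and passing SpriteEffects.FlipVertically in the Draw call flips each sprite individually.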


  • /usr/bin/python Replacement

    - by tikiking1
    I've changed the target of /usr/bin/python from /usr/bin/python2.7 to /usr/bin/python3.2 (I realize this was an ABSOLUTELY HORRIBLE idea) in Ubuntu 12.04.1 LTS. Afterwards, several applications, including software-center and update-manager, stopped working. As far as I can tell, this is because they are written in Python 2.7. I replaced the default /usr/bin/python shebang with the 2.7 one in each of them, and this fixes them at the application level. Switching /usr/bin/python back to /usr/bin/python2.7 really isn't an option, but is there a list of all applications installed by default in Ubuntu 12.04.1 LTS, if installed from a fresh CD-R, that use a shebang of #!/usr/bin/python instead of #!/usr/bin/pythonX.Y?
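    There may not be a canonical list published, but a rough way to build one on a given machine is to scan the shebang lines and map the hits back to packages (a sketch; it assumes the affected scripts live in /usr/bin):

        for f in /usr/bin/*; do
          head -n 1 "$f" 2>/dev/null | grep -q '^#!/usr/bin/python$' && dpkg -S "$f"
        done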


  • Linux: Why change the inode size?

    - by FractalizeR
    Hello. Tune2fs lets you change the inode size from the default 128 bytes to almost anything (it must be a power of two). What can the reasons for changing the default inode size be? The Red Hat knowledge base entry at http://kbase.redhat.com/faq/docs/DOC-7433 says this can be done to store ACL attributes inside the inodes themselves. What else can be stored inside an inode? Maybe some other extended attributes, or anything else? Is there any reason to increase the inode size on modern high-capacity drives (2 TB and more)?
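    For reference, the inode size is normally chosen at filesystem-creation time; a hedged example (the device name is a placeholder):

        # create an ext4 filesystem with 256-byte inodes
        mkfs.ext4 -I 256 /dev/sdb1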


  • Setup NTPD in Ubuntu to sync Windows AD

    - by David
    I am setting up my Ubuntu 10.04 server to sync its clock with a Windows 2008 AD domain controller. Since none of the "remote" entries has a "*" in front (worse, there is only a blank space in front), does that mean no time sync source is selected? The offset just keeps increasing. ntpdate against the same server worked fine.

             remote           refid      st t when poll reach   delay   offset  jitter
        ==============================================================================
         172.18.133.201  .LOCL.           1 u   38   64  377    0.827  277.677  58.375

    Any help to fix this? Here is my ntp.conf (with all comments removed):

        driftfile /var/lib/ntp/ntp.drift
        statistics loopstats peerstats clockstats
        filegen loopstats file loopstats type day enable
        filegen peerstats file peerstats type day enable
        filegen clockstats file clockstats type day enable
        restrict -4 default kod notrap nomodify nopeer noquery
        restrict -6 default kod notrap nomodify nopeer noquery
        restrict 127.0.0.1
        restrict ::1
        server 172.18.133.201 iburst
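    One hedged suggestion rather than a confirmed diagnosis: the Windows Time service advertises itself with a large root dispersion by default, which ntpd can refuse to select as a sync source. Marking the DC as a reliable time source, run on the DC itself, often helps:

        w32tm /config /reliable:yes /update
        net stop w32time && net start w32time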


  • Apache restart on Ubuntu - error “could not bind to address 0.0.0.0:80”

    - by william
    I'm a n00b - trying to get apache2 set up on Ubuntu 9.10 (Karmic Koala) on Rackspace Cloud. I have set up/configured OpenSSL and installed Apache, but Apache won't start. I assume it's a misconfiguration in my /etc/apache2/sites-available/ssl or /etc/apache2/sites-available/default files. When I try to restart Apache using the command:

        sudo /etc/init.d/apache2 restart

    I get the following error message:

        [error] (EAI 2)Name or service not known: Could not resolve host name *.80 -- ignoring!
        [error] (EAI 2)Name or service not known: Could not resolve host name *.80 -- ignoring!
        (98)Address already in use: make_sock: could not bind to address 0.0.0.0:80
        no listening sockets available, shutting down
        Unable to open logs
        ...fail!

    For my /etc/apache2/sites-available/ssl I have used a virtual host of *:443. For my /etc/apache2/sites-available/default I have used a virtual host of *:80.
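    Two separate problems seem to be mixed here: the "*.80" host name suggests a VirtualHost or NameVirtualHost directive written with a dot instead of a colon (*.80 rather than *:80), and something already holds port 80. A hedged way to check both:

        # find the directive with the dot typo, if any
        grep -rn '\*\.80' /etc/apache2
        # see which process currently owns port 80
        sudo netstat -tlnp | grep ':80 '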


  • Missing "log on to" field in Windows 2008 password change dialog

    - by Dmitri Chubarov
    I have to set up an embedded standalone CIFS server outside of a domain environment; I shall refer to the server as storage_gateway. The default password setting for the Administrator account is "Password must be changed", and until the Administrator password is changed, the other local accounts are disabled. While the server is on the default password, connecting to a share fails with an NT_STATUS_PASSWORD_MUST_CHANGE error:

        $ smbclient -L //storage_gateway -U Administrator
        session setup failed: NT_STATUS_PASSWORD_MUST_CHANGE

    I have a mixed Linux/Windows environment and prefer smbclient for diagnostics. To change the password against a Windows 2003 Server, I could press CTRL-ALT-DEL to get into the Windows Security window and change the password for the remote server, specifying the server name or IP address (xx.xx.xx.xx) in the system password-change dialog (the name root is just an example; I have edited the dialogs to translate them). However, in Windows 2008 the corresponding dialog lacks the "Log on to" field. Is there a way to change a remote password in Windows 2008 Server/Windows 7?
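    Since smbclient is already in use, it is worth noting that Samba can also change a remote NT password directly from the Linux side; a sketch using the same host name as above:

        smbpasswd -r storage_gateway -U Administrator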


  • Nginx Installation on Ubuntu giving 500 error

    - by user750301
    I just installed nginx on Ubuntu 12.04 LTS. When I access localhost it gives me:

        500 Internal Server Error
        nginx/1.2.3

    error_log has the following:

        rewrite or internal redirection cycle while internally redirecting to "/index.html", client: 127.0.0.1, server: localhost, request: "GET / HTTP/1.1", host: "localhost"

    This is the default nginx configuration. nginx.conf has:

        include /etc/nginx/sites-enabled/*;

    and /etc/nginx/sites-enabled/default has the following:

        root /usr/share/nginx/www;
        index index.html index.htm;

        # Make site accessible from http://localhost/
        server_name localhost;

        location / {
            # First attempt to serve request as file, then
            # as directory, then fall back to displaying a 404.
            try_files $uri $uri/ /index.html;

            # Uncomment to enable naxsi on this location
            # include /etc/nginx/naxsi.rules
        }
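    For context: the cycle occurs when /usr/share/nginx/www/index.html does not exist, because the /index.html fallback re-enters the same location indefinitely. A hedged fix, besides creating the missing file, is to make the last try_files argument an error code, which cannot recurse:

        location / {
            try_files $uri $uri/ =404;
        }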


  • Change programfiles variable on windows

    - by Fire-Dragon-DoL
    I have built a computer for a user who asked me for "speed": an SSD was the obvious solution, because what he means by speed is "fast boot time". This solved the problem; however, the user is not smart enough to remember that he must install programs on D rather than C (C is the SSD, D is a RAID 1 HDD). The only solution that comes to mind is changing the ProgramFiles variable so that it points to D rather than C by default. Other solutions are welcome, but I really can't find anything else at the moment. Does anyone have recommendations for how to change the default installation directory in Windows?
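    For reference, the default install roots that most installers read live in the registry; a hedged sketch of repointing them (note that Microsoft does not support changing these values, and some installers and updates assume they stay on C:):

        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion" /v ProgramFilesDir /t REG_SZ /d "D:\Program Files" /f
        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion" /v "ProgramFilesDir (x86)" /t REG_SZ /d "D:\Program Files (x86)" /f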


  • How to change MySQL data directory?

    - by Jonathan Frank
    I want to place my databases in another directory so I can store them on an EBS volume (elastic block storage, just a fancy name for a virtualized hard disk) together with my web apps and other persistent data. I tried to walk through the tutorial at http://crashmag.net/change-the-default-mysql-data-directory-with-selinux-enabled. Everything seems fine until I type this command:

        # semanage fcontext -a -t mysqld_db_t "/srv/mysql(/.*)?"

    The command fails and tells me that mysqld_db_t is an invalid SELinux context, even though the default MySQL data directory is labelled with this context. I am running Fedora 15 on VirtualBox (which behaves like an ordinary x86-compatible box) and on Amazon EC2 (based on Xen), so the tutorial should be applicable. It is also worth mentioning that turning off SELinux globally, or just for the MySQL process, is not an option, because such a solution would decrease the security of the system if a hacker gained access to it via the MySQL server. I never saw this problem before I changed to the Red Hat/Fedora architecture, so it could be a distribution-specific issue. Any help is highly appreciated.
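    If semanage keeps rejecting the type name, a hedged alternative is to declare the new path a labeling equivalent of the stock data directory and relabel (assumes the policycoreutils Python tools are installed):

        ls -Zd /var/lib/mysql                       # confirm the context the distro actually uses
        semanage fcontext -a -e /var/lib/mysql /srv/mysql
        restorecon -Rv /srv/mysql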


  • Change audio output depending on which one is on

    - by pkrish
    I have a PC that is hooked up to my HDTV (via a long HDMI cable) and to a monitor, which is in another room. I have speakers plugged directly into the PC's audio out; the PC is next to the monitor and speakers. I am not sure, but I think Windows can play sound on only one audio device at a time, and I can only set one device as the default output. I can get sound on the TV or the speakers depending on which device I set as the default, but I would have to do this every time I switch between using my TV and my monitor! Is there some way to configure things so that sound plays through the TV if the TV is on, and otherwise plays through the speakers? If this is not possible, then the next best alternative would be to get sound to play on both devices at the same time. Thanks!


  • Notes on Oracle BPM PS6 Adaptive Case Management

    - by gcolman
    I have recently been looking at the latest release of the BPM Case Management feature in the Oracle BPM PS6 release. I had put together some notes to help me gain a better understanding of the context of PS6 BPM Case Management. Hopefully this, along with the other resources, will enable you to gain a clear picture of the flexibility of this feature.

    The Oracle BPM PS6 release includes Case Management capability. This initial release aims to provide:

    - a Case Management framework
    - integration of Case Management with BPM & SOA Suite

    It is best to regard the current PS6 case management feature as a case management framework. The framework provides the building blocks for creating a case management system that is fully integrated into Oracle BPM Suite. As of the current PS6 release, no UI tooling exists to help manage cases or the case lifecycle. Mark Foster has written a good blog which outlines Case Management within PS6 in the following link. I wanted to provide more context on Case Management from my perspective in this blog.

    PS6 Case Management - High level View

    BPM PS6 includes "Case" as a first-class component in a SOA Suite composite. The Case components (added to the SOA composite) are created when a BPM process is assigned to a case in JDeveloper. The SOA Case component is defined and configured within JDeveloper, which allows us to specify the case data structures and metadata such as stakeholders, outcomes, milestones, document stores etc. "Activities" are associated with a case and become available to be executed via the case APIs. Activities are BPM processes, Human Activities or Java call-outs.

    The PS6 release includes some additional database tables to store the case metadata and case instance data (data objects, comments, etc.). These new tables are created within the SOA_INFRA schema, and the documents associated with a case go into a document repository that is configured with the case.

    One of the main features of Case Management is the control of the case logic through case events and case business rules. A PS6 Case has an associated business rules component, which can be configured to control the availability and execution of activities within the case. The business rules component is able to act upon events that the PS6 Case Management framework generates during the lifecycle of that case. Events are fired during the lifetime of the case (e.g. case created, activity started, activity ended, note added, document uploaded).

    Internal Case state

    The internal state of a case is represented by a state diagram (not reproduced here) showing the internal states and the transition paths for a case from one state to the next. Each transition in state creates an event that can be enacted upon via the Case rules engine.

    Defining a case

    A Case is created and defined as a component of a JDeveloper BPM project. When you create a Case as part of a BPM project, JDeveloper creates the following components within the SCA composite:

    - a Case component
    - the Case component interfaces (WSDL etc.)
    - a Case Rules component (Oracle Business Rules)

    and adds the Case component and Case Rules component to the BPM SOA composite.

    Case Configuration

    The following section gives a high-level overview of the items that can be configured for a BPM Case.
    Case Activities

    A Case is associated with a set of activities that are to be performed as part of that case. Case activities can be:

    - SOA Human Tasks
    - BPM processes
    - Custom Tasks (Java classes)

    Case activities are created from pre-existing BPM processes or human tasks which, once defined, can additionally be configured as case activities in JDeveloper and made available within the lifecycle of a case. I've described the following configurable components of a case (very!) briefly:

    - Milestones: optional, user-defined logical milestones that can be achieved within a case. No activities are associated with a milestone, but milestone attainment can be set programmatically and events are raised when milestones are reached.
    - Outcomes: user-defined statuses for a completed case. An event is fired when an outcome is attained.
    - Case Data: defines the data that will be stored with a case; XML schemas define that data.
    - Case Documents: defines the location of documents that are attached to a case (e.g. WebCenter Content).
    - User Defined Events: optional user-defined events that can be fired or captured to drive case processing rules.
    - Stakeholders: defines the actors who can participate in the case (roles, users, groups) and the permissions for individual case actions (read case, create document, etc.).
    - Business Rules: the main component controlling the flow of a case. Each case has an associated business ruleset, and rules are fired on receiving case events (lifecycle, milestone, activity, data, document, comment and user events) or user-defined events.

    Managing the Case

    Managing the lifecycle of a case is achieved in two ways: managing case logic with business rules, and managing the case lifecycle via the Case APIs. A BPM Case can be viewed as a set of case data and documents, together with the activities that can be performed within the case and the case lifecycle state expressed as milestones and internal lifecycle state. The management of the case life is achieved through both the configuration of business rules and "manual" interaction with a case instance through the Case APIs.

    Business Rules and Case Events

    A key component within the Case Management framework is the event model. The BPM Case Management solution internally utilizes Oracle EDN (Event Delivery Network) to publish and subscribe to events generated by the Case framework. Events are generated by the Case framework at each of the stages a case instance passes through in its lifetime. The following case events are part of the BPM Case:

    - Lifecycle events
    - Milestone events
    - Activity events
    - Data events
    - Document events
    - Comment events
    - User events

    The Case business rules are configured to listen for these events, and business logic can be coded into the Case rules component to act upon an event being received.

    Case API & Interaction

    Along with the business rules component, cases can be managed via the Case API interfaces. These interfaces allow for the building of custom applications that integrate into the case management framework. The APIs allow for updating case comments and documents, executing case activities, updating milestones, etc. As there are no built-in case management UI functions within the PS6 release, cases need to be managed via a custom-built UI: interacting with selected case instances, launching case activities, closing cases, etc.
    (There is expected to be a UI component within subsequent releases.)

    Logical Case Flow

    The following depicts a logical view of the case steps for a typical case:

    1. A UI or other service calls the Case interface to create a Case instance.
    2. The case instance is created and the database data inserted.
    3. A lifecycle event is raised indicating a case activity (created) event.
    4. The case business rules capture the event and decide on an action to take. (Additionally, other parties can subscribe to Case events via EDN.)
    5. The business rules may handle the event, e.g. they may be configured to execute a case activity on the case-creation event.
    6. The BPM/Human Workflow/Custom activity is executed.
    7. A case activity event is raised on executing the activity.
    8. A case work UI or business service can inspect the case instance and call other actions to progress that case, such as: execute activity, add note, add document, add case data, update milestone, raise user-defined event, suspend case, resume case, close case.

    Summary

    Having had a little time to play around with the APIs and the case configuration, I really like the flexibility and power of combining Oracle Business Rules and the BPM Case Management event model. Creating something this flexible and powerful without BPM Case Management would take a lot of time and effort. This is hopefully going to save my customers a lot of time and effort! I may make amendments to this post as my understanding of Case Management increases.

    Take a look at the following links for official documentation etc.:

    http://docs.oracle.com/cd/E28280_01/doc.1111/e15176/case_mgmt_bpmpd.htm
    https://blogs.oracle.com/bpm/entry/just_in_case


  • Function keys on Dell laptop work double as OEM keys

    - by Factor Mystic
    I'm working with a new Dell Studio 1555, and the F1-F12 keys at the top of the keyboard double as OEM keys for functions such as volume and screen brightness. The problem is that the OEM functions are the default, and you have to press the Fn key to get the F-key behavior. For example, this means you have to hit Alt+Fn+F4 to close a window instead of the regular Alt+F4, which is really annoying. Is there a way to reverse the default behavior of the F-keys in Windows? Ideally this is possible without some kind of third-party hotkey manager.


  • IP route ppp0 + eth0 access to outside network

    - by Vitor
    I need some help defining a route. I have two connections, one on eth0 and the other on ppp0 (a 3G card). With the ppp0 connection inactive, my route table is:

        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        default         DD-WRT          0.0.0.0         UG    100    0        0 eth0
        192.168.1.0     *               255.255.255.0   U     0      0        0 eth0

    and I can access my webserver from an outside network through the ethernet interface. With the ppp0 3G connection also active, I have the following route table:

        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        default         10.64.64.64     0.0.0.0         UG    0      0        0 ppp0
        10.64.64.64     *               255.255.255.255 UH    0      0        0 ppp0
        192.168.1.0     *               255.255.255.0   U     0      0        0 eth0

    Now I can only access my webserver from outside networks through the IP of the 3G connection. Note that my server binds to 0.0.0.0 (all interfaces), but I need the webserver to be accessible through both interfaces, ethernet and 3G; locally I can already reach it over both. Any help configuring this network so that both interfaces have outside-network access is welcome. Can anyone give me an example of configuring this network with two gateways, one for the IP 192.168.1.149 and the other for the ppp0 IP 89.214.60.196? Thanks
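    A sketch of the usual answer, source-based policy routing with iproute2. The addresses are taken from the question; the eth0 gateway 192.168.1.1 and the table numbers are assumptions:

        # replies from the LAN address leave via eth0's gateway
        ip route add default via 192.168.1.1 dev eth0 table 100
        ip rule add from 192.168.1.149 table 100

        # replies from the 3G address leave via ppp0
        ip route add default dev ppp0 table 101
        ip rule add from 89.214.60.196 table 101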


  • Torque jobs do not enter "E" state (unless "qrun")

    - by Vi.
    Jobs I add to the queue stay there in the "Queued" state with no attempt made to execute them (unless I manually qrun them). /var/spool/torque/server_logs says just:

        04/11/2011 12:43:27;0100;PBS_Server;Job;16.localhost;enqueuing into batch, state 1 hop 1
        04/11/2011 12:43:27;0008;PBS_Server;Job;16.localhost;Job Queued at request of test@localhost, owner = test@localhost, job name = Qqq, queue = batch

    The job requires just 1 CPU on 1 node.

        # qmgr -c "list queue batch"
        Queue batch
            queue_type = Execution
            total_jobs = 0
            state_count = Transit:0 Queued:0 Held:0 Waiting:0 Running:0 Exiting:0
            max_running = 3
            acl_host_enable = True
            acl_hosts = localhost
            resources_min.ncpus = 1
            resources_min.nodect = 1
            resources_default.ncpus = 1
            resources_default.nodes = 1
            resources_default.walltime = 00:00:10
            mtime = Mon Apr 11 12:07:10 2011
            resources_assigned.ncpus = 0
            resources_assigned.nodect = 0
            kill_delay = 3
            enabled = True
            started = True

    I can't set resources_assigned to nonzero because of "Cannot set attribute, read only or insufficient permission resources_assigned.ncpus". When I qrun some task, this goes to the mom's log:

        04/11/2011 21:27:48;0001; pbs_mom;Svr;pbs_mom;LOG_DEBUG::mom_checkpoint_job_has_checkpoint, FALSE
        04/11/2011 21:27:48;0001; pbs_mom;Job;TMomFinalizeJob3;job 18.localhost started, pid = 28592
        04/11/2011 21:27:48;0080; pbs_mom;Job;18.localhost;scan_for_terminated: job 18.localhost task 1 terminated, sid=28592
        04/11/2011 21:27:48;0008; pbs_mom;Job;18.localhost;job was terminated
        04/11/2011 21:27:48;0080; pbs_mom;Svr;preobit_reply;top of preobit_reply
        04/11/2011 21:27:48;0080; pbs_mom;Svr;preobit_reply;DIS_reply_read/decode_DIS_replySvr worked, top of while loop
        04/11/2011 21:27:48;0080; pbs_mom;Svr;preobit_reply;in while loop, no error from job stat
        04/11/2011 21:27:48;0080; pbs_mom;Job;18.localhost;obit sent to server

    Scheduler log (/var/spool/torque/sched_logs/20110705):

        07/05/2011 21:44:53;0002; pbs_sched;Svr;Log;Log opened
        07/05/2011 21:44:53;0002; pbs_sched;Svr;TokenAct;Account file /var/spool/torque/sched_priv/accounting/20110705 opened
        07/05/2011 21:44:53;0002; pbs_sched;Svr;main;/usr/sbin/pbs_sched startup pid 16234

    qstat -f:

        Job Id: 26.localhost
            Job_Name = qwe
            Job_Owner = test@localhost
            job_state = Q
            queue = batch
            server = localhost
            Checkpoint = u
            ctime = Tue Jul 5 21:43:31 2011
            Error_Path = localhost:/home/test/jscfi/default/0.738784810485275/qwe.e26
            Hold_Types = n
            Join_Path = n
            Keep_Files = n
            Mail_Points = a
            mtime = Tue Jul 5 21:43:31 2011
            Output_Path = localhost:/home/test/jscfi/default/0.738784810485275/qwe.o26
            Priority = 0
            qtime = Tue Jul 5 21:43:31 2011
            Rerunable = True
            Resource_List.ncpus = 1
            Resource_List.neednodes = 1:ppn=1
            Resource_List.nodect = 1
            Resource_List.nodes = 1:ppn=1
            Resource_List.walltime = 00:01:00
            substate = 10
            Variable_List = PBS_O_HOME=/home/test,PBS_O_LANG=en_US.UTF-8,
                PBS_O_LOGNAME=test,
                PBS_O_PATH=/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/games,
                PBS_O_MAIL=/var/mail/test,PBS_O_SHELL=/bin/sh,PBS_SERVER=127.0.0.1,
                PBS_O_WORKDIR=/home/test/jscfi/default/0.738784810485275,
                PBS_O_QUEUE=batch,PBS_O_HOST=localhost
            euser = test
            egroup = test
            queue_rank = 1
            queue_type = E
            etime = Tue Jul 5 21:43:31 2011
            submit_args = run.pbs
            Walltime.Remaining = 6
            fault_tolerant = False

    How do I make it execute jobs automatically, without a manual qrun?
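    One common cause of exactly this symptom (offered as a hedged guess, since the scheduler log shows pbs_sched starting but never running a cycle): the server's scheduling attribute is off, so the scheduler is never asked to place jobs. Worth checking:

        qmgr -c "print server" | grep scheduling
        qmgr -c "set server scheduling = True"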


  • Windows Deployment Services

    - by timbrigham
    I have a slightly advanced Windows Deployment Services setup. My router hands out DHCP addresses, including the following config:

        ip dhcp pool Servers_100
         network 192.168.100.0 255.255.255.0
         bootfile boot\x86\pxelinux.0
         next-server 192.168.100.50
         default-router 192.168.100.1
         dns-server 192.168.100.80 192.168.100.81

    This works perfectly for other subnets - I have a couple of screens in my pxelinux menu that let me select my various Linux installers or enter the Windows preboot environment. For some reason, here I'm only receiving the default bootfile, which opens the Windows preboot environment. Any idea why?


  • BIEE Drilling Down and then Across

    - by Tim Dexter
    Slightly off topic today, but if you are working with OBIEE in conjunction with BIP it's not that far off. Some of you may know I now get to play with the whole BI suite, and have been for nearly 2 years. Today I was working with BIEE and wanted to share what I thought was a neat trick; I have to thank Rob Lindsley on our team for the pointers to get it working. The problem I had was that I had set up a drill-down hierarchy that took the user down a couple of levels to the bottom, project-number level. I needed the user to then be able to click the project number to navigate across to another, more detailed report on that project. By default there is no link; you are at the bottom of a hierarchical drill! There is nothing you can do in the data model (that I'm aware of), but you can use a neat trick to get BIEE to allow you to navigate from the bottom rung of the hierarchy. Add the bottom-level column to an Answers report, go into the column properties, and set the navigation target. The trick is to then set the current column properties as the system-wide default for that column. You can then actually delete the column from your report. Now as you drill down the hierarchy and reach what was the bottom, you will still have a link for the user to punch over to the detail report, sweeeet! The other benefit is that whenever you add the column to a report the link to the detail report will be available, unless you want to override it of course.


  • Building Simple Workflows in Oozie

    - by dan.mcclary
    Introduction

    More often than not, data doesn't come packaged exactly as we'd like it for analysis. Transformation, match-merge operations, and a host of data munging tasks are usually needed before we can extract insights from our Big Data sources. Few people find data munging exciting, but it has to be done. Once we've suffered that boredom, we should take steps to automate the process. We want to codify our work into repeatable units and create workflows which we can leverage over and over again without having to write new code. In this article, we'll look at how to use Oozie to create a workflow for the parallel machine learning task I described on Cloudera's site.

    Hive Actions: Prepping for Pig

    In my parallel machine learning article, I use data from the National Climatic Data Center to build weather models on a state-by-state basis. NCDC makes the data freely available as gzipped files of day-over-day observations stretching from the 1930s to today. In reading that post, one might get the impression that the data came in handy, ready-to-model files with convenient delimiters. The truth of it is that I need to perform some parsing and projection on the dataset before it can be modeled. If I get more observations, I'll want to retrain and test those models, which will require more parsing and projection. This is a good opportunity to start building up a workflow with Oozie.

    I store the data from the NCDC in HDFS and create an external Hive table partitioned by year. This gives me the flexibility of Hive's query language when I want it, but lets me put the dataset in a directory of my choosing in case I want to treat the same data with Pig or MapReduce code.

        CREATE EXTERNAL TABLE IF NOT EXISTS historic_weather(column 1, column2)
        PARTITIONED BY (yr string)
        STORED AS ...
        LOCATION '/user/oracle/weather/historic';

    As new weather data comes in from NCDC, I'll need to add partitions to my table. That's an action I should put in the workflow. Similarly, the weather data requires parsing in order to be useful as a set of columns. Because of their long history, the weather data is broken up into fields of specific byte lengths: x bytes for the station ID, y bytes for the dew point, and so on. The delimiting is consistent from year to year, so writing a SerDe or a parser for transformation is simple. Once that's done, I want to select columns on which to train, classify certain features, and place the training data in an HDFS directory for my Pig script to access.

        ALTER TABLE historic_weather ADD IF NOT EXISTS
        PARTITION (yr='2011') LOCATION '/user/oracle/weather/historic/yr=2011';

        INSERT OVERWRITE DIRECTORY '/user/oracle/weather/cleaned_history'
        SELECT w.stn, w.wban, w.weather_year, w.weather_month,
               w.weather_day, w.temp, w.dewp, w.weather
        FROM (
          FROM historic_weather
          SELECT TRANSFORM(...)
          USING '/path/to/hive/filters/ncdc_parser.py'
          AS stn, wban, weather_year, weather_month, weather_day, temp, dewp, weather
        ) w;

    Since I'm going to prepare training directories with at least the same frequency that I add partitions, I should also add that to my workflow. Oozie is going to invoke these Hive actions using what's somewhat obviously referred to as a Hive action. Hive actions amount to Oozie running a script file containing our query language statements, so we can place them in a file called weather_train.hql.

    Starting Our Workflow

    Oozie offers two types of jobs: workflows and coordinator jobs. Workflows are straightforward: they define a set of actions to perform as a sequence or directed acyclic graph.
    Coordinator jobs can take all the same actions as workflow jobs, but they can be started automatically, either periodically or when new data arrives in a specified location. To keep things simple we'll make a workflow job; coordinator jobs simply require another XML file for scheduling (a minimal sketch of one appears at the end of this article). The bare minimum for workflow XML defines a name, a starting point, and an end point:

        <workflow-app name="WeatherMan" xmlns="uri:oozie:workflow:0.1">
          <start to="ParseNCDCData"/>
          <end name="end"/>
        </workflow-app>

    To this we need to add an action, and within that we'll specify the Hive parameters. Also, keep in mind that actions require <ok> and <error> tags to direct the next action on success or failure.

        <action name="ParseNCDCData">
          <hive xmlns="uri:oozie:hive-action:0.2">
            <job-tracker>localhost:8021</job-tracker>
            <name-node>localhost:8020</name-node>
            <configuration>
              <property>
                <name>oozie.hive.defaults</name>
                <value>/user/oracle/weather_ooze/hive-default.xml</value>
              </property>
            </configuration>
            <script>ncdc_parse.hql</script>
          </hive>
          <ok to="WeatherMan"/>
          <error to="end"/>
        </action>

    There are a couple of things to note here:

    - I have to give the FQDN (or IP) and port of my JobTracker and NameNode.
    - I have to include a hive-default.xml file.
    - I have to include a script file.
    - The hive-default.xml and script file must be stored in HDFS.

    That last point is particularly important. Oozie doesn't make assumptions about where a given workflow is being run. You might submit workflows against different clusters, or have different hive-defaults.xml files on different clusters (e.g. MySQL or Postgres-backed metastores). A quick way to ensure that all the assets end up in the right place in HDFS is just to make a working directory locally, build your workflow.xml in it, and copy the assets you'll need into it as you add actions to workflow.xml. At this point, our local directory should contain:

    - workflow.xml
    - hive-defaults.xml (make sure this file contains your metastore connection data)
    - ncdc_parse.hql

    Adding Pig to the Ooze

    Adding our Pig script as an action is slightly simpler from an XML standpoint. All we do is add an action to workflow.xml as follows:

        <action name="WeatherMan">
          <pig>
            <job-tracker>localhost:8021</job-tracker>
            <name-node>localhost:8020</name-node>
            <script>weather_train.pig</script>
          </pig>
          <ok to="end"/>
          <error to="end"/>
        </action>

    Once we've done this, we'll copy weather_train.pig to our working directory. However, there's a bit of a "gotcha" here. My Pig script registers the Weka jar and a chunk of Jython. If those aren't also in HDFS, our action will fail from the outset -- but where do we put them? The Jython script goes into the working directory at the same level as the Pig script, because Pig attempts to load Jython files in the directory from which the script executes. However, that's not where our Weka jar goes.

    While Oozie doesn't assume much, it does make an assumption about the Pig classpath. Anything under working_directory/lib gets automatically added to the Pig classpath and no longer requires a REGISTER statement in the script. Anything that uses a REGISTER statement cannot be in the working_directory/lib directory. Instead, it needs to be in a different HDFS directory and attached to the Pig action with an <archive> tag. Yes, that's as confusing as you think it is. You can get the exact rules for adding jars to the distributed cache from Oozie's Pig Cookbook.

    Making the Workflow Work

    We've got a workflow defined and have collected all the components we'll need to run.
    But we can't run anything yet, because we still have to define some properties about the job and submit it to Oozie. We need to start with the job properties, as this is essentially the "request" we'll submit to the Oozie server. In the same working directory, we'll make a file called job.properties as follows:

        nameNode=hdfs://localhost:8020
        jobTracker=localhost:8021
        queueName=default
        weatherRoot=weather_ooze
        mapreduce.jobtracker.kerberos.principal=foo
        dfs.namenode.kerberos.principal=foo
        oozie.libpath=${nameNode}/user/oozie/share/lib
        oozie.wf.application.path=${nameNode}/user/${user.name}/${weatherRoot}
        outputDir=weather-ooze

    While some of the pieces of the properties file are familiar (e.g., the JobTracker address), others take a bit of explaining. The first is weatherRoot: this is essentially an environment variable for the script (as are jobTracker and queueName). We're simply using them to simplify the directives for the Oozie job. The oozie.libpath piece is extremely important. This is a directory in HDFS which holds Oozie's shared libraries: a collection of jars necessary for invoking Hive, Pig, and other actions. It's a good idea to make sure this has been installed and copied up to HDFS. The last two lines are straightforward: run the application defined by workflow.xml at the application path listed, and write the output to the output directory.

    We're finally ready to submit our job! After all that work we only need to do a few more things:

    1. Validate our workflow.xml
    2. Copy our working directory to HDFS
    3. Submit our job to the Oozie server
    4. Run our workflow

    Let's do them in order. First validate the workflow:

        oozie validate workflow.xml

    Next, copy the working directory up to HDFS:

        hadoop fs -put working_dir /user/oracle/working_dir

    Now we submit the job to the Oozie server. We need to ensure that we've got the correct URL for the Oozie server, and we need to specify our job.properties file as an argument.

        oozie job -oozie http://url.to.oozie.server:port_number/ -config /path/to/working_dir/job.properties -submit

    We've submitted the job, but we don't see any activity on the JobTracker? All I got was this funny bit of output:

        14-20120525161321-oozie-oracle

    This is because submitting a job to Oozie creates an entry for the job and places it in PREP status. What we got back, in essence, is a ticket for our workflow to ride the Oozie train. We're responsible for redeeming our ticket and running the job:

        oozie job -oozie http://url.to.oozie.server:port_number/ -start 14-20120525161321-oozie-oracle

    Of course, if we really want to run the job from the outset, we can change the "-submit" argument above to "-run". This will prep and run the workflow immediately.

    Takeaway

    So, there you have it: the somewhat laborious process of building an Oozie workflow. It's a bit tedious the first time out, but it does present a pair of real benefits to those of us who spend a great deal of time data munging. First, when new data arrives that requires the same processing, we already have the workflow defined and ready to run. Second, as we build up a set of useful action definitions over time, creating new workflows becomes quicker and quicker.
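    As promised above, a minimal coordinator sketch. It is illustrative only (the daily frequency, the dates, and the reuse of the job.properties variables are assumptions), but it shows the shape of the scheduling file:

        <coordinator-app name="WeatherManCoord" frequency="${coord:days(1)}"
                         start="2012-06-01T00:00Z" end="2013-01-01T00:00Z" timezone="UTC"
                         xmlns="uri:oozie:coordinator:0.2">
          <action>
            <workflow>
              <!-- the HDFS directory that holds workflow.xml -->
              <app-path>${nameNode}/user/${user.name}/${weatherRoot}</app-path>
            </workflow>
          </action>
        </coordinator-app>

    Submission uses the same oozie job command, with oozie.coord.application.path in job.properties in place of oozie.wf.application.path.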


  • I tried changing my folder icon in Windows XP and now whenever I click a folder it opens in a new window

    - by widgisoft
    On some random afternoon I figured I could do better than the boring yellow icon. After a failed attempt at changing it, I realised that whenever I open a folder it now opens in a "search" window. This happened because when I went to the "(file folder)" file type, the only action listed was "find", which is not supposed to be the default; upon saving the form, XP tried to do me a favour and set it as the new default. Now whenever I click a folder it opens the search window. Great.
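    For what it's worth, the default double-click action for folders lives in the registry; a hedged one-liner that usually restores normal behaviour (export the key first as a backup):

        reg add HKCR\Directory\shell /ve /d none /f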


  • User cannot book a room resource in Exchange 2010

    - by bakesale
    I'm having a problem with a specific user not being able to book a room resource. No error message is generated at all; the user adds the room as a meeting resource and it simply never registers on the resource's calendar. I have tested the resource with other accounts, and other rooms with this user, but it is only this room that has issues, and only for this user. This Exchange server was migrated from 2007 to 2010. All rooms except the problem room were created back on Exchange 2007; this new room was created on Exchange 2010. Maybe this clue has something to do with the problem? This is a paste of the mailbox's folder permissions:

        [PS] C:\Windows\system32>Get-MailboxFolderPermission -identity room125:\calendar

        RunspaceId   : 448ab44a-4c2a-4403-b46c-fce1106c0823
        FolderName   : Calendar
        User         : Default
        AccessRights : {Author}
        Identity     : Default
        IsValid      : True

        RunspaceId   : 448ab44a-4c2a-4403-b46c-fce1106c0823
        FolderName   : Calendar
        User         : Anonymous
        AccessRights : {None}
        Identity     : Anonymous
        IsValid      : True
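    A hedged thing to compare between the working rooms and the new one is the booking-automation setting; if the 2010-created room was never set to auto-accept, requests can silently sit in its inbox:

        Get-CalendarProcessing -Identity room125 | fl AutomateProcessing,AllBookInPolicy,BookInPolicy
        Set-CalendarProcessing -Identity room125 -AutomateProcessing AutoAccept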


  • Unable to get ls to recognise LS_OPTIONS or LS_COLORS?

    - by A T
    Trying to get --color=auto as the default ls argument.

        $ ls --version
        ls (GNU coreutils) 8.21
        …
        $ echo $LS_COLORS
        no=00:fi=00:di=00;34:ln=01;36:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:ex=00;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.gz=01;31:*.bz2=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.avi=01;35:*.fli=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.ogg=01;35:*.mp3=01;35:*.wav=01;35:
        $ echo $LS_OPTIONS
        --color=auto

    Unfortunately when I run ls I still get non-colored output (running ls --color=auto manually gives me colors). How do I make --color=auto a default ls argument?
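    For what it's worth: LS_COLORS only selects which colors are used once color output is enabled, and stock GNU ls does not read LS_OPTIONS at all (that is a distribution patch, e.g. on SUSE). The usual route is an alias in ~/.bashrc:

        alias ls='ls --color=auto'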


  • Database Web Service using Toplink DB Provider

    - by Vishal Jain
    With JDeveloper 11gR2 you can now create database-based web services using the JAX-WS Provider. The key differences between this and the already existing PL/SQL web services support are:

    - Based on the JAX-WS Provider
    - Supports SQL queries for creating web services
    - Supports table CRUD operations

    This is present as a new option in the New Gallery under 'Web Services'. When you invoke the New Gallery option, it presents you with three options to choose from. In this entry I will explain the options of creating a service based on SQL queries and on table CRUD operations.

    SQL Query based Service

    When you select this option, the 'Next' page asks you for the DB connection details. You can also choose whether you want SOAP 1.1 or 1.2 format; for this example I will proceed with SOAP 1.1, the default option. On the next page you can give the SQL query. The wizard supports bind variables, so you can parametrize your queries: give "?" as an input parameter you want to supply at runtime, and the "Bind Variables" button will be enabled, where you can specify the name and type of the variable. Finish the wizard; now you can test your service in the Analyzer and see that the bind variable you specified appears as an input parameter in the Analyzer input form.

    CRUD Operations

    For this, at step 2 of the wizard select the radio button "Generate Table CRUD Service Provider". At the next step, select the DB connection and the table for which you want to generate the default set of operations. Finish the wizard, then run the service in the Analyzer for a quick check and see that all the basic operations are exposed.


  • Ubuntu 12.10 Clock is wrong

    - by mardavi
    I have an issue with Ubuntu Quantal, as it shows the wrong time. It is completely messy: the right time from time.is is now 09.43 and my clock shows 17.48. I am using the ntp service and I already checked the timezone, and it is correct. I also checked the hardware clock through sudo hwclock --show and it is right too. I also tried sudo dpkg-reconfigure tzdata, but with bad luck. What else can I try? As asked, here is my /etc/ntp.conf:

        # /etc/ntp.conf, configuration for ntpd; see ntp.conf(5) for help
        driftfile /var/lib/ntp/ntp.drift

        # Enable this if you want statistics to be logged.
        #statsdir /var/log/ntpstats/
        statistics loopstats peerstats clockstats
        filegen loopstats file loopstats type day enable
        filegen peerstats file peerstats type day enable
        filegen clockstats file clockstats type day enable

        # Specify one or more NTP servers.
        # Use servers from the NTP Pool Project. Approved by Ubuntu Technical Board
        # on 2011-02-08 (LP: #104525). See http://www.pool.ntp.org/join.html for
        # more information.
        server 0.ubuntu.pool.ntp.org
        server 1.ubuntu.pool.ntp.org
        server 2.ubuntu.pool.ntp.org
        server 3.ubuntu.pool.ntp.org
        server time.nist.gov

        # Use Ubuntu's ntp server as a fallback.
        server ntp.ubuntu.com

        # Access control configuration; see /usr/share/doc/ntp-doc/html/accopt.html for
        # details. The web page <http://support.ntp.org/bin/view/Support/AccessRestrictions>
        # might also be helpful.
        #
        # Note that "restrict" applies to both servers and clients, so a configuration
        # that might be intended to block requests from certain clients could also end
        # up blocking replies from your own upstream servers.

        # By default, exchange time with everybody, but don't allow configuration.
        restrict -4 default kod notrap nomodify nopeer noquery
        restrict -6 default kod notrap nomodify nopeer noquery

        # Local users may interrogate the ntp server more closely.
        restrict 127.0.0.1
        restrict ::1

        # Clients from this (example!) subnet have unlimited access, but only if
        # cryptographically authenticated.
        #restrict 192.168.123.0 mask 255.255.255.0 notrust

        # If you want to provide time to your local subnet, change the next line.
        # (Again, the address is an example only.)
        #broadcast 192.168.123.255

        # If you want to listen to time broadcasts on your local subnet, de-comment the
        # next lines. Please do this only if you trust everybody on the network!
        #disable auth
        #broadcastclient

    In addition, the ntp service was not running when I turned on my laptop today.
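    A hedged first step, given that ntpd was not running at boot: ntpd refuses to step a clock that is hours off unless started with -g, so correct the large offset once by hand and then restart the service:

        sudo service ntp stop
        sudo ntpdate 0.ubuntu.pool.ntp.org   # one-shot correction of the large offset
        sudo service ntp start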


  • Persistence problem when installing USB Ubuntu variant using Windows

    - by Derek Redfern
    I'm part of a project called One2One2Go - we're developing a Live USB Ubuntu variant for use in schools in Somerville, MA. We have the project files compiled into an iso, and when the USB is created using the native Ubuntu Startup Disk Creator, it works fine. When created using a tool on a Windows machine (LiLi at linuxliveusb.com or liveusb-creator at fedorahosted.org/liveusb-creator), the USB works, but does not have persistence. This happens even if the creator is specifically set to allocate an area for persistent files. When comparing files on sticks created in Windows or Ubuntu, the one file that differs is syslinux/syslinux.cfg. I have printed the contents of the file below.

    Installed on Windows:

        DEFAULT nomodset
        LABEL debug
          menu label ^debug
          kernel /casper/vmlinuz
          append boot=casper xforcevesa initrd=/casper/initrd.gz --
        LABEL nomodset
          menu label ^nomodset
          kernel /casper/vmlinuz
          append boot=casper quiet splash nomodset initrd=/casper/initrd.gz --
        LABEL memtest
          menu label ^Memory test
          kernel /install/memtest
          append -
        LABEL hd
          menu label ^Boot from first hard disk
          localboot 0x80
          append -
        PROMPT 0
        TIMEOUT 1

    Installed on Ubuntu:

        DEFAULT nomodset
        LABEL debug
          menu label ^debug
          kernel /casper/vmlinuz
          append noprompt cdrom-detect/try-usb=true persistent boot=casper xforcevesa initrd=/casper/initrd.gz --
        LABEL nomodset
          menu label ^nomodset
          kernel /casper/vmlinuz
          append noprompt cdrom-detect/try-usb=true persistent boot=casper quiet splash nomodset initrd=/casper/initrd.gz --
        LABEL memtest
          menu label ^Memory test
          kernel /install/memtest
          append -
        LABEL hd
          menu label ^Boot from first hard disk
          localboot 0x80
          append -
        PROMPT 0
        TIMEOUT 1

    For troubleshooting reasons, I changed the Windows-created syslinux.cfg to match the one created by Ubuntu, and persistence worked on it. I think the problem is that on the stick created by Windows there is no "persistent" flag, but I don't know why. Is this a problem with our disk image or with the creator? How would I go about fixing this problem? Thanks in advance for your help. Derek Redfern


  • Windows Server 2008 - unable to bind any TCP port

    - by Kalphiter
    OS: Windows Server 2008 R2, Windows Firewall on (no effect when off). I have suddenly been plagued by an issue I cannot find anything similar to with a search. I am running about 20 game servers, each of which binds a UDP port and then a TCP port one above the UDP port. A day ago, new TCP binds suddenly stopped working. I have now confirmed that other applications cannot listen on most ports either. For example, I made a copy of a Java program of mine and tried the following ports: 33001, 23789, 89... completely random ports. As for applications that already have TCP bindings, such as HTTP and MySQL, 8080 was one of the only ports I discovered that could work, and only for Apache. If applications left their default port they could not bind, but they returned to normal once the port was back at the default.
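    A hedged place to look, assuming these netsh options exist on this build: Windows maintains a dynamic port range plus reserved port ranges, and a runaway reservation can make seemingly random ports unbindable:

        netsh int ipv4 show dynamicport tcp
        netsh int ipv4 show excludedportrange protocol=tcp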


  • How to boot windows 8 in a dual boot along with windows 7?

    - by GoldDove
    I installed a Windows 8 evaluation about a week ago. Usually it asks me every time I turn on my computer whether to boot into Windows 8 or Windows 7; the default was Windows 8 after 30 seconds. Just yesterday I changed that to default to Windows 7 after 5 seconds, and after changing the setting I went into Windows 8 and did my work. Today, when I turned on my computer, it failed to ask me which one to boot into; it simply boots directly into Windows 7. Is there any reason for this? Can I no longer boot into Windows 8?
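    A hedged check from an elevated command prompt (bcdedit is the standard tool here, though the exact entries vary per machine): confirm the Windows 8 loader entry still exists and that the boot menu timeout is not zero:

        bcdedit /enum
        bcdedit /timeout 10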

