Search Results

Search found 8384 results on 336 pages for 'lines'.


  • Ubuntu 12.04 Automounting ntfs partition

    - by kuzyt
    Ive looked everywhere to fix this problem but I cant seem to figure out why its doing this. I have the following /etc/fstab entry to mount a ntfs partition using ntfs-3g. UUID=01CD842715EC2180 /media/mediahd02 ntfs defaults,user,noexec,uid=1000,gid=1000,dmask=007,fmask=117 0 2 The volume label for this partition is "MEDIA02" So I have had no problems with the fstab mounting. The problem however is that it automounts again using MEDIA02 label. I'm not sure automounting is the right term for this as its just an empty directory. Deleting this directory and rebooting is causing it to appear again. So listing /media I see both MEDIA02 & mediahd02 htpc@htpc:~$ cat /etc/fstab # /etc/fstab: static file system information. # # Use 'blkid' to print the universally unique identifier for a # device; this may be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5). # # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc nodev,noexec,nosuid 0 0 # / was on /dev/sdf1 during installation UUID=ec027544-b0e7-4145-99a4-905543a9781a / ext4 errors=remount-ro,noatime,discard 0 1 # swap was on /dev/sdf5 during installation UUID=1794409e-723f-41ac-9f31-ae059f377613 none swap sw 0 0 # Added all the lines below this tmpfs /tmp tmpfs defaults,noatime,mode=1777 0 0 UUID=0F70-3B06 /media/mediahd01 vfat defaults,user,noexec,uid=1000,gid=1000,dmask=007,fmask=117 0 2 UUID=01CD842715EC2180 /media/mediahd02 ntfs defaults,user,noexec,uid=1000,gid=1000,dmask=007,fmask=117 0 2 htpc@htpc:~$ cat /etc/mtab /dev/sdc1 / ext4 rw,noatime,errors=remount-ro,discard 0 0 proc /proc proc rw,noexec,nosuid,nodev 0 0 sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0 none /sys/fs/fuse/connections fusectl rw 0 0 none /sys/kernel/debug debugfs rw 0 0 none /sys/kernel/security securityfs rw 0 0 udev /dev devtmpfs rw,mode=0755 0 0 devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0 tmpfs /tmp tmpfs rw,noatime,mode=1777 0 0 tmpfs /run tmpfs rw,noexec,nosuid,size=10%,mode=0755 0 0 none /run/lock tmpfs rw,noexec,nosuid,nodev,size=5242880 0 0 none /run/shm tmpfs rw,nosuid,nodev 0 0 /dev/sdc1 /media/usbhd-sdc1 ext4 rw,relatime 0 0 /dev/sdb1 /media/mediahd02 fuseblk rw,noexec,nosuid,nodev,allow_other,default_permissions,blksize=4096 0 0 /dev/sda5 /media/mediahd01 vfat rw,noexec,nosuid,nodev,uid=1000,gid=1000,dmask=007,fmask=117 0 0 /dev/sdh1 /media/Windows_7 fuseblk rw,nosuid,nodev,allow_other,blksize=4096 0 0 Can someone shed some light as to why its doing this ?

    Read the article

  • Programming Language, Turing Completeness and Turing Machine

    - by Amumu
    A programming language is said to be Turing complete if it can successfully simulate a universal TM. Let's take a functional programming language as an example. In functional programming, functions have the highest priority over anything: you can pass functions around like any primitives or objects. This is called a first-class function. In functional programming, a function does not produce side effects, i.e. it does not output strings onto the screen or change the state of variables outside of its scope. Each function has a copy of its own objects if the objects are passed in from the outside, and the copied objects are returned once the function finishes its job. Each function written purely in functional style is completely independent of anything outside of it. Thus, the complexity of the overall system is reduced. This is referred to as referential transparency. In functional programming, each function can keep the values of its local variables even after the function exits. This is done by the garbage collector. The values can be reused the next time the function is called again. This is called memoization. A function usually should solve only one thing. It should model only one algorithm to answer a problem. Do you think that a function in a functional language with the above properties simulates a Turing machine? Functions (= algorithms = Turing machines) can be passed around as input and returned as output; a TM also accepts and simulates other TMs. Memoization models the set of states of a Turing machine. The memoized variables can be used to determine the state of a TM (i.e. which lines to execute, what behavior it should take in a given state ...). Also, you can use memoization to simulate your internal tape storage. In a language like C/C++, when a function exits, you lose all of its internal data (unless you store it elsewhere outside of its scope). The set of symbols is the set of all strings in a programming language, which is the higher-level, human-readable version of machine code (opcodes). The start state is the beginning of the function. However, with memoization, the start state can be determined by memoization or, if you want, by a switch/if-else statement in an imperative programming language. But then, you can't... The final accepting state is when the function returns a value, or rejects if an exception happens. Thus, the function (= algorithm = TM) is decidable. Otherwise, it's undecidable. I'm not sure about this. What do you think? Is my thinking right on all of this? The reason I bring up functions in functional programming is that I think they are closer to the idea of a TM. What experience with other programming languages do you have that gives you a feel for the idea of a TM and the ideas of Computer Science in general? Can you specify how you think?
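
    As a rough illustration of the two properties leaned on above (first-class functions and memoization), here is a minimal Python sketch. It is not taken from the question, and the names are purely illustrative; the point is only that cached values outlive individual calls and that functions travel around as ordinary values.

      def memoize(fn):
          cache = {}  # survives across calls, like the "kept" local values described above
          def wrapped(n):
              if n not in cache:
                  cache[n] = fn(n)
              return cache[n]
          return wrapped

      @memoize
      def fib(n):
          return n if n < 2 else fib(n - 1) + fib(n - 2)

      def apply_twice(f, x):  # functions passed and returned like any other value
          return f(f(x))

      print(fib(30))          # repeated sub-calls are answered from the cache
      print(apply_twice(fib, 10))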

    Read the article

  • Swiss Re increases data warehouse performance and deploys in record time

    - by KLaker
    Great information on yet another data warehouse deployment on Exadata. A little background on Swiss Re: In 2002, Swiss Re established a data warehouse for its client markets and products to gather reinsurance information across all organizational units into an integrated structure. The data warehouse provided the basis for reporting at the group level with drill-down capability to individual contracts, while facilitating application integration and data exchange by using common data standards. Initially focusing on property and casualty reinsurance information only, it now includes life and health reinsurance, insurance, and nonlife insurance information. Key highlights of the benefits that Swiss Re achieved by using Exadata:
    - Reduced the time to feed the data warehouse and generate data marts by 58%
    - Reduced average runtime by 24% for standard reports, comfortably loading two data warehouse refreshes per day with incremental feeds
    - Freed up technical experts by significantly minimizing time spent on tuning activities
    Most importantly, this was one of the fastest project deployments in Swiss Re's history. They went from installation to production in just four months! What is truly surprising is that it only took two weeks from power-on to testing the machine with full data volumes! Business teams at Swiss Re are now able to fully exploit up-to-date analytics across property, casualty, life, health insurance, and reinsurance lines to identify successful products. These points are highlighted in the following quotes from Dr. Stephan Gutzwiller, Head of Data Warehouse Services at Swiss Re: "We were operating a complete Oracle stack, including servers, storage area network, operating systems, and databases that was well optimized and delivered very good performance over an extended period of time. When a hardware replacement was scheduled for 2012, Oracle Exadata was a natural choice, and the performance increase was impressive. It enabled us to deliver analytics to our internal customers faster, without hiring more IT staff." "The high quality data that is readily available with Oracle Exadata gives us the insight and agility we need to cater to client needs. We also can continue re-engineering to keep up with the increasing demand without having to grow the organization. This combination creates excellent business value." Our full press release is available here: http://www.oracle.com/us/corporate/customers/customersearch/swiss-re-1-exadata-ss-2050409.html. If you want more information about how Exadata can increase the performance of your data warehouse visit our home page: http://www.oracle.com/us/products/database/exadata-database-machine/overview/index.html

    Read the article

  • How is programming affected by spatial aptitude?

    - by natli
    The longer I work on a project, the less clear it becomes. It's like I cannot separate various classes/objects anymore in my head. Everything starts mixing up, and it's extremely hard to take it all apart again. I start putting functions in classes where they really don't belong, and make silly mistakes such as writing code that I later find was 100% obsolete; things are no longer clearly mappable in my head. It isn't until I take a step back for several hours (or days sometimes!) that I can actually see what's going on again, and be productive. I usually try to fight through this; I am so passionate about coding that I wouldn't for the life of me know what else I could be doing. This is when stuff can get really weird: I get so up in my head that I sort of lose touch with reality (to some extent) in that various actions, such as pouring a glass of water, no longer happen on a conscious level. It happens on autopilot, during which pretty much all of my conscious concentration (is that even a thing?) is devoted to borderline pointless problem solving (trying to separate elements of code). It feels like a losing battle. So I took an IQ test a while ago (Wechsler Adult Intelligence Scale I believe it was) and it turned out my spatial aptitude was quite low. I still got a decent score, just above average, so I won't have to poke things with a stick for a living, but I am a little worried that this is such a handicap when writing/engineering computer programs that I won't ever be able to do it seriously or professionally. I am very much interested in what other people think of this... could a low spatial aptitude be the cause of the problems described above? Maybe I should be looking more along the lines of ADD or something similar, because I did get diagnosed with ADD at the age of 17 (5 years ago), but the medicine I received didn't seem to affect me that much so I never took it all that seriously. Sorry if I got a little off topic there, I know this is not a mental help board, and the question should be clear: How is programming affected by spatial aptitude? As far as I know people are born with low/med/high spatial aptitude, so I think it's interesting to find out if the more fortunate are better programmers by birthright.

    Read the article

  • What is the correct way to use g_signal_connect() in C++ for dynamic unity quicklists?

    - by hakermania
    I want to make my application use dynamic unity quicklists. For building my application I am using C++ and the QtCreator IDE. When a menu action is triggered I want to be able to have access to a non-static function of my MainWindow class so as to be able to update the Graphical User Interface which can be accessed from inside 'normal' MainWindow's functions. So, I am building up my quicklist like this (mainwindow.cpp): void MainWindow::enable_unity_quicklist(){ Unity_Menu = dbusmenu_menuitem_new(); dbusmenu_menuitem_property_set_bool (Unity_Menu, DBUSMENU_MENUITEM_PROP_VISIBLE, FALSE); Unity_Stop = dbusmenu_menuitem_new(); dbusmenu_menuitem_property_set(Unity_Stop, DBUSMENU_MENUITEM_PROP_LABEL, "Stop"); dbusmenu_menuitem_child_append (Unity_Menu, Unity_Stop); g_signal_connect (Unity_Stop, DBUSMENU_MENUITEM_SIGNAL_ITEM_ACTIVATED, G_CALLBACK(&fake_callback), (gpointer)this); if(!unity_entry) unity_entry = unity_launcher_entry_get_for_desktop_id("myapp.desktop"); unity_launcher_entry_set_quicklist(unity_entry, Unity_Menu); dbusmenu_menuitem_property_set_bool(Unity_Menu, DBUSMENU_MENUITEM_PROP_VISIBLE, true); dbusmenu_menuitem_property_set_bool(Unity_Stop, DBUSMENU_MENUITEM_PROP_VISIBLE, true); } void MainWindow::fake_callback(gpointer data){ MainWindow* m = (MainWindow*)data; m->on_stopButton_clicked(); } void MainWindow::on_stopButton_clicked(){ //stopping the process... } mainwindow.h: private slots: void enable_unity_quicklist(); void on_stopButton_clicked(); public slots: static void fake_callback(gpointer data); This suggestion was taken from http://old.nabble.com/Using-g_signal_connect-in-class-td18461823.html The program crashes immediately after I choose the 'Stop' action from the Unity Quicklist. Debugging the program shows that I am not able to access anything MainWindow related inside the on_stopButton_clicked() without crashing. For example, it crashes when doing this check (which is the first 2 lines of code inside this function): if (!ui->stopButton->isEnabled()) return; I have also tested lots of other things that I found at the internet, but nothing of them worked. One interesting solution would be to use gtkmm (http://developer.gnome.org/gtkmm-tutorial/stable/sec-connecting-signal-handlers.html.en) but I am not used at all working on GTK applications (I work solely in Qt) and I don't know if this even suits to my occasion. A compilable example indicating what the problem is can be found at: http://ubuntuone.com/7iKA3wnPmWVp8YNNDLlVQI (3.2Kb) If you are not familiar with the QtCreator IDE, you can compile with the following commands, as long as you have all the needed libraries: cd dynamic_unity_quicklists_test; qmake -project; qmake; make

    Read the article

  • SEO to ensure visibility for a narrow, non-competitive, non-commercial site

    - by hen3ry
    I'm webmaster of a non-commercial site in English. A non-native-English speaker asked me why our site doesn't produce hits in Google searches she conducts for relevant keywords in her native language. I asked her for a list of keywords in her native language, and I naively tried inserting those into the META info in the page headers and waited a couple of weeks. No help. A little searching informed me that Google doesn't use the META info, and has not done so for a very long time. D'oh! To be entirely concrete, suppose the StackExchange folks want Russian speakers to find this site, Pro Webmasters. The direct translation in Russian of "webmaster" --thanks, Google Translator-- is: "вебмастер". (Not sure this will render properly, but that's not essential to my question.) Assuming Pro Webmasters has a common template for all pages it generates, inserting "вебмастер" into the Keywords META for that template won't help, it seems. What could StackExchange do to make this site visible to users searching with the Russian keyword "вебмастер"? Pretty much all the advice I've seen boils down to this, if I understand correctly: use the desired search term often (but not too often) among site content, and the problem will be solved. That's great, but I don't think sprinkling "вебмастер" visibly all over Pro Webmasters is going to fly. Just for completeness, crawlers must be long immune to the invisible-to-visitors scheme, e.g., format "вебмастер" in a tiny text size in a color the same as an existing background, e.g. white-over-white. Or, put that text inside a div styled: ' style="visibility: hidden" '. Probably some other equivalents. I can only think of one slightly effective method, along these lines: place an unobtrusive link on the common template to a page titled "for international users", and on that page list desired synonyms for "webmaster" in various languages. A test case --admittedly, just one-- using my site implies that a Google search for "international users" вебмастер will produce a hit for this page, and thus make the site minimally visible, despite the fact that the page will almost never be visited. At the moment, anyway. Note: All the SEO discussions I have found so far are about competitive and --almost certainly-- commercial sites. To repeat: my site is non-commercial, and it is about an obscure, narrow topic that is of interest to only a small number of people worldwide. This isn't about clawing our way to the top of competitive rankings, just making this content minimally visible to interested non-native-English speakers. Ideas? TIA

    Read the article

  • how do I set quad buffering with jogl 2.0

    - by tony danza
    I'm trying to create a 3d renderer for stereo vision with quad buffering with Processing/Java. The hardware I'm using is ready for this so that's not the problem. I had a stereo.jar library in jogl 1.0 working for Processing 1.5, but now I have to use Processing 2.0 and jogl 2.0 therefore I have to adapt the library. Some things are changed in the source code of Jogl and Processing and I'm having a hard time trying to figure out how to tell Processing I want to use quad buffering. Here's the previous code: public class Theatre extends PGraphicsOpenGL{ protected void allocate() { if (context == null) { // If OpenGL 2X or 4X smoothing is enabled, setup caps object for them GLCapabilities capabilities = new GLCapabilities(); // Starting in release 0158, OpenGL smoothing is always enabled if (!hints[DISABLE_OPENGL_2X_SMOOTH]) { capabilities.setSampleBuffers(true); capabilities.setNumSamples(2); } else if (hints[ENABLE_OPENGL_4X_SMOOTH]) { capabilities.setSampleBuffers(true); capabilities.setNumSamples(4); } capabilities.setStereo(true); // get a rendering surface and a context for this canvas GLDrawableFactory factory = GLDrawableFactory.getFactory(); drawable = factory.getGLDrawable(parent, capabilities, null); context = drawable.createContext(null); // need to get proper opengl context since will be needed below gl = context.getGL(); // Flag defaults to be reset on the next trip into beginDraw(). settingsInited = false; } else { // The following three lines are a fix for Bug #1176 // http://dev.processing.org/bugs/show_bug.cgi?id=1176 context.destroy(); context = drawable.createContext(null); gl = context.getGL(); reapplySettings(); } } } This was the renderer of the old library. In order to use it, I needed to do size(100, 100, "stereo.Theatre"). Now I'm trying to do the stereo directly in my Processing sketch. Here's what I'm trying: PGraphicsOpenGL pg = ((PGraphicsOpenGL)g); pgl = pg.beginPGL(); gl = pgl.gl; glu = pg.pgl.glu; gl2 = pgl.gl.getGL2(); GLProfile profile = GLProfile.get(GLProfile.GL2); GLCapabilities capabilities = new GLCapabilities(profile); capabilities.setSampleBuffers(true); capabilities.setNumSamples(4); capabilities.setStereo(true); GLDrawableFactory factory = GLDrawableFactory.getFactory(profile); If I go on, I should do something like this: drawable = factory.getGLDrawable(parent, capabilities, null); but drawable isn't a field anymore and I can't find a way to do it. How do I set quad buffering? If I try this: gl2.glDrawBuffer(GL.GL_BACK_RIGHT); it obviously doesn't work :/ Thanks.

    Read the article

  • Who Are the BI Users in Your Neighborhood?

    - by [email protected]
    By Brian Dayton on March 19, 2010 10:52 PM Forrester's Boris Evelson recently wrote a blog titled "Who are the BI Personas?" that I enjoyed for a number of reasons. It's a quick read, easy to grasp and (refreshingly) focuses on the users of technology VS the technology. As Evelson admits, he meant to keep the reference chart at a high-level because there are too many different permutations and additional sub-categories to make such a chart useful. For me, I wouldn't head into the technical permutations but more the contextual use of BI and the issues that users experience. My thoughts brought up more questions than answers such as: Context: - HOW: With the exception of the "Power User" persona--likely some sort of business or operations analyst? - WHEN: Are they using the information to make real-time decisions on the front lines (a customer service manager or shipping/logistics VP) or are they using this information for cumulative analysis and business planning? Or both? - WHERE: What areas of the business are more or less likely to rely on BI across an organization? Human Resources, Operations, Facilities, Finance--- and why are some more prone to use data-driven analysis than others? Issues: - DELAYS & DRAG ON IT?: One of the persona characteristics Evelson calls out is a reliance on IT. Every persona except for the "Power User" has a heavy reliance on IT for support. What business issues or delays does that cause to users? What is the drag on IT resources who could potentially be creating instead of reporting? - HOW MANY CLICKS: If BI is being used within the context of a transaction (sales manager looking for upsell opportunities as an example) is that person getting the information within the context of that action or transaction? Or are they minimizing screens, logging into another application or reporting tool, running queries, etc.? Who are the BI Users in your neighborhood or line of business? Do Evelson's personas resonate--and do the tools that he calls out (he refers to it as "BI Style") resonate with what your personas have or need? Finally, I'm very interested if BI use is viewed as a bolt-on...or an integrated part of your daily enterprise processes?

    Read the article

  • The only metric with any value

    - by Malcolm Anderson
    There's a lot of talk in the Scrum world about metrics. What's the velocity? How big is a story point? How many story points is that team producing per man hour? People are sadly missing the whole point. Take your measurements up a level or two. When you get down to it, the only metric that makes any difference is ROI. The problem is that oftentimes the developers work in a dark hole, far removed from the realities of how exactly they get paid. A bigger problem is that mid-level managers tend to be further removed from the realities of ROI. A lot of times mid-level managers get tasked with tracking their teams' "productivity" using things like "lines of code" or "completeness of the productivity reports." Monetize your projects and then track your velocity against business value (real dollars). When your development teams can say, "Last year, our team cost the business 2 million dollars and we know that because of our efforts, the company saved 2 million dollars in waste and increased revenues by another 4 million dollars," at that point you have just moved your development team from a cost center to a profit center. You might have to give them a raise, but they have demonstrated that they have earned it.

    Read the article

  • When a problem is resolved

    - by Rob Farley
    This month’s T-SQL Tuesday is hosted by Jen McCown, and she’s picked the topic of Resolutions. It’s a new year, and she’s thinking about what people have resolved to do this year. Unfortunately, I’ve never really done resolutions like that. I see too many people resolve to quit smoking, or lose weight, or whatever, and fail miserably. I’m not saying I don’t set goals, but it’s not a thing for New Year. The obvious joke is “1920x1080” as a resolution, but I’m not going there. I think Resolving is a strange word. It makes it sound like I’m having to solve a problem a second time, when actually, it’s more along the lines of solving a problem well enough for it to count as finished. If something has been resolved, a solution has been provided. There is a resolution, through the provision of a solution. It’s a strangeness of English. When I look up the word resolution at dictionary.com, it has 12 options, including “settling of a problem”. There’s a finality about resolution. If you resolve to do something, you’re saying “Yes. This is a done thing. I’m resolving to do it, which means that it may as well be complete already.” I like to think I resolve problems, rather than just solving them. I want my solving to be final and complete. If I tune a query, I don’t want to find that I’m back in there, re-tuning it at some point. Strangely, if I re-solve a problem, that implies that I didn’t resolve it in the first place. I only solved it. Temporarily. We “data-folk” live in a world where the most common answer is “It depends.” Frustratingly, the thing an answer depends on may still be changing in the system in question. That probably means that any solution that is put in place may need reinvestigating at some point later. So do I resolve things? Yes. Am I Chuck Norris, and solve things so well the world would break first? No. Do these two claims happily sit beside each other? No, unfortunately not. But I happily take responsibility for things, and let my clients depend on me to see it through. As far as they are concerned, it is resolved. And so I resolve to keep resolving, right through 2011.

    Read the article

  • Legality of similar games

    - by Jamie Taylor
    This is my first question on GD.SE, and I hope it's in the right place. A little background: I'm an amateur (read: not explicitly employed to develop games, but am employed as a software developer) game developer and took a ComSci with Games Development degree. My Question: What is the legal situation/standpoint of creating a copycat title? I know that there are only N number of ways of solving a problem, and N number of ways to design a piece of software. Say that an independent developer designed a copycat game (a Tetris clone in this example) for instance, and decided to use that game to generate income for themselves as well as interest for their other products. Say the developer adds a disclaimer into the software along the lines of "based on , originally released c. by ." Are there any legal problems/grey areas with the developer in this example releasing this game, commercially? Would they run into legal problems? Should the developer in this example expect cease and desist orders or law suit claims from original publishers? Have original publishers been known to, effectively, kill independent projects because they are a little too close to older titles? I know that there was, at least, one attempt by a group of independent developers to remake Sonic the Hedgehog 2 and Sega shut them down. I also know of Sega shutting down development of the independent Streets of Rage Remake. I know that "but it's an old game, your honour," isn't a great legal standpoint when it comes to defending yourself. But, could an independent developer have a law suit filed against them for re-implementing an older title in a new way? I know that there are a lot of copycat versions of the older titles like Tetris available on app stores (and similar services), and that it would be very difficult for a major publisher to shut them all down. Regardless of this, is making a Tetris (or other game) copycat/clone illegal? We were taught lots of different things at University, but we never covered copyright law. I'm presuming that their thought behind it was "IF these students get jobs in games development, they wont need to know anything about the legal side of it, because their employers will have legal departments... presumably" tl;dr Is it illegal to create a clone or copycat of an old title, and make money from it?

    Read the article

  • Randomly generate directed graph on a grid

    - by Talon876
    I am trying to randomly generate a directed graph for the purpose of making a puzzle game similar to the ice sliding puzzles from Pokemon. This is essentially what I want to be able to randomly generate: http://bulbanews.bulbagarden.net/wiki/Crunching_the_numbers:_Graph_theory. I need to be able to limit the size of the graph in an x and y dimension. In the example given in the link, it would be restricted to an 8x4 grid. The problem I am running into is not randomly generating the graph, but randomly generating a graph, which I can properly map out in a 2d space, since I need something (like a rock) on the opposite side of a node, to make it visually make sense when you stop sliding. The problem with this is that sometimes the rock ends up in the path between two other nodes or possibly on another node itself, which causes the entire graph to become broken. After discussing the problem with a few people I know, we came to a couple of conclusions that may lead to a solution. Including the obstacles in the grid as part of the graph when constructing it. Start out with a fully filled grid and just draw a random path and delete out blocks that will make that path work. The problem then becomes figuring out which ones to delete to avoid introducing an additional, shorter path. We were also thinking a dynamic programming algorithm may be beneficial, though none of us are too skilled with creating dynamic programming algorithms from nothing. Any ideas or references about what this problem is officially called (if it's an official graph problem) would be most helpful. Here are some examples of what I have accomplished so far by just randomly placing blocks and generating the navigation graph from the chosen start/finish. The idea (as described in the previous link) is you start at the green S and want to get to the green F. You do this by moving up/down/left/right and you continue moving in the direction chosen until you hit a wall. In these pictures, grey is a wall, white is the floor, and the purple line is the minimum length from start to finish, and the black lines and grey dots represented possible paths. Here are some bad examples of randomly generated graphs: http://i.stack.imgur.com/9uaM6.png Here are some good examples of randomly generated (or hand tweaked) graphs: i.stack.imgur.com/uUGeL.png (can't post another link, sorry) I've also seemed to notice the more challenging ones when actually playing this as a puzzle are ones which have lots of high degree nodes along the minimum path.
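
    To make the structure being described concrete, here is a minimal Python sketch (not the poster's code; the grid encoding and thresholds are invented for illustration). From every open cell it slides in each direction until a rock or the grid edge is hit, records the stopping cell as a directed edge, then BFS finds the minimum number of moves from start to finish; a crude generator just scatters rocks and keeps layouts whose shortest solution is long enough.

      from collections import deque
      import random

      DIRS = [(1, 0), (-1, 0), (0, 1), (0, -1)]

      def slide(grid, r, c, dr, dc):
          # keep moving while the next cell is inside the grid and open ('.')
          rows, cols = len(grid), len(grid[0])
          while 0 <= r + dr < rows and 0 <= c + dc < cols and grid[r + dr][c + dc] == '.':
              r, c = r + dr, c + dc
          return r, c

      def build_graph(grid):
          # directed edge from each open cell to where each of the 4 slides stops
          edges = {}
          for r, row in enumerate(grid):
              for c, cell in enumerate(row):
                  if cell == '.':
                      edges[(r, c)] = {slide(grid, r, c, dr, dc) for dr, dc in DIRS}
          return edges

      def shortest_path_len(grid, start, finish):
          edges = build_graph(grid)
          seen, frontier = {start}, deque([(start, 0)])
          while frontier:
              node, dist = frontier.popleft()
              if node == finish:
                  return dist
              for nxt in edges.get(node, ()):
                  if nxt not in seen:
                      seen.add(nxt)
                      frontier.append((nxt, dist + 1))
          return None  # finish not reachable with this rock layout

      def random_puzzle(rows=4, cols=8, rocks=6, min_moves=4, tries=1000):
          # rejection sampling: keep only layouts with a long-enough minimum path
          for _ in range(tries):
              grid = [['.'] * cols for _ in range(rows)]
              cells = [(r, c) for r in range(rows) for c in range(cols)]
              for r, c in random.sample(cells, rocks):
                  grid[r][c] = '#'
              open_cells = [(r, c) for r, c in cells if grid[r][c] == '.']
              start, finish = random.sample(open_cells, 2)
              n = shortest_path_len(grid, start, finish)
              if n is not None and n >= min_moves:
                  return grid, start, finish, n
          return None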

    Read the article

  • PeopleSoft Grants & the Federal Agency Letter of Credit Draw Changes

    - by Mark Rosenberg
    For decades, most, if not all, US Federal agencies that sponsor research allowed grant recipients to request and receive payments using pooled accounts, commonly known as pooled letter of credit (LOC) draws. This enabled organizations, such as universities and hospitals, fast and efficient access to reimbursement of the expenditures they incurred conducting research across a portfolio of grants. To support this business practice, the PeopleSoft Grants solution has delivered an LOC Draw report to provide the total request amount along with all of the supporting invoice details for reconciliation and audit purposes. Now, in an attempt to provide greater transparency, eliminate fraud, strengthen accountability for grant-related financial transactions, and simplify grant award closeout, many US Federal sponsors are transitioning from the “pooling” letter of credit draw method to requesting on a “grant-by-grant” basis. The National Science Foundation, the second largest issuer of Federal awards, already transitioned to detailed grant draws in 2013. And, in response to the U.S. Department of Health and Human Services (HHS) directive to HHS-supported Agencies, the largest Federal awards sponsor, the National Institutes of Health (NIH), will fully transition to the new HHS subaccount draw method. This will require NIH award recipients to request payments based on actual expenses incurred on an award-by-award basis. NIH is expected to fully transition to this new draw method by the end of Federal fiscal year 2015.  (The NIH had planned to fully transition to this new method by the end of fiscal 2014; however, the impact to institutions was deemed to be significant enough that a reprieve was recently granted.) In light of these new Federal draw requirements, we have recently released these new features to aid our customers on both PeopleSoft Grants releases 9.1 and 9.2:1. Federal Award Identification Number on the Proposal and Award Profile 2. Letter of credit fields on contract lines to support award basis draws and comply with Federal close out mandates3. Process to produce both pro forma and final LOC Draw Reports in BI Publisher report format4. Subacccount ID field on the LOC Summary and a new BI Publisher version of the LOC Summary report 5. Added Subaccount Field and contract info to be displayed on the LOC summary page6. Ability to generate by a variety of dimensions pro forma and invoiced draw listings 7. Queries for generation and manipulation of data to upload into sponsor payment request systems and perform payment matching8. Contracts LOC Close Out query to quickly review final balances prior to initiating final draws and preparing Federal Financial Reports prior to close The PeopleSoft Development team actively monitors this and other major Federal changes and continues working closely with the Grants Product Advisory Group of the Higher Education User Group to ensure a clear understanding of what our customers need in order to transition to new approaches for doing business with the Federal government. For more information regarding the enhancements to the PeopleSoft Grants solution, existing customers can login to My Oracle Support and review the Enhancements to Letter of Credit Process (Doc ID 1912692.1) associated with resolution ID 904830. This enhanced LOC functionality is available in both PeopleSoft FSCM 9.1 Bundle #31 and PeopleSoft FSCM 9.2 Update Image 8.

    Read the article

  • Compiling for T4

    - by Darryl Gove
    I've recently had quite a few queries about compiling for T4 based systems. So it's probably a good time to review what I consider to be the best practices. Always use the latest compiler. Being in the compiler team, this is bound to be something I'd recommend But the serious points are that (a) Every release the tools get better and better, so you are going to be much more effective using the latest release (b) Every release we improve the generated code, so you will see things get better (c) Old releases cannot know about new hardware. Always use optimisation. You should use at least -O to get some amount of optimisation. -xO4 is typically even better as this will add within-file inlining. Always generate debug information, using -g. This allows the tools to attribute information to lines of source. This is particularly important when profiling an application. The default target of -xtarget=generic is often sufficient. This setting is designed to produce a binary that runs well across all supported platforms. If the binary is going to be deployed on only a subset of architectures, then it is possible to produce a binary that only uses the instructions supported on these architectures, which may lead to some performance gains. I've previously discussed which chips support which architectures, and I'd recommend that you take a look at the chart that goes with the discussion. Crossfile optimisation (-xipo) can be very useful - particularly when the hot source code is distributed across multiple source files. If you're allowed to have something as geeky as favourite compiler optimisations, then this is mine! Profile feedback (-xprofile=[collect: | use:]) will help the compiler make the best code layout decisions, and is particularly effective with crossfile optimisations. But what makes this optimisation really useful is that codes that are dominated by branch instructions don't typically improve much with "traditional" compiler optimisation, but often do respond well to being built with profile feedback. The macro flag -fast aims to provide a one-stop "give me a fast application" flag. This usually gives a best performing binary, but with a few caveats. It assumes the build platform is also the deployment platform, it enables floating point optimisations, and it makes some relatively weak assumptions about pointer aliasing. It's worth investigating. SPARC64 processor, T3, and T4 implement floating point multiply accumulate instructions. These can substantially improve floating point performance. To generate them the compiler needs the flag -fma=fused and also needs an architecture that supports the instruction (at least -xarch=sparcfmaf). The most critical advise is that anyone doing performance work should profile their application. I cannot overstate how important it is to look at where the time is going in order to determine what can be done to improve it. I also presented at Oracle OpenWorld on this topic, so it might be helpful to review those slides.

    Read the article

  • What kind of position matches my skills, experience and interests? [closed]

    - by Ryan
    I work in a large firm and my current job covers a variety of different duties. Due to several factors I am seriously considering finding a new job (hours, pay-cut, limited career growth). I have worked for the company nearly 4 years and almost 2 years ago I transitioned into more of a business analyst role (previously I was working in a client facing role for our audit group). In this role I have overseen all aspects of the development of a large scale re-platforming of our firm's key tool in analyzing investment portfolios. I gathered requirements, wrote specs, designed the UI and functionality, worked closely with developers (onshore and offshore) to see to it the implementation was correct, managed schedules and was the lead tester. This is a large scale system used by thousands of people around the world. I've also written Excel macros, reports in SQL, given trainings, written technical manuals, interfaced with senior managers and partners, etc. I've been on a couple interviews sporadically, most of which were for positions aimed at higher management consulting type positions, dealing with strategy, overall process management, project management, etc. What really interests me is the technical stuff and overseeing a project from beginning to end (although I would rather not have to do so many of the tasks on my own). I genuinely like a lot of what I do, but the company culture and attitude towards overworking people combined with my recent pay-cut (my overtime was cut due to a promotion to a higher level) has lead me to want to seek work elsewhere. The problem is - what type of work could I realistically do? I feel like traditional business analysis is too much business and not enough tech stuff, and I've really taken a shine lately to beefing up my programming abilities and creating small programs to automate things around work. I also feel that because my actual years of experience as a business analyst (figure 1.5 years realistically) puts me at a junior level doing a lot of grunt requirements gathering, when the work that I have been doing with my current company is more in line with what a Program Manager does (depending on your definition I guess). So in reality, when I'm job hunting I get a bit perplexed because I feel like the traditional BA stuff wouldn't really suit me, and even if it did it's usually something along the lines of 5-10 years experience for the type of work that is similar to what I've done (and I've also found most BA jobs to be contract only which at the moment I'm not too keen on). Program Manager is something that interests me, but again I feel like the experience is lacking because that's a much more senior position. Am I in some kind of career no-man's land? Any idea what would best suit me given my experience and abilities, as well as my interests? I plan to keep learning programming on the side, but don't expect to get a job being a straight programmer given my relative inexperience with programming.

    Read the article

  • Cron job running successfully suddenly reports script is not found

    - by Ted B
    What might cause cron to suddenly report a file it is supposed to run is "not found," when the file hasn't been touched, and in fact, the entire system hasn't been touched since it last ran successfully? I have a cronjob schedule I define by doing sudo crontab -e In it, I have dozens of cron jobs that run successfully.. I do not have a PATH specified, and I use absolute paths to call all my scheduled scripts, setting the PATH in them as needed. I do not specify a SHELL in the crontab. All scripts identify the shell as their first line. Without me touching the system, a particular job defined in the middle of other jobs will suddenly stop running. To debug this, I added an output redirection to a log file. In that, the output clearly shows the output of the script successfully running time after time for weeks, and then suddenly the following appears: /bin/sh: /home/iupress/bin/sync-email_images: not found /bin/sh: /home/iupress/bin/sync-email_images: not found /bin/sh: /home/iupress/bin/sync-email_images: not found /bin/sh: /home/iupress/bin/sync-email_images: not found /bin/sh: /home/iupress/bin/sync-email_images: not found /bin/sh: /home/iupress/bin/sync-email_images: not found /bin/sh: /home/iupress/bin/sync-email_images: not found /bin/sh: /home/iupress/bin/sync-email_images: not found /bin/sh: /home/iupress/bin/sync-email_images: not found If I do the ls command, copying and pasting that exact path from the error message, it clearly reports the file is still there (no surprise). Yet the log continues to report that file is "not found" until I take action. I can run the script manually and it runs just fine. If I do sudo crontab -e and save the file, the job runs on the next scheduled time, putting its output in the log, no longer reporting the script is "not found". It seems to me the contents of the script trying to be run are irrelevant since cron doesn't even process the file because it is "not found". I have a job scheduled below the one that is encountering this problem that I know is continuing to run, because I have its output mailed to me. So I know cron is running and continues to run at least one other job, even after it suddenly reports this job's script is "not found". All my lines end with a newline. I had no periods in the crontab until I added the redirection to a log file. I have now added a PATH specification, but left the absolute paths in the jobs. Unfortunately, I have no idea if and when this problem will occur. It will likely be weeks from now. By the way, I am running a script to syncronize the clock, and I see the time is exactly what it should be.

    Read the article

  • Techniques to re-factor garbage and maintain sanity?

    - by Incognito
    So I'm sitting down to a nice bowl of c# spaghetti, and need to add something or remove something... but I have challenges everywhere from functions passing arguments that doesn't make sense, someone who doesn't understand data structures abusing strings, redundant variables, some comments are red-hearings, internationalization is on a per-every-output-level, SQL doesn't use any kind of DBAL, database connections are left open everywhere... Are there any tools or techniques I can use to at least keep track of the "functional integrity" of the code (meaning my "improvements" don't break it), or a resource online with common "bad patterns" that explains a good way to transition code? I'm basically looking for a guidebook on how to spin straw into gold. Here's some samples from the same 500 line function: protected void DoSave(bool cIsPostBack) { //ALWAYS a cPostBack cIsPostBack = true; SetPostBack("1"); string inCreate ="~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"; parseValues = new string []{"","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","",""}; if (!cIsPostBack) { //....... //.... //.... if (!cIsPostBack) { } else { } //.... //.... strHPhone = StringFormat(s1.Trim()); s1 = parseValues[18].Replace(encStr," "); strWPhone = StringFormat(s1.Trim()); s1 = parseValues[11].Replace(encStr," "); strWExt = StringFormat(s1.Trim()); s1 = parseValues[21].Replace(encStr," "); strMPhone = StringFormat(s1.Trim()); s1 = parseValues[19].Replace(encStr," "); //(hundreds of lines of this) //.... //.... SQL = "...... lots of SQL .... "; SqlCommand curCommand; curCommand = new SqlCommand(); curCommand.Connection = conn1; curCommand.CommandText = SQL; try { curCommand.ExecuteNonQuery(); } catch {} //.... } I've never had to refactor something like this before, and I want to know if there's something like a guidebook or knowledgebase on how to do this sort of thing, finding common bad patterns and offering the best solutions to repair them. I don't want to just nuke it from orbit,

    Read the article

  • Image first loaded, then it isn't? (XNA)

    - by M0rgenstern
    I am very confused at the Moment. I have the following Class: (Just a part of the class): public class GUIWindow { #region Static Fields //The standard image for windows. public static IngameImage StandardBackgroundImage; #endregion } IngameImage is just one of my own classes, but actually it contains a Texture2D (and some other things). In another class I load a list of GUIButtons by deserializing a XML file. public static GUI Initializazion(string pXMLPath, ContentManager pConMan) { GUI myGUI = pConMan.Load<GUI>(pXMLPath); GUIWindow.StandardBackgroundImage = new IngameImage(pConMan.Load<Texture2D>(myGUI.WindowStandardBackgroundImagePath), Vector2.Zero, 1024, 600, 1, 0, Color.White, 1.0f, true, false, false); System.Console.WriteLine("Image loaded? " + (GUIWindow.StandardBackgroundImage.ImageStrip != null)); myGUI.Windows = pConMan.Load<List<GUIWindow>>(myGUI.GUIFormatXMLPath); System.Console.WriteLine("Windows loaded"); return myGUI; } Here this line: System.Console.WriteLine("Image loaded? " + (GUIWindow.StandardBackgroundImage.ImageStrip != null)); Prints "true". To load the GUIWindows I need an "empty" constructor, which looks like that: public GUIWindow() { Name = ""; Buttons = new List<Button>(); ImagePath = ""; System.Console.WriteLine("Image loaded? (In win) " + (GUIWindow.StandardBackgroundImage.ImageStrip != null)); //Image = new IngameImage(StandardBackgroundImage); //System.Console.WriteLine( //Image.IsActive = false; SelectedButton = null; IsActive = false; } As you can see, I commented lines out in the constructor. Because: Otherwise this would crash. Here the line System.Console.WriteLine("Image loaded? (In win) " + (GUIWindow.StandardBackgroundImage.ImageStrip != null)); Doesn't print anything, it just crashes with the following errormessage: Building content threw NullReferenceException: Object reference not set to an object instance. Why does this happen? Before the program wants to load the List, it prints "true". But in the constructor, so in the loading of the list it prints "false". Can anybody please tell me why this happens and how to fix it?

    Read the article

  • Sharing on Github

    - by Alan
    Over the past couple weeks I have gotten a lot of help from StackOverflow users on a project, and rather than keep the finished product to myself I wanted to share it unencumbered by licenses, but don't want there to be so much legwork during installation that users shy away from trying it. I am about to post it to Github and choosing public domain licensing. I would like to to be super simple for users to make use of and just FTP it up and go. That being said, do I need to make sure I remove things like the JQuery file, and other GPL / MIT licensed dependencies that I didn't write but that my code depends on? I haven't removed any copyright notices from the other code and all of it open source, it would just be nice if users could download everything at once while of course not trying to represent that I am the license holder of the dependencies. Inside my files are also some snippets, do those have to be externalized with installation instructions or can it be posted as is? Here is an example, my nav.php file is 115 lines long and I have these at the top: <script type="text/javascript" src="./js/ddaccordion.js"> /*********************************************** * Accordion Content script- (c) Dynamic Drive DHTML code library (www.dynamicdrive.com) * Visit http://www.dynamicDrive.com for hundreds of DHTML scripts * This notice must stay intact for legal use ***********************************************/ </script> <link href="css/admin.css" rel="stylesheet"> <script type="text/javascript"> ddaccordion.init({ headerclass: "submenuheader", //Shared CSS class name of headers group contentclass: "submenu", //Shared CSS class name of contents group revealtype: "click", //Reveal content when user clicks or onmouseover the header? Valid value: "click", "clickgo", or "mouseover" mouseoverdelay: 200, //if revealtype="mouseover", set delay in milliseconds before header expands onMouseover collapseprev: false, //Collapse previous content (so only one open at any time)? true/false defaultexpanded: [], //index of content(s) open by default [index1, index2, etc] [] denotes no content onemustopen: false, //Specify whether at least one header should be open always (so never all headers closed) animatedefault: false, //Should contents open by default be animated into view? persiststate: true, //persist state of opened contents within browser session? toggleclass: ["", ""], //Two CSS classes to be applied to the header when it's collapsed and expanded, respectively ["class1", "class2"] togglehtml: ["suffix", "<img src='./images/plus.gif' class='statusicon' />", "<img src='./images/minus.gif' class='statusicon' />"], //Additional HTML added to the header when it's collapsed and expanded, respectively ["position", "html1", "html2"] (see docs) animatespeed: "fast", //speed of animation: integer in milliseconds (ie: 200), or keywords "fast", "normal", or "slow" oninit:function(headers, expandedindices){ //custom code to run when headers have initalized //do nothing }, onopenclose:function(header, index, state, isuseractivated){ //custom code to run whenever a header is opened or closed //do nothing } }) </script>

    Read the article

  • Solaris 11

    - by user9154181
    Oracle has a strict policy about not discussing product features until they appear in shipping product. Now that Solaris 11 is publicly available, it is time to catch up. I will shortly be posting articles on a variety of new developments in the Solaris linkers and related bits:
    64-bit Archives. After 40+ years of Unix, the archive file format has run out of room. The ar and link-editor (ld) commands have been enhanced to allow archives to grow past their previous 32-bit limits.
    Guidance. The link-editor is now willing and able to tell you how to alter your link lines in order to build better objects.
    Stub Objects. This is one of the bigger projects I've undertaken since joining the Solaris group. Stub objects are shared objects, built entirely from mapfiles, that supply the same linking interface as the real object, while containing no code or data. You can link to them, but cannot use them at runtime. It was pretty simple to add this ability to the link-editor, but the changes to the OSnet in order to apply them to building Solaris were massive. I discuss how we came to invent stub objects, how we apply them to build the OSnet in a more parallel and scalable manner, and the follow-on opportunities that have emerged from the new stub proto area we created to hold them.
    The elffile Utility. A new standard Solaris utility, elffile is a variant of the file utility, focused exclusively on linker-related files. elffile is of particular value for examining archives, as it allows you to find out what is inside them without having to first extract the archive members into temporary files.
    This release has been a long time coming. I joined the Solaris group in late 2005, and this will be my first FCS. From a user perspective, Solaris 11 is probably the biggest change to Solaris since Solaris 2.0. Solaris 11 polishes the groundbreaking features from Solaris 10 (DTrace, FMA, ZFS, Zones), and uses them to add a powerful new packaging system, numerous other enhancements and features, along with a huge modernization effort. I'm excited to see it go out into the world. I hope you enjoy using it as much as we did creating it. Software is never done. On to the next one...

    Read the article

  • Assessing Relative Maintainability

    - by João Bragança
    We (a contractor, actually) are implementing an off the shelf system to replace a legacy homegrown system for the core domain of the company (designing widgets). Unfortunately both systems will have to run concurrently for some time, as the product just isn't ready yet. Also, the decision was made to only migrate some of the widgets from the legacy system, based on date of last sale activity. Later on a new requirement came down: certain people in the company, most of them outside of the widget development context, want to search all widgets. The search results screen has 3 pieces of data: a GUID, a human readable id that is searchable, and a brief description (may need to be searchable in the future). In the widget details, there will be multiple screens. These screens align very well along SOA / bounded context lines - a screen for marketing data, a screen for sales history, etc. UML ahead! I am probably using the wrong kind of arrows here so please forgive me. The current solution - which is not in production yet - is something like the following: Both systems will be queried and the controller will merge the results. The new system has its own proprietary query language (we've alleviated this a bit with a LINQ provider). It also puts a lot of data on the wire. 15 search results typically run about 60k of unintelligible SOAP-wrapped xml. So I would prefer to avoid querying this system directly. These two systems publish events to help us integrate with other systems, mainly an ERP system. One of these events contains all the data necessary for the search screen. I proposed the following alternative: However I am being told that 'adding another database' will create more maintenance down the road. However, I believe this to be false, as I had to add a relatively simple feature that took several hours longer than anticipated because of this merging code. I want to get a feel for which system is more maintainable in the long run. I personally have not had the burden of maintaining any large system. I want something more than my gut. Specifically I'd like to know if having more, specialized physical databases is more or less maintainable than having less larger physical databases.
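
    As a rough sketch of the proposed alternative (the small, denormalized search store fed by the events both systems already publish), here is what the idea can look like in Python with sqlite3 standing in for "another database". None of this is the project's actual code, and the event field names are invented for illustration; the point is that the search screen queries one local table instead of merging live results from two systems.

      import sqlite3

      conn = sqlite3.connect("widget_search.db")
      conn.execute("""
          CREATE TABLE IF NOT EXISTS widget_search (
              widget_guid TEXT PRIMARY KEY,   -- GUID from either source system
              human_id    TEXT NOT NULL,      -- searchable, human-readable id
              description TEXT                -- brief description
          )
      """)

      def on_widget_event(event):
          # upsert one row per widget whenever either system publishes a change
          conn.execute(
              """INSERT INTO widget_search (widget_guid, human_id, description)
                 VALUES (:guid, :human_id, :description)
                 ON CONFLICT(widget_guid) DO UPDATE SET
                     human_id = excluded.human_id,
                     description = excluded.description""",
              event,
          )
          conn.commit()

      def search(term):
          like = "%" + term + "%"
          return conn.execute(
              "SELECT widget_guid, human_id, description FROM widget_search "
              "WHERE human_id LIKE ? OR description LIKE ?",
              (like, like),
          ).fetchall()

      # the controller behind the search screen now hits one small table
      on_widget_event({"guid": "A-123", "human_id": "WGT-0001", "description": "legacy widget"})
      print(search("WGT"))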

    Read the article

  • Oracle Virtualization Friday Spotlight - November 8, 2013

    - by Monica Kumar
    Hands-on Private Cloud Simulator In One Hour Submitted by: Doan Nguyen, Senior Principal Product Marketing Director My aeronautics instructor used to say, "you can’t appreciate flying until you take flight." To clarify, this is not about gearing up in a flying squirrel suit and hopping off a cliff (topic for another blog!) but rather about flying an airplane. The idea is to get hands-on with the controls at the cockpit and experience flight before you actually fly a real plane. After the initial 40 hours of flight time, the concept sank in and it really made sense.This concept is what inspired our technical experts to put together the hands-on lab for a private cloud deployment and management self-service model. Yes, we are comparing the lab to a flight simulator! Let’s look at the parallels: To get trained to fly, starting in the simulator gets you off the ground quicker. There is no need to have a real plane to begin with. In a hands-on lab, there is no need for a real server, with networking and real storage installed. All you need is your laptop The simulator is pre-configured, pre-flight check done. Similarly, in a hands-on lab, Oracle VM and Oracle Enterprise Manager are pre-configured and assembled using Oracle VM VirtualBox as the container. Software installations are not needed. After time spent training at the controls, you can really appreciate the practical experience of flying. Along the same lines, the hands-on lab is a guided learning path, without the encumbrances of hardware, software installation, so you can learn about cloud deployment and management.  However, unlike the simulator training, your time investment with the lab is only about an hour and not 40 hours! This hands-on lab takes you through private cloud deployment and management using Oracle VM and  Oracle Enterprise Manager Cloud Control 12c in an Infrastructure as a service IaaS model. You will first configure the IaaS cloud as the cloud administrator and then deploy guest virtual machines (VMs) as a self-service user. Then you are ready to take flight into the cloud! Why not step into the cockpit now!

    Read the article

  • The Internet Key Wave MW833UP is not recognized in Ubuntu

    - by gio900
    I can't use my Onda MW833UP... :( Any advice? Here is something that someone else may understand:

    ~$: lsb_release -a
    No LSB modules are available.
    Distributor ID: Ubuntu
    Description: Ubuntu 11.10
    Release: 11.10
    Codename: oneiric

    ~$: lsusb
    Bus 001 Device 005: ID 1ee8:0012

    ~$: dmesg
    [ 22.709475] cdc_acm 1-1:1.0: ttyACM0: USB ACM device
    [ 22.714856] usbcore: registered new interface driver cdc_acm
    [ 22.714866] cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters
    [ 23.520490] ieee80211 phy0: wl_ops_bss_info_changed: arp filtering: enabled true, count 1 (implement)
    [ 24.244530] usbcore: registered new interface driver usbserial
    [ 24.244575] USB Serial support registered for generic
    [ 24.244673] usbcore: registered new interface driver usbserial_generic
    [ 24.244681] usbserial: USB Serial Driver core
    [ 24.265879] USB Serial support registered for GSM modem (1-port)
    [ 24.285680] usbcore: registered new interface driver option
    [ 24.285691] option: v0.7.2:USB Driver for GSM modems
    [ 24.425878] EXT4-fs (sda9): re-mounted. Opts: errors=remount-ro,commit=600
    [ 24.736540] EXT4-fs (sda8): re-mounted. Opts: commit=600
    [ 35.705796] Easy slow down manager: checking for SABI support.
    [ 35.706002] Easy slow down manager: SABI is supported (f5189)
    [ 36.060099] usbcore: deregistering interface driver uvcvideo
    [ 139.508061] CE: hpet increased min_delta_ns to 20113 nsec
    [ 6798.378917] usb 1-1: USB disconnect, device number 5
    [ 6809.108232] usb 1-1: new high speed USB device number 6 using ehci_hcd
    [ 6809.242692] scsi5 : usb-storage 1-1:1.0
    [ 6810.241257] scsi 5:0:0:0: CD-ROM Onda Datacard CD-ROM 0001 PQ: 0 ANSI: 0
    [ 6810.241841] scsi 5:0:0:1: Direct-Access Onda Storage 0001 PQ: 0 ANSI: 0
    [ 6810.271410] sr0: scsi3-mmc drive: 0x/0x caddy
    [ 6810.272099] sr 5:0:0:0: Attached scsi CD-ROM sr0
    [ 6810.272852] sr 5:0:0:0: Attached scsi generic sg1 type 5
    [ 6810.279954] sd 5:0:0:1: [sdb] Attached SCSI removable disk
    [ 6810.281210] sd 5:0:0:1: Attached scsi generic sg2 type 0
    [ 6810.380591] sr0: CDROM (ioctl) error, command: Xpwrite, Read disk info 51 00 00 00 00 00 00 00 02 00
    [ 6810.380617] sr: Sense Key : Hardware Error [current]
    [ 6810.380625] sr: Add. Sense: No additional sense information
    [ 6810.613937] sr0: CDROM (ioctl) error, command: Xpwrite, Read disk info 51 00 00 00 00 00 00 00 02 00
    [ 6810.613972] sr: Sense Key : Hardware Error [current]
    [ 6810.613984] sr: Add. Sense: No additional sense information
    [ 6810.673716] usb 1-1: USB disconnect, device number 6
    [ 6815.572142] usb 1-1: new high speed USB device number 7 using ehci_hcd
    [ 6815.706828] cdc_acm 1-1:1.0: ttyACM0: USB ACM device

    The last 3 lines are where I inserted the Internet key, then reconnected it.

    ~$: usb-devices
    T: Bus=01 Lev=01 Prnt=01 Port=00 Cnt=01 Dev#= 7 Spd=480 MxCh= 0
    D: Ver= 2.00 Cls=ef(misc ) Sub=02 Prot=01 MxPS=64 #Cfgs= 1
    P: Vendor=1ee8 ProdID=0012 Rev=00.01
    S: Manufacturer=Onda
    S: Product=MW833UP
    S: SerialNumber=9230B35D870F9CB7AE684EACC5C12BE5EC33B26E
    C: #Ifs= 2 Cfg#= 1 Atr=a0 MxPwr=500mA
    I: If#= 0 Alt= 0 #EPs= 1 Cls=02(commc) Sub=02 Prot=01 Driver=cdc_acm
    I: If#= 1 Alt= 0 #EPs= 2 Cls=0a(data ) Sub=00 Prot=00 Driver=cdc_acm

    Then there is /dev/ttyACM0. When the key is connected to the USB port, everything that will meant...

    Read the article

  • How should I structure my turn based engine to allow flexibility for players/AI and observation?

    - by Reefpirate
    I've just started making a turn-based strategy engine in GameMaker's GML language... And I was cruising along nicely until it came time to handle the turn cycle, determine who is controlling which player, and also decide how to handle the camera and what is displayed on screen. Here's an outline of the main switch happening in my main game loop at the moment:

    switch (GameState) {
        case BEGIN_TURN:
            // Start of turn operations/routines
            break;
        case MID_TURN:
            switch (PControlledBy[Turn]) {
                case HUMAN:
                    switch (MidTurnState) {
                        case MT_SELECT:
                            // No units selected, 'idle' UI state
                            break;
                        case MT_MOVE:
                            // Unit selected and attempting to move
                            break;
                        case MT_ATTACK:
                            break;
                    }
                    break;
                case COMPUTER:
                    // AI ROUTINES GO HERE
                    break;
                case OBSERVER:
                    // OBSERVER ROUTINES GO HERE
                    break;
            }
            break;
        case END_TURN:
            // End of turn routines/operations, and move Turn to next player
            break;
    }

    Now, I can see a couple of problems with this set-up already... but I don't have any idea how to go about making it 'right'. Turn is a global variable that stores which player's turn it is, and the BEGIN_TURN and END_TURN states make perfect sense to me... But the MID_TURN state is baffling me because of the things I want to happen here: If there are players controlled by humans, I want the AI to do its thing on its turn here, but I want to be able to have the camera follow the AI as it makes moves in the human player's vision. If there are no human-controlled players, I'd like to be able to watch two or more AIs battle it out on the map with god-like 'observer' vision.

    So basically I'm wondering if there are any resources on how to structure a turn-based strategy engine? I've found lots of writing about pathfinding and AI, and those are all great... But when it comes to handling the turn structure and the game states I am having trouble finding any resources at all. How should the states be divided to allow flexibility between the players and the controllers (HUMAN, COMPUTER, OBSERVER)? Also, maybe if I'm on the right track I just need some reassurance before I lay down another few hundred lines of code...
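
    One possible way to divide things - sketched here only as an illustration, not a prescription - is to keep the turn/phase state machine (BEGIN_TURN, MID_TURN, END_TURN) separate from who is driving the current player, by hiding HUMAN, COMPUTER and OBSERVER behind a common controller interface. The sketch below shows the idea in C rather than GML; every name in it is made up, and the camera hook only marks where "follow the AI in the human's vision" would plug in.

        #include <stdio.h>

        typedef enum { BEGIN_TURN, MID_TURN, END_TURN } TurnPhase;

        typedef struct Player Player;

        /* One "vtable" per controller type; the main loop never branches on
         * HUMAN/COMPUTER/OBSERVER directly. */
        typedef struct {
            int  (*update)(Player *self);              /* returns 1 when the turn is finished */
            void (*focus_camera)(const Player *self);  /* decides what the camera follows */
        } Controller;

        struct Player {
            const char *name;
            const Controller *controller;
        };

        static int human_update(Player *p)  { (void)p; /* read input, move units */ return 1; }
        static int ai_update(Player *p)     { (void)p; /* run AI moves */ return 1; }
        static void follow_player(const Player *p) { printf("camera follows %s\n", p->name); }

        static const Controller HUMAN_CONTROLLER = { human_update, follow_player };
        static const Controller AI_CONTROLLER    = { ai_update,    follow_player };

        /* The turn loop only knows about phases and the controller interface. */
        static void run_turn(Player *current)
        {
            TurnPhase phase = BEGIN_TURN;
            while (phase != END_TURN) {
                switch (phase) {
                case BEGIN_TURN:
                    /* start-of-turn upkeep */
                    phase = MID_TURN;
                    break;
                case MID_TURN:
                    current->controller->focus_camera(current);
                    if (current->controller->update(current))
                        phase = END_TURN;
                    break;
                case END_TURN:
                    break;
                }
            }
        }

        int main(void)
        {
            Player p1 = { "Human",  &HUMAN_CONTROLLER };
            Player p2 = { "Skynet", &AI_CONTROLLER };
            Player *players[] = { &p1, &p2 };
            for (int i = 0; i < 2; i++)
                run_turn(players[i]);
            return 0;
        }

    In a layout like this, an OBSERVER is simply a third Controller whose update never issues orders, and the camera behaviour lives with each controller instead of inside the turn switch.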

    Read the article

  • Handling bugs, quirks, or annoyances in vendor-supplied headers

    - by supercat
    If the header file supplied by a vendor of something with whom one's code must interact is deficient in some way, in what cases is it better to:

    1. Work around the header's deficiencies in the main code
    2. Copy the header file to the local project and fix it
    3. Fix the header file in the spot where it's stored as a vendor-supplied tool
    4. Fix the header file in the central spot, but also make a local copy and try to always have the two match
    5. Do something else

    As an example, the header file supplied by ST Micro for the STM320LF series contains the lines:

    typedef struct
    {
        __IO uint32_t MODER;
        __IO uint16_t OTYPER;
        uint16_t RESERVED0;
        ....
        __IO uint16_t BSRRL;  /* BSRR register is split to 2 * 16-bit fields BSRRL */
        __IO uint16_t BSRRH;  /* BSRR register is split to 2 * 16-bit fields BSRRH */
        ....
    } GPIO_TypeDef;

    In the hardware, and in the hardware documentation, BSRR is described as a single 32-bit register. About 98% of the time one wants to write to BSRR, one will only be interested in writing the upper half or the lower half; it is thus convenient to be able to use BSRRH and BSRRL as a means of writing half the register. On the other hand, there are occasions when it is necessary that the entire 32-bit register be written as a single atomic operation. The "optimal" way to write it (setting aside white-spacing issues) would be:

    typedef struct
    {
        __IO uint32_t MODER;
        __IO uint16_t OTYPER;
        uint16_t RESERVED0;
        ....
        union  // Allow BSRR access as 32-bit register or two 16-bit registers
        {
            __IO uint32_t BSRR;                       // 32-bit BSRR register as a whole
            struct { __IO uint16_t BSRRL, BSRRH; };   // Two 16-bit parts
        };
        ....
    } GPIO_TypeDef;

    If the struct were defined that way, code could use BSRR when necessary to write all 32 bits, or BSRRH/BSRRL when writing 16 bits. Given that the header isn't that way, would better practice be to use the header as-is but apply an icky typecast in the main code, writing what would idiomatically be thePort->BSRR = 0x12345678; as *((volatile uint32_t *)&(thePort->BSRRL)) = 0x12345678;, or would it be better to use a patched header file? If the latter, where should the patched file be stored and how should it be managed?
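
    For the "use the header as-is" option, the cast does not have to leak all over the main code: it can be wrapped once in a small local helper, so there is exactly one place to revisit if the vendor ever changes the header. Below is a minimal sketch of that idea, with a cut-down stand-in for the vendor struct so it compiles on its own; in real code you would include the ST device header instead, and the helper name is my own invention.

        #include <stdint.h>

        #define __IO volatile          /* as in the CMSIS headers */

        /* Stand-in for the vendor-declared GPIO_TypeDef quoted above,
         * trimmed to the fields that matter for this example. */
        typedef struct
        {
            __IO uint32_t MODER;
            __IO uint16_t OTYPER;
            uint16_t RESERVED0;
            __IO uint16_t BSRRL;       /* low half of BSRR (lower address) */
            __IO uint16_t BSRRH;       /* high half of BSRR */
        } GPIO_TypeDef;

        /* Write all 32 bits of BSRR in one store. BSRRL sits at the address
         * of the full register, so the cast targets it, not BSRRH. */
        static inline void gpio_write_bsrr(GPIO_TypeDef *port, uint32_t value)
        {
            *(__IO uint32_t *)&port->BSRRL = value;
        }

        /* Usage: set pin 3 and reset pin 4 of a port in one atomic write. */
        /* gpio_write_bsrr(GPIOA, (1u << 3) | (1u << (4 + 16))); */

    This keeps the vendor header untouched, so vendor updates drop in cleanly, at the cost of one deliberately ugly cast; it is essentially the first option in the list above with the ugliness quarantined in a single function.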

    Read the article

< Previous Page | 225 226 227 228 229 230 231 232 233 234 235 236  | Next Page >