Search Results

Search found 15376 results on 616 pages for 'once'.

Page 248/616 | < Previous Page | 244 245 246 247 248 249 250 251 252 253 254 255  | Next Page >

  • Is it OK to reoccupy my old GitHub username to protect repository redirections?

    - by Idan Arye
    I'm considering changing my GitHub username from the old alias I was using as a kid to my real name. I'm concerned about my repository URLs. GitHub will redirect the old URLs, but if someone creates a new account using my old username and creates a repository with the same name as one of my repositories, the URL redirection will break and the URL will lead to their repository, not mine. Now, this is understandable, and GitHub recommends not counting on the redirect in the long term and updating all the remotes, but I'm concerned about some Vim plugins I'm hosting on GitHub. It's a common practice to manage Vim plugins with Git (either as separate repositories or as submodules), and if one of the plugins' remotes breaks you'll get error messages when you try to batch-update all your plugins (it happened to me once...). It's not that hard to solve, and the chances that it'll happen are slim, but I would still like to avoid causing trouble for the users of my plugins... To prevent this, I'm thinking of creating a new account with my old username. That way I can avoid the risk of someone else taking my old username and breaking the redirects of my old repositories. While researching this approach I found GitHub's Name Squatting Policy. According to that policy, GitHub can delete or rename inactive accounts. To my understanding, they do this to prevent cybersquatting, but surely this isn't the case with my fake account - I'm not holding someone else's name in an attempt to sell it to them, I'm merely occupying a name I was using in order to protect my old URLs... So, is it acceptable to go with this plan and create a fake account with my old username?

    Read the article

  • libgdx actors and instant actions

    - by vaati
    I'm having trouble with actors and actions. I have a list of actors, and they all have either no action or one sequence action. This sequence action contains either a couple of actions (some are instant, some have duration 0), or a couple of actions followed by a parallel action. My problem is the following: some of the instant actions are used to set the position and the alpha of the actor. So when one of the actions is "move to x,y and set alpha to 0", the actor is visible for one frame at position 0,0, moves instantly to x,y for the next frame, and then disappears. Though this behaviour is to be expected, I want to avoid it. How can I achieve that? I tried to intercept the actions before I put the actors on the stage, but I need the stage width/height for some actions. So something like this won't work:

        Action actionSequence = actor.getActions().get(0);
        Array<Action> actions = ((SequenceAction) actionSequence).getActions();
        for(Action act : actions){
            if(act.act(0))
                System.out.println("action " + act.toString() + " successfully run");
            else
                System.out.println("action " + act.toString() + " wasn't instant");
        }

    It gets even more complicated when an actor can also have a repeat action instead of the sequence action (because you have to run only the actions that have duration 0, once, without repeat, and then start the repeat). Any help is appreciated.
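
    One workaround, sketched below under the assumption that each actor can be added to the stage before its first rendered frame (targetX/targetY and the action values are placeholders, not from the question's code): attach the actor first, so stage-dependent positions are available, then drive it once with a delta of 0 so every zero-duration action completes before anything is drawn.

        // Sketch only (libGDX scene2d: Actor, Stage, Actions).
        Actor actor = new Actor();
        actor.addAction(Actions.sequence(
                Actions.moveTo(targetX, targetY, 0f),   // instant reposition
                Actions.alpha(0f),                      // instant fade to invisible
                Actions.delay(1f),
                Actions.fadeIn(0.5f)));
        stage.addActor(actor);   // actor is now attached, so stage width/height can be used
        actor.act(0f);           // completes the duration-0 actions before the first frame,
                                 // then stops at the first action with a real duration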

    Read the article

  • How to setup an iTunes library to use between two Macs?

    - by stead1984
    As you can tell, this is nowhere near work related. I have an iMac G5 where my iTunes library is currently hosted, and I have also just got a new MacBook Pro. What I want to be able to do is sync my iTunes library from my iMac to my MacBook Pro so that it is accessible away from my home network; then, if I make any changes to the library (like changing a track name), it will sync these changes back once I reconnect to the home network. My current iTunes library contains music, videos, podcasts, playlists and iPhone apps. I would also like iTunes to track play counts collectively between the iMac and the MacBook Pro.

    Read the article

  • MySQL - Configuration

    - by Stuart Brierley
    Having previously detailed how to install MySQL Server, the next step is configuring MySQL. The MySQL configuration wizard can either be run immediately following installation from the MySQL installation wizard, or manually from the Start Menu. Following the splash screen you can then choose whether to run a detailed or standard configuration. The detailed configuration allows you to create the optimal configuration for your specific machine, whereas the standard configuration creates a general configuration that can then be manually tuned. I chose detailed. You are then asked to choose the type of server instance that is being configured. In this case it is a developer machine. Following this you are asked to choose the type of database usage that you expect on the server. I opted for multifunctional. You must then specify the location of the InnoDB tablespace. Next, specify the number of concurrent connections to the server. Now you must configure the networking options. I left Strict mode enabled as this is the recommended option, but I disabled TCP/IP networking as I wanted to restrict this MySQL installation to the local machine only. Set the character set that is best suited to your use - for me this was the default standard character set. Next up is the option to run MySQL as a service and whether or not to include the MySQL directories in the Windows PATH. I kept the "install as a Windows service" option enabled, but unchecked the "Launch MySQL server automatically" option, because I only want MySQL running when I specifically want to use it. I also enabled the "include in Windows PATH" option. You can then change the security settings for the MySQL installation. I opted to change the root password, disable root access from remote machines and disable anonymous access. You are now ready to execute the configuration. Once completed you should hopefully see the completed screen with lots of nice ticks against the various configuration tasks.
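
    For reference, a hand-edited my.ini fragment along the same lines might look like the following (an illustrative sketch, not the wizard's actual output; the paths and values are examples):

        [mysqld]
        sql-mode="STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION"   # Strict mode enabled
        skip-networking                                         # no TCP/IP; local connections only
        enable-named-pipe                                       # local Windows connections once TCP/IP is off
        max_connections=20                                      # concurrent connection limit chosen in the wizard
        innodb_data_home_dir="C:/MySQL Datafiles/"              # InnoDB tablespace location (example path)
        character-set-server=latin1                             # default standard character set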

    Read the article

  • How to Get a Smartphone-Style Word Suggestion on Windows

    - by Zainul Franciscus
    Have you ever wished that you could type faster and better in Windows? Then you're in luck, because today we'll show you how to get a smartphone-style word suggestion feature in Windows. To accomplish that, you need to install AI Type, software that gives word suggestions as you write in Windows. AI Type not only gives Windows a smartphone-style word suggestion feature, it also improves our writing by suggesting words according to their context. It will also try to match words according to the probability with which other users may have used them. Installing AI Type is a breeze: just download the installer from the AI Type website, run the executable, fill in a registration form, and you're all set to use AI Type for your daily writing. Once you're done with the installation, AI Type appears in your system tray.

    Read the article

  • Dialing into multiple PPP connections on Ubuntu

    - by sharjeel
    I have multiple 3G USB-based modems. I would like them to stay connected simultaneously, NOT necessarily aggregating their bandwidth; a separate intelligent application would manage their utilization effectively. However, I am running into the problem of setting up proper routes for the ppp0 and ppp1 interfaces: when one of them connects, the other's entries in the routing table get updated so it is no longer usable. If I reconnect the second one, it overrides the first one's routing entries. If I do this over and over, sometimes both sets of entries disappear, while in rare cases the two work well. I have tried it both using NetworkManager and WVDial, but the issue pops up with both. Perhaps both of them use the same PPP dialer on the backend and that's why this issue appears. What is the proper solution to make them work together? In the long run, I'd also like them to dial in automatically once the USB modem gets connected.
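
    One common approach, sketched below with iproute2 policy routing (the addresses are placeholders for whatever pppd assigns), is to give each PPP interface its own routing table so that one connection's default route can no longer overwrite the other's:

        # run after each interface comes up, e.g. from an /etc/ppp/ip-up.d/ script
        ip route add default dev ppp0 table 101
        ip route add default dev ppp1 table 102
        ip rule add from 10.64.64.1 table 101   # local address of ppp0 (placeholder)
        ip rule add from 10.64.64.2 table 102   # local address of ppp1 (placeholder)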

    Read the article

  • Problems with Intel Video Resolution on Acer Laptop Wide Display

    - by ricstr
    I have an Acer Aspire 5332 laptop on which I have just installed Ubuntu 12.04 x64, and it is causing some issues with the video display on boot and with the video resolution. First and foremost, it will only boot past the purple screen if GRUB has been edited to replace 'quiet splash' with 'nomodeset'. Secondly, once it has booted with the 'nomodeset' option, it does not allow me to change the resolution higher or lower than 1024 x 768. Is it OK to use 'nomodeset' for normal use? Will this compromise the performance of other devices? The video card is an on-board one, integrated within the Intel GL40 chipset. The display is a wide-screen LCD, and under Windows it could operate at various resolutions. Ideally I would like it to operate at a resolution that fits the wide-screen display, as it is a bit stretched out at the moment and gives me less desktop space than I am used to. I believe the optimal resolution is 1366 x 768. Below is some information from the terminal which may be useful.

        ricstr@Aspire-5332:~$ lspci | grep -i VGA
        00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 09)
        ricstr@Aspire-5332:~$ xrandr
        xrandr: Failed to get size of gamma for output default
        Screen 0: minimum 1024 x 768, current 1024 x 768, maximum 1024 x 768
        default connected 1024x768+0+0 0mm x 0mm
           1024x768        0.0*
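
    For reference, once the Intel driver is actually in control (nomodeset usually forces a fallback driver, which is typically why the resolution is stuck), a 1366 x 768 mode can be tested by hand with cvt and xrandr. A sketch, assuming the panel then shows up as LVDS1 (check the real output name with a plain xrandr call); the modeline is cvt's usual output for 1366 768:

        cvt 1366 768
        xrandr --newmode "1368x768_60.00" 85.25 1368 1440 1576 1784 768 771 781 798 -hsync +vsync
        xrandr --addmode LVDS1 "1368x768_60.00"
        xrandr --output LVDS1 --mode "1368x768_60.00"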

    Read the article

  • Powershell, Task Scheduler or loop and sleep

    - by Paddy Carroll
    I have a job that needs to go off every minute or so. It loads a DLL I have written in C# that retrieves state for a SQL Server mirror (primary, mirror and witness) for a number of databases; it allows us to poke DNS to show where the primary instances are. Please don't mention clustering - we're not doing that. I can't be arsed to write a service; there simply isn't enough time. Do I: (1) Task Scheduler, every minute: invoke a PowerShell script that loads the DLL and does the business; or (2) Task Scheduler, at startup: invoke a similar PowerShell script that loads the DLL once but then loops and sleeps, refreshing the object that the DLL exposes? Pros and cons?
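
    A minimal sketch of option 2 (loop and sleep); the DLL path, type name and method name below are placeholders, since the real ones live in the C# assembly:

        # loaded once at startup by Task Scheduler, then refreshed in-process
        Add-Type -Path 'C:\Tools\MirrorStateMonitor.dll'
        $monitor = New-Object MirrorState.Monitor      # placeholder type exposed by the DLL
        while ($true) {
            $monitor.Refresh()                         # placeholder method that re-reads mirror state
            Start-Sleep -Seconds 60
        }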

    Read the article

  • powershell task scheduler or loop and sleep

    - by Paddy Carroll
    I have a job that needs to go off every minute or so. It loads a DLL written in C# that retrieves state for a SQL Server mirror (primary, mirror and witness) for a number of databases; it allows us to poke DNS to show where the primary instances are. Please don't mention clustering - we're not doing that. I can't be arsed to write a service; there simply isn't enough time. Do I: (1) Task Scheduler, every minute: invoke a PowerShell script that loads the DLL and does the business; or (2) Task Scheduler, at startup: invoke a similar PowerShell script that loads the DLL once but then loops and sleeps, refreshing the object that the DLL exposes? Pros and cons?

    Read the article

  • What calls trigger a new batch?

    - by sebf
    I am finding my project is starting to show performance degradation and I need to optimize it. The answer to my previous question and this presentation from NVIDIA have helped greatly in understanding the performance characteristics of code using the GPU, but there are a couple of things that aren't clear that I need to know to optimize my drawing. Specifically, which calls mark the boundary between batches? I know that any state change causes a new batch, so that includes render state changes, buffer changes, shader changes and render target changes. Correct? What else counts as a 'state change'? Does each Draw**Primitive() call constitute a new batch, even if I were to issue the same call twice with no state changes, or call it once on one part of the buffer and then again on another? If I were to update a buffer, but not change the bindings, would that be a new batch? That presentation and a DX9 page suggest using all of the texture slots available, which I take to mean loading multiple objects in 'parallel' by mapping their buffers/shaders/textures to slots 1-16. But I am not sure how this works - surely to do this you would need to change the buffer binding, and that would count as a state change? (Or is it a case of you do, but it saves 16 calls so it's OK?)

    Read the article

  • Rotating wheel with touch adding velocity

    - by Lewis
    I have a wheel control in a game which is set up like so:

        - (void)ccTouchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
        {
            UITouch *touch = [touches anyObject];
            CGPoint location = [touch locationInView:[touch view]];
            location = [[CCDirector sharedDirector] convertToGL:location];

            if (CGRectContainsPoint(wheel.boundingBox, location))
            {
                CGPoint firstLocation = [touch previousLocationInView:[touch view]];
                CGPoint location = [touch locationInView:[touch view]];
                CGPoint touchingPoint = [[CCDirector sharedDirector] convertToGL:location];
                CGPoint firstTouchingPoint = [[CCDirector sharedDirector] convertToGL:firstLocation];
                CGPoint firstVector = ccpSub(firstTouchingPoint, wheel.position);
                CGFloat firstRotateAngle = -ccpToAngle(firstVector);
                CGFloat previousTouch = CC_RADIANS_TO_DEGREES(firstRotateAngle);
                CGPoint vector = ccpSub(touchingPoint, wheel.position);
                CGFloat rotateAngle = -ccpToAngle(vector);
                CGFloat currentTouch = CC_RADIANS_TO_DEGREES(rotateAngle);
                wheelRotation += (currentTouch - previousTouch) * 0.6; // limit speed 0.6
            }
        }

    I update the rotation of the wheel in the update method by doing:

        wheel.rotation = wheelRotation;

    Now, once the user lets go of the wheel, I want it to rotate back to where it was before, but not without taking into account the velocity of the swipe the user has made. This is the bit I really can't get my head around. So if the swipe generates a lot of velocity, the wheel will carry on moving slightly in that direction until the overall force which pulls the wheel back to the starting position kicks in. Any ideas/code snippets?
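
    One way to approach the release behaviour (a sketch only; isTouching, angularVelocity, restRotation and the constants are placeholder members, not from the code above) is to record the per-frame angular change as a velocity while dragging, then let friction plus a spring toward the rest angle take over in the update method:

        - (void)update:(ccTime)dt
        {
            if (!isTouching) {
                angularVelocity *= 0.95f;                                      // friction: swipe momentum dies out
                angularVelocity += (restRotation - wheelRotation) * 2.0f * dt; // spring pulling back to the start
                wheelRotation += angularVelocity;
            }
            wheel.rotation = wheelRotation;
        }

    While a touch is active, ccTouchesMoved would also store the same (currentTouch - previousTouch) * 0.6 delta in angularVelocity, and ccTouchesEnded would clear isTouching.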

    Read the article

  • Designing spawning system

    - by Vlad
    I played this game recently: http://www.kongregate.com/games/JuicyBeast/knightmare-tower and I am amazed by the way different monsters are being spawned. I personally developed my own shooter game and I added a time-based but also count-based spawning system. By count-based I mean that when there are 5 enemies on stage, spawning stops. But this is one example. My question is how these spawning mechanisms are built - is there some pattern or theory behind them? Are there some online materials/pages where I can improve my knowledge? To summarize, let's just say we have 6 types of monsters. I start the game and kill monsters of types 1, 2 and 3 all the time. Once I pass the first ceiling, like in the game above, monsters of type 4 appear. And so on. As I progress through the game, the same set of 6 monster types stays, but they become more and more resilient and dangerous. So I must also improve to be able to destroy the same monsters, now stronger. My question is simple: are there some theories built or written for developing this type of intelligent system? Note: This is a general question, not tied to some particular game or to how exactly the game should work. I am capable of programming my own mechanisms but I think I need some help. Thanks.
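
    A minimal sketch of one common pattern (all class names, thresholds and weights below are illustrative, not taken from any particular game): a data-driven spawn table in which each monster type unlocks at a progress threshold and is then picked by weight, while a separate multiplier keeps scaling the same monsters as progress grows.

        import java.util.*;

        class SpawnTable {
            static class Entry {
                final int type; final float unlockAt; final float weight;
                Entry(int type, float unlockAt, float weight) {
                    this.type = type; this.unlockAt = unlockAt; this.weight = weight;
                }
            }

            private final List<Entry> entries = Arrays.asList(
                    new Entry(1, 0f, 1.0f), new Entry(2, 0f, 0.8f), new Entry(3, 0f, 0.5f),
                    new Entry(4, 100f, 0.7f),   // only appears after the first "ceiling"
                    new Entry(5, 250f, 0.6f), new Entry(6, 400f, 0.4f));
            private final Random rng = new Random();

            // weighted pick among the monster types unlocked at this progress level
            int pick(float progress) {
                float total = 0f;
                for (Entry e : entries) if (e.unlockAt <= progress) total += e.weight;
                float r = rng.nextFloat() * total;
                int chosen = entries.get(0).type;
                for (Entry e : entries) {
                    if (e.unlockAt > progress) continue;
                    chosen = e.type;
                    r -= e.weight;
                    if (r <= 0f) break;
                }
                return chosen;
            }

            // same monsters, but steadily tougher: multiply health/damage by this
            float toughness(float progress) { return 1f + progress / 500f; }
        }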

    Read the article

  • Outlook 2010: How do I mark one recurring event public?

    - by goober
    My office utilizes Outlook 2010 and Exchange for e-mail, and our calendars show free/busy information by default. Background: I work from home once a week, so I have created an event that lists me as tentative for the entire workday, titled "Working from Home - Available Remotely". However, those attempting to schedule a meeting with me won't see this title, and therefore won't think they can schedule an event. As much as I'd like to get out of meetings (!) it's important that folks be able to schedule with me. Question: Is there a way to make the title/details public for this one recurring event, so that when others attempt to schedule a meeting with me they can see that I am working from home but available remotely? Attempted solutions: I've tried creating a public calendar and sharing all the details of that calendar. However, not all of my calendars are included when someone wants to schedule with me, and so I'm shown as free unless someone specifically looks at my public calendar. I've Googled around, to no avail.

    Read the article

  • IE Sessions in Terminal Server

    - by Jacob
    Currently we are using WADE middleware for our order processing operation. We have about 40 operators who use one terminal server to open IE 8 and access the WADE middleware. To me it seems random, but every now and then someone will come to me and tell me that IE is showing a "Page cannot be displayed" or "HTTP Error 500" error. I did a bit of testing on my local machine and I never get this error while doing normal operations. However, when I open one session with username "test" and then log in to the WADE admin console as admin, I run into problems. I do not run into problems until I log out of the WADE admin. Once I log out of the WADE admin, my "test" session says "Page cannot be displayed". This makes me think the IE user sessions on the terminal server are cross-talking. Does anyone have any possible settings I can change in IE, or do you think this is an issue with the middleware? The terminal server is Windows 2003, btw.

    Read the article

  • When is the default storage rule not really the default storage rule?

    - by Kevin Smith
    In 11g, WebCenter Content (WCC) introduced dispersion rules in the vault and weblayout directory paths to better distribute content across the directories. The dispersion rule was based on dRevClassID. The only problem with this is that dRevClassID did not remain the same when you copied content from one WCC instance to another using Archiver, as in a contribution-consumption scenario. This could cause problems because the web-viewable path would not be the same between the contribution and consumption instances. In the PS5 (11.1.1.6.0) release of WCC they addressed this by configuring the File Store Provider (FSP) so that all new content would use a storage rule with a dispersion rule based on dDocName, which stays the same when content is copied to another WCC instance. To support migration from older versions of WCC they left the default storage rule unchanged and created a new storage rule called DispByContentId, and made that the default storage rule for all new content. I only stumbled upon this a while back when I was trying to change the FSP configuration so that all content used a webless storage rule. I changed the default storage rule, restarted WCC, and checked in a new content item. To my surprise the new content was not created as webless. I struggled with this for a while until I noticed there were multiple storage rules defined in the FSP configuration. When I looked at the default value for the xStorageRule field in Configuration Manager, sure enough it was no longer "default", but was now DispByContentId. Once I updated the DispByContentId storage rule to webless and restarted WCC, all my new content was created using the webless storage rule, just like I wanted. I noticed when I was creating this blog post that the default storage rule is also listed on the File Store Provider Information page, but I guess I didn't see that when I originally did this.

    Read the article

  • How do you explain to an "agile" team that they still need to plan the software they write?

    - by user23157
    This week at work I got agiled yet again. Having gone through the standard agile, TDD, shared-ownership, ad hoc development methodology of never planning anything beyond a few user stories on a piece of card, verbally chewing the cud over the technicalities of a 3rd-party integration ad nauseam without ever doing any real thinking or due diligence, and architecturally coupling all production code to the first test that comes into anyone's head, for the past few months, we reach the end of a release cycle and lo and behold the main externally visible feature that we have been developing is too slow to use, buggy, becoming labyrinthine in its complexity and completely inflexible. During this process "spikes" were done but never documented, and not a single architectural design was ever produced (there was no FS, so what the hell, eh - if you don't know what you are developing, how can you plan or research it?). The project passed from pair to pair, each of whom only ever focused on a single user story at a time, and the result was inevitable. To resolve this I went off the radar, went (the dreaded) waterfall, planned, coded, and basically didn't swap off the pair and tried as much as I could to work alone - focusing on solid architecture and specifications rather than unit tests, which will come later once everything is pinned down. The code is now much better and is actually totally usable, flexible and fast. Certain people seem to have really resented me doing this and have gone out of their way to sabotage my efforts (possibly unconsciously) because it goes against the holy process of agile. So how do you, as a developer, explain to the team that it is not "un-agile" to plan their work, and how do you fit planning into the agile process? (I'm not talking about the IPM; I'm talking about sitting down with a problem and sketching out an end-to-end design that says how a problem should be solved in sufficient detail that anyone who works on the problem knows what architecture and patterns they should be using and where the new code should integrate into existing code.)

    Read the article

  • Low-level game engine renderer design

    - by Mark Ingram
    I'm piecing together the beginnings of an extremely basic engine which will let me draw arbitrary objects (SceneObject). I've got to the point where I'm creating a few sensible-sounding classes, but as this is my first outing into game engines, I've got the feeling I'm overlooking things. I'm familiar with compartmentalising larger portions of the code so that individual sub-systems don't overly interact with each other, but I'm thinking more of the low-level stuff, starting from vertices and working up. So if I have a Vertex class, I can combine that with a list of indices to make a Mesh class. How does the engine determine identical meshes for objects? Or is that left to the level designer? Once we have a Mesh, that can be contained in the SceneObject class. And a list of SceneObjects can be placed into the Scene to be drawn. Right now I'm only using OpenGL, but I'm aware that I don't want to be tying OpenGL calls right into base classes (such as updating the vertices in the Mesh - I don't want to be calling glBufferData etc). Are there any good resources that discuss these issues? Are there any "common" hierarchies which should be used?
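
    One minimal way to keep the API calls out of the data classes (a sketch only; the class and member names are illustrative, not an established "common" hierarchy) is to let Mesh and SceneObject hold nothing but CPU-side data and push every OpenGL call behind a small renderer interface:

        #include <memory>
        #include <vector>

        struct Vertex { float position[3]; float normal[3]; float uv[2]; };

        struct Mesh {
            std::vector<Vertex>   vertices;   // CPU-side only; no glBufferData in here
            std::vector<unsigned> indices;
        };

        struct SceneObject {
            std::shared_ptr<Mesh> mesh;       // shared so identical meshes reuse the same data
            float transform[16];
        };

        class Renderer {                      // the only layer that knows about OpenGL
        public:
            virtual ~Renderer() = default;
            virtual void upload(const Mesh& mesh) = 0;         // e.g. creates the VBO/IBO internally
            virtual void draw(const SceneObject& object) = 0;
        };

    Swapping OpenGL for another API then means writing a second Renderer implementation without touching Mesh or SceneObject.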

    Read the article

  • Tools for managing eCommerce backend

    - by rboarman
    I am working with an eCommerce company that has outgrown their hacked-together backend for managing inventory, pricing and feeds to various shopping engines (Yahoo, 3dcart, Amazon, etc.). They currently manage about 12,000 SKUs and are doing $40M in revenue. Their internal people are working on a new Magento solution, but that is six months away and they need to replace/improve their current solution in order to hold them over. Their current solution was developed by two people who have left the company. What tools/architecture do other eCommerce sites use to manage their inventory, pricing, product descriptions and feed generation for the shopping engines? The current solution looks like this:
    1) Inventory, pricing and product descriptions are maintained in a database and in NetSuite by employees
    2) New products are added to the database via import
    3) Twice a week, data is extracted into a giant Excel spreadsheet
    4) The Excel file adjusts pricing based on some simple algorithms
    5) The Excel file exports about six different CSV feeds which are manually uploaded to Amazon, 3dcart, Yahoo, Google and Merchant Advantage
       a. Each feed is a variant of the product data with different field names and formatting
       b. Pricing levels differ between feeds
       c. Some products are not sent to all feeds
    6) Orders are manually parsed and the inventory is adjusted as needed once product is sold
    The new solution should:
    1) Import data from ODBC, CSV and NetSuite (CSV via FTP)
    2) Apply pricing changes via simple algorithms (under $80 add $10, over $200 add $25)
    3) Ensure margins are being met
    4) Format and generate a bunch of CSV and XML feeds
    5) Perhaps upload feeds to shopping engines automatically
    What I need to do is replace the Excel file with something that is maintainable and automated. Something in the .NET stack is preferable but not mandatory. I've been looking at BizTalk but it may take too long to develop and deploy. Any suggestions?
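
    To make the pricing step concrete, here is a small sketch of the kind of rule table that could replace the spreadsheet formulas (C#, since .NET is preferred; the middle tier and all names are illustrative, only the two thresholds come from the question):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class Pricing
        {
            // (threshold, markup): the rule with the highest threshold not exceeding the cost wins
            static readonly List<(decimal Threshold, decimal Markup)> Rules = new()
            {
                (0m,   10m),   // under $80: add $10
                (80m,  15m),   // illustrative middle tier
                (200m, 25m),   // $200 and up: add $25
            };

            public static decimal SellingPrice(decimal cost) =>
                cost + Rules.Where(r => cost >= r.Threshold).Last().Markup;
        }

    The same table-driven idea extends to the per-engine feed rules (field names, formatting, which products go to which feed).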

    Read the article

  • How to cache streaming video and silverlight with squid windows reverse proxy

    - by V. Romanov
    We have an intranet web server running a Silverlight application (ACTUS media monitor, if anyone cares to know). The server is used to record video and stream it to clients through a CDN solution. We want to put a reverse proxy in between the server and the CDN provider in order to remove the office network bottleneck that's currently strangling us. I've set up Squid for Windows on a separate machine outside the network using the squid BasicAccelerator configuration setting. It seems to work as far as the reverse proxy is concerned - requests are forwarded and the application is working - but it doesn't seem to cache anything (no space is used on the drive where Squid is installed). I found no explicit setting to turn caching on in Squid, so I assume it's on by default. Perhaps I need some other trick to make the video and/or Silverlight content cacheable? Any help will be appreciated. Any info you need to help me will be provided at once. Thanks in advance!
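
    Two squid.conf settings are worth checking first (an illustrative fragment, not taken from the BasicAccelerator template; paths and sizes are examples): without an explicit cache_dir most builds keep only a small in-memory cache, and the default maximum_object_size is far too small for video files.

        cache_dir ufs c:/squid/var/cache 20000 16 256      # give squid an actual disk cache (20 GB here)
        maximum_object_size 512 MB                          # the default is only a few MB, so video is never cached
        refresh_pattern -i \.(wmv|mp4|flv|xap)$ 10080 90% 43200 override-expires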

    Read the article

  • How to structure git repositories for project?

    - by littledynamo
    I'm working on a content synchronisation module for Drupal. There is a server module, which sits on a website and exposes content via a web service. There is also a client module, which sits on a different site and fetches and imports the content at regular intervals. The server is created on Drupal 6. The client is created on Drupal 7. There is going to be a need for a Drupal 7 version of the server, and then there will be a need for a Drupal 8 version of both the client and the server once it is released next year. I'm fairly new to git and source control, so I was wondering what is the best way to set up the git repositories. Would it be a case of having a separate repository for each instance, i.e. Drupal 6 server = 1 repository, Drupal 6 client = 1 repository, Drupal 7 server = 1 repository, Drupal 7 client = 1 repository, etc.? Or would it make more sense to have one repository for the server and another for the client, then create branches for each Drupal version? Currently I have 2 repositories - one for the client and another for the server.
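
    A sketch of the second layout (one repository per module, one branch per core version), which also happens to follow Drupal.org's own branch-naming convention; the repository name is a placeholder:

        git init sync_server && cd sync_server
        # existing Drupal 6 code lives on its own branch
        git checkout -b 6.x-1.x
        # the Drupal 7 port starts from it and diverges on a second branch
        git checkout -b 7.x-1.x
        # later: git checkout -b 8.x-1.x

    The client module would get an equivalent repository with the same branch layout, so fixes that apply to every core version can be merged or cherry-picked between branches instead of being copied between repositories.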

    Read the article

  • QapTcha error issue. Works locally, not on live server [migrated]

    - by BlassFemur
    I am adding QapTcha (http://demos.myjqueryplugins.com/qaptcha/) to a website that I am working on and I'm getting the error "Uncaught TypeError: Cannot read property 'error' of null". What's weird to me is that everything works perfectly locally - no errors or anything. Once I uploaded via FTP to the live server, I get the above error. Below is the block of code that seems to be generating the error; the marked line is where the exception is thrown:

        Slider.draggable({
            revert: function(){
                if(opts.autoRevert) {
                    if(parseInt(Slider.css("left")) > (bgSlider.width()-Slider.width()-10)) return false;
                    else return true;
                }
            },
            containment: bgSlider,
            axis:'x',
            stop: function(event,ui){
                if(ui.position.left > (bgSlider.width()-Slider.width()-10))
                {
                    // set the SESSION iQaptcha in PHP file
                    $.post(opts.PHPfile,{
                        action : 'qaptcha',
                        qaptcha_key : inputQapTcha.attr('name')
                    },
                    function(data) {
                        if(!data.error)   // <-- Uncaught TypeError: Cannot read property 'error' of null
                        {
                            Slider.draggable('disable').css('cursor','default');
                            inputQapTcha.val('');
                            TxtStatus.text(opts.txtUnlock).addClass('dropSuccess').removeClass('dropError');
                            form.find('input[type=\'submit\']').removeAttr('disabled');
                            if(opts.autoSubmit) form.find('input[type=\'submit\']').trigger('click');
                        }
                    },'json');
                }
            }
        });

    I'm not really sure why it works locally and not on the server. Any help/suggestions would be appreciated. Thanks
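
    Since the exception means the 'json' request handed the callback a null value, one defensive sketch (not the plugin's original code; older jQuery versions pass null to the success callback when the response body is empty) is to guard and log the failure so the difference between the local and live server becomes visible:

        $.post(opts.PHPfile, { action: 'qaptcha', qaptcha_key: inputQapTcha.attr('name') },
            function (data) {
                if (data === null) {
                    // the live endpoint is most likely returning an empty body - check the raw
                    // response and the path in opts.PHPfile on the server
                    window.console && console.log('QapTcha: invalid JSON from ' + opts.PHPfile);
                    return;
                }
                if (!data.error) { /* unlock the form as in the original handler */ }
            }, 'json');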

    Read the article

  • SUM of metric for normalized logical hierarchy

    - by Alex254
    Suppose there is the following table Table1, describing a parent-child relationship and a metric:

        Parent | Child | Metric (of the child)
        --------------------------------------
        name0  | name1 | a
        name0  | name2 | b
        name1  | name3 | c
        name2  | name4 | d
        name2  | name5 | e
        name3  | name6 | f

    Characteristics: 1) a child always has one and only one parent; 2) a parent can have multiple children (name2 has name4 and name5 as children); 3) the number of levels in this "hierarchy" and the number of children for any given parent are arbitrary and do not depend on each other. I need a SQL request that will return a result set with each name and the sum of the metrics of all its descendants down to the bottom level, plus itself, so for this example table the result would be (look carefully at name1):

        Name  | Metric
        ------------------
        name1 | a + c + f
        name2 | b + d + e
        name3 | c + f
        name4 | d
        name5 | e
        name6 | f

    (name0 is irrelevant and can be excluded). It should be ANSI or Teradata SQL. I got as far as a recursive query that can return a SUM(metric) of all descendants of a given name:

        WITH RECURSIVE temp_table (Child, metric) AS
        (
            SELECT root.Child, root.metric
            FROM table1 root
            WHERE root.Child = 'name1'
            UNION ALL
            SELECT indirect.Child, indirect.metric
            FROM temp_table direct, table1 indirect
            WHERE direct.Child = indirect.Parent
        )
        SELECT SUM(metric) FROM temp_table;

    Is there a way to turn this query into a function that takes a name as an argument and returns this sum, so it can be called like this?

        SELECT Sum_Of_Descendants(Child) FROM Table1;

    Any suggestions about how to approach this from a different angle would be appreciated as well, because even if the above approach is implementable, it will perform poorly - there would be a lot of iterations of reading metrics (the value f would be read 3 times in this example). Ideally, the query should read the metric of each name only once.
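
    One set-based sketch that avoids the per-name function entirely (ANSI-style recursive CTE; assuming the table and column names above, and that the hierarchy contains no cycles) is to carry the ancestor along while walking down the tree and aggregate once per ancestor:

        WITH RECURSIVE descendants (Ancestor, Child, Metric) AS
        (
            SELECT Child, Child, Metric        -- each name starts as its own "descendant"
            FROM Table1
            UNION ALL
            SELECT d.Ancestor, t.Child, t.Metric
            FROM descendants d
            JOIN Table1 t ON t.Parent = d.Child
        )
        SELECT Ancestor AS Name, SUM(Metric) AS Metric
        FROM descendants
        GROUP BY Ancestor;

    Each row of Table1 is still visited once per ancestor above it, so this does not fully remove the repeated reads mentioned above, but it does return the whole result set in a single statement.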

    Read the article

  • Nordics OTN ACE Tour 2013 - Recap

    - by Mike Dietrich
    The Nordics OTN ACE Tour 2013, with stops in Stockholm, Ballerup/Copenhagen and Oslo, is over. A very intense week with plenty of excellent presentations from Lonneke Dikmans, Sten Vesterli, Tim Hall and others. I'm always impressed by how much those people know and how well they present. It's such a great learning experience. And there's always some time to talk about weird things apart from the Oracle cosmos. So thanks a lot, folks - it was a pleasure to travel with you. And many many thanks also to the people from ORCAN, DOUG and OUGN. Everything worked out so well. And thanks for the great gifts, the dinners, everything!!! Of course a special thanks to all the people who went to my presentations. Hope you've enjoyed it - and sorry for any overtiming. But as Tim said yesterday in the shuttle bus back to the airport: "45 min slots don't work out at all". The final slide set about "Different Ways to Upgrade, Migrate and Consolidate into Oracle Database 12c including Oracle Multitenant, New Features and other stuff" can be downloaded via this link. Hope to see you all again soon - and let me know once you have successfully upgraded to Oracle Database 12c, or in case you'd like to become one of our Upgrade Reference Customers. Cheers - Mike PS: One thing I couldn't really understand - why is that thing below not labeled simply GRAPE JUICE??? And who's honestly drinking that?

    Read the article

  • How to determine which source files are required for an Eclipse run configuration

    - by isme
    When writing code in an Eclipse project, I'm usually quite messy and undisciplined in how I create and organize my classes, at least in the early hacky and experimental stages. In particular, I create more than one class with a main method for testing different ideas that share most of the same classes. If I come up with something like a useful app, I can export it to a runnable JAR so I can share it with friends. But this simply packs up the whole project, which can become several megabytes in size if I'm relying on a large library such as HttpClient. Also, if I decide to refactor my lump of code into several projects once I work out what works, and I can't remember which source files are used in a particular run configuration, all I can do is copy the main class to a new project and then keep copying missing types until the new project compiles. Is there a way in Eclipse to determine which classes are actually used in a particular run configuration?

    Read the article

  • Read only file system error on ubuntu after partitioning

    - by Ranjith R
    I am not sure if I am the root cause of this problem, but this is what I did: I wanted the latest Ubuntu and the latest Linux Mint together on my ThinkPad laptop. Windows 7 was already there, and I already had Mint. So I put in the USB stick with the Ubuntu image and started installing Ubuntu. I chose to install side by side. It was taking a long time to finish defragmenting and partitioning, I became a little impatient and decided to give up, so I pressed the skip button. After skipping, I realized that the partitioning was complete and went ahead with installing Ubuntu. Now the Linux Mint OS reports the file system as read-only at least once every day, and I have to restart and tell the OS to fix errors on the hard disk. After I press the F key, the system fixes the issues, restarts and all is well again. Is there some way to fix the issue permanently? I think reinstalling would solve the issue, but I cannot do it as I have a lot of data and I would have to reinstall and configure a lot of software that I use daily. I checked the SMART status in Disk Utility and the hard disk seems to be fine. I also checked both partitions for errors with Disk Utility and the report says they are fine. Is there something I can do before I reinstall?
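
    If the graphical check keeps coming back clean, a fuller offline check is worth a try (a sketch; run it from a live USB so the partitions are unmounted, and replace sda5/sda6 with the actual partitions shown by lsblk or sudo fdisk -l):

        sudo fsck -f /dev/sda5        # force a full check even if the filesystem looks clean
        sudo fsck -f /dev/sda6
        # after the next read-only event, look at what triggered the remount:
        dmesg | grep -i -E 'ext4|i/o error|remount'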

    Read the article
