Search Results

Search found 648 results on 26 pages for 'austin danger powers'.

Page 21/26 | < Previous Page | 17 18 19 20 21 22 23 24 25 26  | Next Page >

  • External HDD connecting via USB disconnects wireless LAN connection

    - by Kensai
    Strange problem. I have a MEDION Akoya PC with a dedicated bay that takes an external HDD sold separately. It's very handy indeed, because the slot provides a fast USB 3 connection and power to the HDD unit without extra cables. All works fine except for one show-stopper: the PC disconnects from the router once I slide in the unit and it powers up. The moment I connect the unit, the three or four WiFi networks I normally see in my neighborhood disappear and my own connection to the router loses its signal strength (no Internet traffic is possible). After a while it drops that one as well, never to reconnect as long as the unit is powered. Once I disconnect the HDD, the various signals come back and the PC automatically reconnects to my own. What gives? Are we looking at a serious design fault by MEDION here? Does the spinning of the HDD on top of the PC cause electromagnetic interference strong enough to throw off my WiFi connectivity? Is it a simple USB problem? Some kind of strange hardware conflict? Where should I look?

    Read the article

  • Nvidia 9800GT randomly prevents computer from booting

    - by Blender
    My computer has been running Windows and Linux perfectly fine with my 9800GT for the past year or so, but today it refused to boot. When I press the power button, this is what happens: (1) the power button flashes once, (2) the fans whir, (3) the graphics card makes a clicking noise, (4) the computer reboots, (5) go back to 1. The cycle just keeps going, and I have to yank the cord to make the computer stop. After about 30 attempts at booting it, the computer powers on and everything works. I'm pretty sure the graphics card isn't malfunctioning, as I've been GPU computing on it for a while now without any hiccups. The strange thing is, the computer boots fine within about 5 attempts if I remove the card. The computer is an HP Pavilion a6028x Desktop PC: Processor: AMD Athlon 64 X2 (W) 4600+ 2.4 GHz (AM2 socket); Motherboard: ECS MCP61PM-HM (Nettle 1); RAM: 3GB DDR2 (two different brands). More specs here. Does anybody know what the problem could be? Any help or information would be greatly appreciated!

    Read the article

  • What can cause the system to freeze in a way where even the reset button takes a long time to react?

    - by ThiefMaster
    What can be the reason for system freezes so "hard" that even the hardware reset button takes about 3 seconds to actually reset the system (and then it powers down and back up instead of doing a "clean" hard reset like it does when pressed on a normally running system)? Since it initially happened mainly while playing videos from YouTube, I suspected the graphics card - however, I replaced it recently and that did not change anything. It still happens from time to time (sometimes a few times in the span of a few hours). The system is running Windows 7 - but I don't think that matters, since I don't think any software, not even the OS, can actually affect the reset button's behaviour. The PC is not overheating and the freezes happen randomly. There is also no malware on the system. The CPU is an Intel Core i7-920 on a Gigabyte EX58-UD5 mainboard. What could be the cause of this problem? Faulty RAM? I have not run a full memtest86 check yet, but I wonder if there is a more likely culprit than faulty RAM - checking 12 GB of RAM does take some time, after all! There are no entries in the event log - but that's what I expected, since the system freezes so hard that I doubt it has time to write anything to any log.

    Read the article

  • Pigs in Socks?

    - by MightyZot
    My wonderful wife Annie surprised me with a cruise to Cozumel for my fortieth birthday. I love to travel. Every trip is ripe with adventure, crazy things to see and experience. For example, on the way to Mobile, Alabama to catch our boat, some dude hauling a mobile home lost a window and we drove through a cloud of busted glass at 80 miles per hour! The night before the cruise, we stayed in the Malaga Inn and I crawled UNDER the hotel to look at an old Civil War bunker. WOAH! Then, on the way to and from Cozumel, the boat plowed through two beautiful and slightly violent storms. But, the adventures you have while travelling often pale in comparison to the cult of personalities you meet along the way.  :) We met many cool people during our travels and we made some new friends. Todd and Andrea are in the publishing business (www.myneworleans.com) and teaching, respectively. Erika is a teacher too and Matt has a pig on his foot. This story is about the pig. Without that pig on Matt's foot, we probably would have hit a buoy and drowned. Alright, so…this pig on Matt's foot…this is no henna tatt, this is a man's tattoo. Apparently, getting tattoos on your feet is very painful because there is very little muscle and fat and lots of nifty nerves to tell you that you might be doing something stupid. Pig and rooster tattoos carry special meaning for sailors of old. According to some sources, having a tattoo of a pig or rooster on one foot or the other will keep you from drowning. There are many great musings as to why a pig and a rooster might save your life. The most plausible in my opinion is that pigs and roosters were common livestock tagging along with the crew. Since they were shipped in wooden crates, pigs and roosters were often counted amongst the survivors when ships succumbed to Davy Jones' Locker. I didn't spend a whole lot of time researching the pig and the rooster, so take these musings with a grain of salt. And, I was not able to find a lot of what you might consider credible history regarding the tradition. What I did find was a comfort, or solace, in the maritime tradition. Seems like raw traditions like the pig and the rooster are in danger of getting lost in a sea of non-permanence. I mean, what traditions are we old programmers and techies leaving behind for future generations? Makes me wonder what Ward Christensen has tattooed on his left foot.  I guess my choice would have to be a Commodore 64.   (I met Ward, by the way, in an elevator after he received his Dvorak awards in 1992. He was an unassuming individual in business casual, and very much the "sailor" sort of old-school programmer. I can't remember his exact words, but I think they were essentially that he felt it odd that he was getting an award for just doing his work. I'm sure that Ward doesn't know this…he couldn't have set a more positive example for a young 22-year-old programmer. Thanks Ward!)

    Read the article

  • The dislikes of TDD

    - by andrewstopford
    I enjoy debates about TDD, and Brian Harry's blog post is no exception. Brian sounds out what he likes and dislikes about TDD, and it's the dislikes I'll focus on. "The idea of having unit tests that cover virtually every line of code that I've written that I have to refactor every time I refactor my code makes me shudder. Doing it this way makes me take nearly twice as long as it would otherwise take and I don't feel like I get sufficient benefits from it." Refactoring your tests to match your refactored code sounds like the tests are suffering. Too many hard dependencies and no attention to SOLID are a sure-fire way to end up doing this. Maybe at the start of a TDD cycle you would need to do this as your design evolves and you remove those dependencies, but it should quickly be resolved as you refactor. If you find yourself still doing it, then stop and look back at your design. "Don't get me wrong, I'm a big fan of unit tests. I just prefer to write them after the code has stopped shaking a bit. In fact most of my early testing is "manual". Either I write a small UI on top of my service that allows me to plug in values and try it, or I write some quick API tests that I throw away as soon as I have validated them." The problem with this is that a UI can bake assumptions into your code that you then just test around, and very quickly the design degrades and technical debt sweeps in. If you want to black-box test your code with a UI, then do so after your TDD cycles, not before. "This is probably my biggest issue with a literal TDD interpretation. TDD says you never write a line of code without a failing test to show you need it. I find it leads developers down a dangerous path. Without any help from a methodology, I have met way too many developers in my life that "back into a solution". By this, I mean they write something, it mostly works and they discover a new requirement so they tack it on, and another and another, and when they are done, they've got a monstrosity of special cases each designed to handle one specific scenario. There's way more code than there should be and it's way too complicated to understand. I believe in finding general solutions to problems from which all the special cases naturally derive rather than building a solution of special cases. In my mind, to do this, you have to start by conceptualizing and coding the framework of the general algorithm. For me, that's a relatively monolithic exercise." TDD is a development practice, not a methodology; the danger is that the solution becomes a mass of different things that violate DRY. TDD won't solve those problems - only good communication and practices like pairing will help. Above all else, assuming that TDD replaces a methodology is a mistake: combine it with whatever works for your team/business, but only good communication will help. A good naming scheme/structure for folders, files and tests can help you and your team isolate which tests are for what.

    Read the article

  • Nest reinvents smoke detectors. Introduces smart and talking smoke detector that keeps quiet when you wave

    - by Gopinath
    Nest, the leading smart thermostat maker, has introduced a new smart home device today - Nest Protect, a smart, talking smoke & carbon monoxide detector that you can quiet with a wave of your hand. Fewer annoyances and more intelligence: smoke detectors have been around for decades and play a major role in keeping homes safe from fire, but the technology is stale and there has been no major innovation for years. With the introduction of Nest Protect, the landscape of smoke detectors is set to change, just as the Nest thermostat redefined its industry two years ago. Nest Protect is internet enabled and equipped with motion- and smoke-detection sensors, so when it starts beeping you can silence it by waving your hand instead of performing circus feats to turn off the alarm. Anyone who cooks in a home with a smoke detector knows how annoying it is to silence a sensitive detector that goes off far too often. Apart from addressing the annoyances of a regular smoke detector, Nest Protect can talk. It alerts users with clear, actionable instructions when it detects danger: instead of harsh beeps it actually speaks to you, so you know what is happening, telling you what it has detected and in which room. Multiple Nest Protects installed in a home can communicate with each other. Say there is smoke in the bedroom: the Nest Protect in the bedroom shares this information with every other Nest Protect in the home, and the kitchen unit can alert you that there is smoke in the bedroom. There is an app for that: the internet-enabled Nest Protect has an app to view its status and alerts. When the Protect is running low on battery, it reminds you to replace it soon, and if there is smoke at home while you are away, you get message alerts. The app works on all major smartphones as well as tablets. It also auto-shuts-down gas furnaces/heaters on smoke: apart from forming a network with other Nest Protect devices installed at home, it can communicate with a Nest Thermostat if one is installed. When carbon monoxide is detected it can shut off your gas furnace automatically, and with the help of its motion detectors it improves the Nest Thermostat's auto-away functionality. It looks elegant and costs a lot more than a regular smoke detector: just like the Nest Thermostat, Nest Protect is elegant and adorable - you fall in love with it the moment you see it. It's another masterpiece from the designer of Apple's iPod. All is good with the Nest Protect except the price: it costs a whopping $129, more than four times the price of best-selling conventional smoke detectors available for around $30. A one-bedroom apartment would need at least 3 detectors, so it costs around $390 to outfit with Nest Protects compared to roughly $90 with conventional smoke detectors. Though the Nest Thermostat is expensive compared to conventional thermostats, it offered real savings through its intelligent auto-away feature, and users were able to see a return on their investment. If Nest Protect can also deliver a good return on investment, it will be very successful.

    Read the article

  • Caching factory design

    - by max
    I have a factory class XFactory that creates objects of class X. Instances of X are very large, so the main purpose of the factory is to cache them, as transparently to the client code as possible. Objects of class X are immutable, so the following code seems reasonable:

        # module xfactory.py
        import x

        class XFactory:
            _registry = {}

            def get_x(self, arg1, arg2, use_cache=True):
                hash_id = hash((arg1, arg2))
                if use_cache and hash_id in self._registry:
                    return self._registry[hash_id]
                obj = x.X(arg1, arg2)
                self._registry[hash_id] = obj
                return obj

        # module x.py
        class X:
            # ...
            pass

    Is it a good pattern? (I know it's not the actual Factory Pattern.) Is there anything I should change? Now, I find that sometimes I want to cache X objects to disk. I'll use pickle for that purpose, and store as values in the _registry the filenames of the pickled objects instead of references to the objects. Of course, _registry itself would have to be stored persistently (perhaps in a pickle file of its own, in a text file, in a database, or simply by giving pickle files the filenames that contain hash_id). Except now the validity of the cached object depends not only on the parameters passed to get_x(), but also on the version of the code that created these objects. Strictly speaking, even a memory-cached object could become invalid if someone modifies x.py or any of its dependencies and reloads it while the program is running. So far I have ignored this danger, since it seems unlikely for my application. But I certainly cannot ignore it when my objects are cached to persistent storage. What can I do? I suppose I could make the hash_id more robust by calculating the hash of a tuple that contains arguments arg1 and arg2, as well as the filename and last-modified date for x.py and every module and data file that it (recursively) depends on. To help delete cache files that won't ever be useful again, I'd add to the _registry the unhashed representation of the modified dates for each record. But even this solution isn't 100% safe, since theoretically someone might load a module dynamically, and I wouldn't know about it from statically analyzing the source code. If I go all out and assume every file in the project is a dependency, the mechanism will still break if some module grabs data from an external website, etc. In addition, the frequency of changes in x.py and its dependencies is quite high, leading to heavy cache invalidation. Thus, I figured I might as well give up some safety and only invalidate the cache when there is an obvious mismatch. This means that class X would have a class-level cache validation identifier that should be changed whenever the developer believes a change happened that should invalidate the cache. (With multiple developers, a separate invalidation identifier is required for each.) This identifier is hashed along with arg1 and arg2 and becomes part of the hash keys stored in _registry. Since developers may forget to update the validation identifier or not realize that they invalidated existing cache, it would seem better to add another validation mechanism: class X can have a method that returns all the known "traits" of X. For instance, if X is a table, I might add the names of all the columns. The hash calculation will include the traits as well. I can write this code, but I am afraid that I'm missing something important; and I'm also wondering if perhaps there's a framework or package that can do all of this stuff already. Ideally, I'd like to combine in-memory and disk-based caching.
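    A minimal sketch of the disk-backed, version-aware key idea discussed above - the cache_version tag, file layout and helper names are assumptions for illustration, and a stable digest replaces the built-in hash() so keys survive across interpreter runs:

        # sketch: cache key = stable digest of (cache version tag, constructor args)
        import hashlib
        import os
        import pickle

        class X:
            # bump this whenever a code change should invalidate older cached objects
            cache_version = 3

            def __init__(self, arg1, arg2):
                self.arg1, self.arg2 = arg1, arg2

        class XFactory:
            def __init__(self, cache_dir="x_cache"):
                self.cache_dir = cache_dir
                os.makedirs(cache_dir, exist_ok=True)

            def _key(self, arg1, arg2):
                raw = repr((X.cache_version, arg1, arg2)).encode("utf-8")
                return hashlib.sha1(raw).hexdigest()

            def get_x(self, arg1, arg2):
                path = os.path.join(self.cache_dir, self._key(arg1, arg2) + ".pkl")
                if os.path.exists(path):
                    with open(path, "rb") as f:
                        return pickle.load(f)      # disk-cache hit
                obj = X(arg1, arg2)
                with open(path, "wb") as f:
                    pickle.dump(obj, f)            # populate the cache
                return obj

    An in-memory dict keyed by the same digest can sit in front of the pickle files to combine both levels of caching.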

    Read the article

  • When things go awry

    - by Phil Factor
    The moment the Entrepreneur opened his mouth on prime-time national TV, spelled out the URL and waxed big on how exciting ‘his’ new website was, I knew I was in for a busy night. I’d designed and built it. All at once, half a million people tried to log into the website. Although all my stress-testing paid off, I have to admit that the network locked up tight long before there was any danger of a database or website problem. Soon afterwards, the Entrepreneur and the Big Boss were there in the autopsy meeting. We picked through all our systems in detail to see how they’d borne the unexpected strain. Mercifully, in view of the sour mood of the Big Boss, it turned out that the only thing we could have done better was buy a bigger pipe to and from the internet. We’d specified that ‘big pipe’ when designing the system. The Big Boss had then railed at the cost and so we’d subsequently compromised. I felt that my design decisions were vindicated. The Big Boss brooded for a while. Then he made the significant comment: “What really ****** me off is the fact that, for ten minutes, we couldn’t take people’s money.” At that point I stopped feeling smug. Had the internet connection been better, the system would have reached its limit and failed rather precipitously, and that wasn’t what he wanted. Then it occurred to me that what had gummed up the connection was all those images on the site, that had made it so impressive for the visitors. If there had been a way to automatically pare down the site to the bare essentials under stress… Hmm. I began to consider disaster-recovery in the broadest sense – maintaining a service in spite of unusual or unexpected events. What he said makes a lot of sense: sacrifice whatever isn’t essential to keep the core service running when we approach the capacity limits. Maybe in IT we should borrow (or revive) the business concept of the ‘Skeleton service’, maintaining only the priority parts under stress, using a process that is well-prepared and carefully rehearsed. How might this work? Whatever the event we have to prepare for, it is all about understanding the priorities; knowing what one can dispense with when the going gets tough. In the event of database disaster, it’s much faster to deploy a skeletal system with only the essential data than to restore the entire system, though there would have to be a reconciliation process to update the revived database retrospectively, once the emergency was over. It isn’t just the database that could be designed for resilience. One could prepare for unusually high traffic in a website by designing a system that degraded gradually to a ‘skeletal’ site, one that maintained the commercial essentials without fat images, JavaScript libraries and razzmatazz. This is all what the Big Boss scathingly called ‘a mere technicality’. It seems to me that what is needed first is a culture of application and database design which acknowledges that we live in a very imperfect world, and react accordingly when things go awry.

    Read the article

  • In the Firing Line: The impact of project and portfolio performance on the CEO

    - by Melissa Centurio Lopes
    What are the primary measurements for rating CEO performance? For corporate boards, business analysts, investors, and the trade press, the metrics they deploy are relatively binary in nature: what is being done to generate earnings, and what is being done to build and sustain high performance? As for the market, interest is primarily aroused when operational and financial performance falls outside planned commitments for the year. When organizations announce better than predicted results, they usually experience an immediate increase in share price. Likewise, poor results have an obviously negative impact on the share price and affect the role and tenure of the incumbent CEO. The danger for the CEO is that the risk of failure is ever present, ranging from manufacturing delays and supply chain issues to labor shortages and scope creep. This risk is enhanced by the involvement of secondary suppliers providing services critical to overall work schedules, and magnified further across a portfolio of programs and projects underway at any one time - and all set within a global context. All can impact planned return on investment and have an inevitable impact on the share price - the primary empirical measure of day-to-day performance. Read the complete complimentary report, In the Firing Line, and explore the direct link between the health of the portfolio and CEO performance. The report provides an overview of the responsibility the CEO has for implementing and maintaining a culture of accountability, offers examples of some of the higher-profile project failings in recent years, and details the capabilities available to the CEO to mitigate the risks residing in their own portfolios.

    Read the article

  • Getting browser to make an AJAX call ASAP, while page is still loading

    - by Chris
    I'm looking for tips on how to get the browser to kick off an AJAX call as soon as possible into processing a web page, ideally before the page is fully downloaded. Here's my approximate motivation: I have a web page that takes, say, 5 seconds to load. It calls a web service that takes, say, 10 seconds to load. If loading the page and calling the web service happened sequentially, the user would have to wait 15 seconds to see all the information. However, if I can get the web service call started before the 5-second page load is complete, then at least some of the work can happen in parallel. Ideally I'd like as much of the work to happen in parallel as possible. My initial theory was that I should place the AJAX-calling javascript as high up as possible in the web page HTML source (being mindful of putting it after the jquery.js include, because I'm making the call using jquery ajax). I'm also being mindful not to wrap the AJAX call in a jquery ready event handler. (I mention this because ready events are popular in a lot of jquery example code.) However, the AJAX call still doesn't seem to get kicked off as early as I'm hoping (at least as judged by the Google Chrome "Timeline" feature), so I'm wondering what other considerations apply. One thing that might potentially be detrimental is that the AJAX call is back to the same web server that's serving the original web page, so I might be in danger of hitting a browser limit on the # of HTTP connections back to that one machine? (The HTML page loads a number of images, css files, etc.)
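    A sketch of that ordering, assuming jQuery 1.5+ where $.ajax returns a promise-like jqXHR (the /slow-service URL and element id are placeholders):

        <head>
          <script src="jquery.js"></script>
          <script>
            // fire the request immediately, before the rest of the page is parsed
            var earlyCall = $.ajax({ url: "/slow-service", dataType: "json" });
          </script>
        </head>
        <!-- ... rest of the page ... -->
        <script>
          // consume the already-in-flight response once the DOM is ready
          $(function () {
            earlyCall.done(function (data) {
              $("#result").text(data.message);
            });
          });
        </script>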

    Read the article

  • Need advice on OOP philosophy

    - by David Jenings
    I'm trying to get the wheels turning on a large project in C#. My previous experience is in Delphi, where by default every form was created at application startup and form references were held in (gasp) global variables. So I'm trying to adapt my thinking to a 100% object-oriented environment, and my head is spinning just a little. My app will have a large collection of classes. Most of these classes will only really need one instance. So I was thinking: static classes. I'm not really sure why, but much of what I've read here says that if my class is going to hold state, which I take to mean any property values at all, I should use a singleton structure instead. Okay. But there are people out there who, for reasons that escape me, think that singletons are evil too. None of these classes is in danger of being used anywhere except in this program, so they could certainly work fine as regular objects (vs singletons or static classes). Then there's the issue of interaction between objects. I'm tempted to create a Global class full of public static properties referencing the single instances of many of these classes. I've also considered just making them properties (static or instance, not sure which) of the MainForm. Then I'd have each of my classes be aware of the MainForm as Owner. Then the various objects could refer to each other as Owner.Object1, Owner.Object2, etc. I fear I'm running out of electronic ink, or at least taxing the patience of anyone kind enough to have stuck with me this long. I hope I have clearly explained my state of utter confusion. I'm just looking for some advice on best practices in my situation. All input is welcome and appreciated. Thanks in advance, David Jennings
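    One alternative to a Global class or Owner references is to create the single instances in one place and hand them in through constructors; a minimal C# sketch (the class names are invented for illustration):

        // one composition root builds the "single" instances and wires them together
        public class Logger
        {
            public void Log(string message) { System.Console.WriteLine(message); }
        }

        public class OrderService
        {
            private readonly Logger _logger;
            public OrderService(Logger logger) { _logger = logger; }
            public void PlaceOrder(string item) { _logger.Log("ordered " + item); }
        }

        public static class Program
        {
            public static void Main()
            {
                // "only one instance exists" is a decision made here,
                // not baked into the classes as statics or singletons
                var logger = new Logger();
                var orders = new OrderService(logger);
                orders.PlaceOrder("widget");
            }
        }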

    Read the article

  • Managing lots of callback recursion in Nodejs

    - by Maciek
    In Nodejs, there are virtually no blocking I/O operations. This means that almost all nodejs IO code involves many callbacks. This applies to reading and writing to/from databases, files, processes, etc. A typical example of this is the following: var useFile = function(filename,callback){ posix.stat(filename).addCallback(function (stats) { posix.open(filename, process.O_RDONLY, 0666).addCallback(function (fd) { posix.read(fd, stats.size, 0).addCallback(function(contents){ callback(contents); }); }); }); }; ... useFile("test.data",function(data){ // use data.. }); I am anticipating writing code that will make many IO operations, so I expect to be writing many callbacks. I'm quite comfortable with using callbacks, but I'm worried about all the recursion. Am I in danger of running into too much recursion and blowing through a stack somewhere? If I make thousands of individual writes to my key-value store with thousands of callbacks, will my program eventually crash? Am I misunderstanding or underestimating the impact? If not, is there a way to get around this while still using Nodejs' callback coding style?
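    For what it's worth, each of these callbacks runs later from the event loop on a fresh call stack, so the nesting is a readability concern rather than true recursion; a sketch of the same flow against the current fs API (the posix module above belongs to a very early Node release):

        var fs = require('fs');

        function useFile(filename, callback) {
          fs.stat(filename, function onStat(err, stats) {
            if (err) return callback(err);
            fs.open(filename, 'r', function onOpen(err, fd) {
              if (err) return callback(err);
              var buf = new Buffer(stats.size);
              fs.read(fd, buf, 0, stats.size, 0, function onRead(err) {
                fs.close(fd, function () {});  // release the descriptor
                callback(err, buf);
              });
            });
          });
        }

        useFile('test.data', function (err, data) {
          if (err) throw err;
          // use data...
        });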

    Read the article

  • Using a version control system as a data backend

    - by JacobM
    I'm involved in a project that, among other things, involves storing edits and changes to a large hierarchical document (HTML-formatted text). We want to include versioning of textual changes and of structural changes. Currently we're maintaining the tree of document sections in a relational database, but as we start working on how to manage versioning of structural changes, it's clear that we're in danger of having to write a lot of the functionality that a version control system provides. We don't want to reinvent the wheel. Is it possible that we could use an existing version control system as the data store, at least for the document itself? Presumably we could do so by writing out new versions to the filesystem, and keeping that directory under version control (and programmatically doing commits and so forth) but it would be better if we could directly interact with the repository via code. The VCS that we are most familiar with is Subversion, but I'm not thrilled with how Subversion represents changes to the directory structure -- it would be nice if we could see that a particular revision included moving a section from Chapter 2 to Chapter 6, rather than just seeing a new version of the tree. This sounds more like the way a system like Mercurial handles changes to the structure. Any advice? Do VCS's have public APIs and so forth? The project is in Java (with Spring) if it matters.
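    On the public-API question: yes, several VCSs can be driven directly from Java. A rough sketch using JGit, one library that exposes Git programmatically (the file layout and commit message here are invented):

        import java.io.File;
        import java.nio.file.Files;
        import org.eclipse.jgit.api.Git;

        public class DocumentStore {
            public static void main(String[] args) throws Exception {
                File repoDir = new File("doc-repo");
                // open the repository, creating it on first use
                Git git = repoDir.exists()
                        ? Git.open(repoDir)
                        : Git.init().setDirectory(repoDir).call();

                // write the new version of a section and commit it
                File section = new File(repoDir, "chapter-02/section-03.html");
                section.getParentFile().mkdirs();
                Files.write(section.toPath(), "<h1>Edited section</h1>".getBytes("UTF-8"));

                git.add().addFilepattern("chapter-02/section-03.html").call();
                git.commit().setMessage("Move pricing discussion into chapter 2").call();
                git.close();
            }
        }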

    Read the article

  • Splitting MS Access Database - Front End Part Location

    - by kristof
    One of the best practices specified by Microsoft for Access development is splitting an Access application into 2 parts: a Front End that holds all the objects except tables, and a Back End that holds the tables. The MSDN page links to the article Splitting Microsoft Access Databases to Improve Performance and Simplify Maintainability, which describes the process in detail. It is recommended that in a multi-user environment the Back End is stored on the server/shared folder while the Front End is distributed to each user. That implies that each time any changes are made to the Front End, they need to be deployed to every user machine. My question is: assuming that the users themselves do not have rights to modify the Front End part of the application, what would be the drawbacks/dangers of leaving it on the server as well, next to the Back End copy? I can see the performance issues here, but are there any dangers, like possible corruption, etc.? Thank you EDIT Just to clarify, the scenario specified in the question assumes one Front End stored on the server and shared by users. I understand that the recommendation is to have the FE deployed to each user machine, but my question is more about what the dangers are if that is not done. E.g. when you are given an existing solution that uses the approach of both FE and BE on the server. Assuming the performance is acceptable and the customer is reluctant to change the approach, would you still push the change? And why exactly? For example, the danger of possible data corruption would definitely be a strong enough argument, but is that the case? This is a follow-up to my previous question, From SQL Server to MS Access 2007.

    Read the article

  • Is a red-black tree my ideal data structure?

    - by Hugo van der Sanden
    I have a collection of items (big rationals) that I'll be processing. In each case, processing will consist of removing the smallest item in the collection, doing some work, and then adding 0-2 new items (which will always be larger than the removed item). The collection will be initialised with one item, and work will continue until it is empty. I'm not sure what size the collection is likely to reach, but I'd expect in the range 1M-100M items. I will not need to locate any item other than the smallest. I'm currently planning to use a red-black tree, possibly tweaked to keep a pointer to the smallest item. However I've never used one before, and I'm unsure whether my pattern of use fits its characteristics well. 1) Is there a danger the pattern of deletion from the left + random insertion will affect performance, eg by requiring a significantly higher number of rotations than random deletion would? Or will delete and insert operations still be O(log n) with this pattern of use? 2) Would some other data structure give me better performance, either because of the deletion pattern or taking advantage of the fact I only ever need to find the smallest item? Update: glad I asked, the binary heap is clearly a better solution for this case, and as promised turned out to be very easy to implement. Hugo
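    Since the update settles on a binary heap, a minimal sketch of that workflow (Python's heapq here, with a stand-in do_work; the real items would be big rationals):

        import heapq
        from fractions import Fraction

        def do_work(item):
            # stand-in for the real processing: returns 0-2 new, strictly larger items
            return [item + Fraction(1, 3)] if item < 10 else []

        heap = [Fraction(1, 2)]              # initialise with a single item
        while heap:
            smallest = heapq.heappop(heap)   # O(log n) removal of the minimum
            for new_item in do_work(smallest):
                heapq.heappush(heap, new_item)   # O(log n) insertion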

    Read the article

  • Passing youtube video id from video feed to flash

    - by Grant Anderson
    I'm working on a flash web application (ActionScript 2.0) for my honours project but am having trouble embedding youtube videos. Basically the user selects symbols which query the youtube api with certain tags depending on the symbols chosen, and a random video is then picked from the first 30 videos. I have this working using the following code:

        on (release) {
            url = "http://gdata.youtube.com/feeds/api/videos?q=danger+passion&orderby=published&start-index=" + random(30) + "&max-results=1&v=2";
            getURL(url);
        }

    but this just displays a webpage with a link to the youtube video. This is the code I'll be using as the foundation for the player:

        // create a MovieClip to load the player into
        var ytplayer:MovieClip = _root.createEmptyMovieClip("ytplayer", 1);

        // create a listener object for the MovieClipLoader to use
        var ytPlayerLoaderListener:Object = {
            onLoadInit: function() {
                // When the player clip first loads, we start an interval to
                // check for when the player is ready
                loadInterval = setInterval(checkPlayerLoaded, 250);
            }
        };

        var loadInterval:Number;

        function checkPlayerLoaded():Void {
            // once the player is ready, we can subscribe to events, or in the case of
            // the chromeless player, we could load videos
            if (ytplayer.isPlayerLoaded()) {
                ytplayer.addEventListener("onStateChange", onPlayerStateChange);
                ytplayer.addEventListener("onError", onPlayerError);
                clearInterval(loadInterval);
            }
        }

        function onPlayerStateChange(newState:Number) {
            trace("New player state: " + newState);
        }

        function onPlayerError(errorCode:Number) {
            trace("An error occurred: " + errorCode);
        }

        // create a MovieClipLoader to handle the loading of the player
        var ytPlayerLoader:MovieClipLoader = new MovieClipLoader();
        ytPlayerLoader.addListener(ytPlayerLoaderListener);

        // load the player
        ytPlayerLoader.loadClip("http://www.youtube.com/v/pv5zWaTEVkI", ytplayer);

    Can anyone help me figure out how to get the id of the video (for example: pv5zWaTEVkI) from the feed? Or how to send a query from flash which will return the video url/id as opposed to the url of a feed? Any help would be much appreciated, as my hand-in is rather soon. Thanks
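    An untested sketch of one way to do this in the same ActionScript 2 style: load the feed as raw text and cut the id out of the first "watch?v=" link (this assumes the GData feed contains such a link, and reuses ytplayer/ytPlayerLoader from the snippet above):

        var feedUrl:String = "http://gdata.youtube.com/feeds/api/videos?q=danger+passion&orderby=published&start-index=" + random(30) + "&max-results=1&v=2";
        var feed:XML = new XML();
        feed.onData = function(src:String):Void {
            var marker:String = "watch?v=";
            var start:Number = src.indexOf(marker);
            if (start == -1) {
                trace("no video id found in feed");
                return;
            }
            start += marker.length;
            var end:Number = src.indexOf("&", start);
            var videoId:String = src.substring(start, end);
            // hand the id to the player loader from the snippet above
            ytPlayerLoader.loadClip("http://www.youtube.com/v/" + videoId, ytplayer);
        };
        feed.load(feedUrl);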

    Read the article

  • Redirect Desktop Internal Pages to Correct Mobile Internal Pages with Htaccess

    - by Luis Alejandro Ramírez Gallardo
    I have built a mobile site on a sub-domain. I have successfully implemented a 302 redirect from www.domain.com to m.domain.com in .htaccess. What I'm looking to achieve now is to redirect users from www.domain.com/internal-page/ > 302 > m.domain.com/internal-page.html. Notice that the URL for desktop and mobile is not the same. The code I'm using looks like this:

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteRule ^index\.php$ - [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress

        # Mobile Redirect
        # Verify Desktop Version Parameter
        RewriteCond %{QUERY_STRING} (^|&)ViewFullSite=true(&|$)
        # Set cookie and expiration
        RewriteRule ^ - [CO=mredir:0:www.domain.com:60]
        # Prevent looping
        RewriteCond %{HTTP_HOST} !^m.domain.com$
        # Define Mobile agents
        RewriteCond %{HTTP_ACCEPT} "text\/vnd\.wap\.wml|application\/vnd\.wap\.xhtml\+xml" [NC,OR]
        RewriteCond %{HTTP_USER_AGENT} "sony|symbian|nokia|samsung|mobile|windows ce|epoc|opera" [NC,OR]
        RewriteCond %{HTTP_USER_AGENT} "mini|nitro|j2me|midp-|cldc-|netfront|mot|up\.browser|up\.link|audiovox" [NC,OR]
        RewriteCond %{HTTP_USER_AGENT} "blackberry|ericsson,|panasonic|philips|sanyo|sharp|sie-" [NC,OR]
        RewriteCond %{HTTP_USER_AGENT} "portalmmm|blazer|avantgo|danger|palm|series60|palmsource|pocketpc" [NC,OR]
        RewriteCond %{HTTP_USER_AGENT} "smartphone|rover|ipaq|au-mic,|alcatel|ericy|vodafone\/|wap1\.|wap2\.|iPhone|android" [NC]
        # Verify if not already in Mobile site
        RewriteCond %{HTTP_HOST} !^m\.
        # We need to read and write at the same time to set cookie
        RewriteCond %{QUERY_STRING} !(^|&)ViewFullSite=true(&|$)
        # Verify that we previously haven't set the cookie
        RewriteCond %{HTTP_COOKIE} !^.*mredir=0.*$ [NC]
        # Now redirect the users to the Mobile Homepage
        RewriteRule ^$ http://m.domain.com [R]
        RewriteRule $/internal-page/ http://m.domain.com/internal-page.html [R,L]
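    A possible fix for the last rule, sketched as an assumption about the intent (the repeated conditions and the [R=302] flag are mine): RewriteCond lines only apply to the very next RewriteRule, so the per-page rule needs its own conditions, and in a per-directory .htaccess the pattern should match the path itself rather than begin with $:

        # repeat (or simplify) the mobile checks for the inner page, then match its path
        RewriteCond %{HTTP_HOST} !^m\.
        RewriteCond %{HTTP_COOKIE} !mredir=0 [NC]
        RewriteCond %{HTTP_USER_AGENT} "iphone|android|blackberry|mobile" [NC]
        RewriteRule ^internal-page/?$ http://m.domain.com/internal-page.html [R=302,L]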

    Read the article

  • Should I define a single "DataContext" and pass references to it around, or define multiple "DataContexts"?

    - by Nate Bross
    I have a Silverlight application that consists of a MainWindow and several classes which update and draw images on the MainWindow. I'm now expanding this to keep track of everything in a database. Without going into specifics, let's say I have a structure like this:

        MainWindow
            Drawing-Surface
            Class1 -- Supports Drawing
                DataContext + DataServiceCollection<T> w/events
            Class2 -- Manages "transactions" (add/delete objects from drawing)
            Class3

    Each "Class" is passed a reference to the Drawing Surface so it can interact with it independently. I'm starting to use WCF Data Services in Class1 and it's working well; however, the other classes are also going to need access to the WCF Data Services. (Should I define my "DataContext" in MainWindow and pass a reference to each child class?) Class1 will need READ access to the "transactions" data, and Class2 will need READ access to some of the drawing data. So my question is, where does it make the most sense to define my DataContext? Does it make sense to: (1) define a "global" WCF Data Service "Context" object and pass references to it into all of my subsequent classes; (2) define an instance of the "Context" for each of Class1, Class2, etc.; or (3) have each method that requires access to data define its own instance of the "Context" and use closures to handle the async load/complete events? Would a structure like the one below make more sense? Is there any danger in keeping an active "DataContext" open for an extended period of time? A typical use of this application could range from 1 minute to 40+ minutes.

        MainWindow
            Drawing-Surface
            DataContext
            Class1 -- Supports Drawing
                DataServiceCollection<DrawingType> w/events
            Class2 -- Manages "transactions" (add/delete objects from drawing)
                DataServiceCollection<TransactionType> w/events
            Class3
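    One sketch of the single-shared-context option, with the context created once and handed to the helper classes through their constructors (MyServiceContext, the service URL and the manager names are all invented for illustration):

        // MainWindow owns one DataServiceContext-derived object and shares it
        public class DrawingManager
        {
            private readonly MyServiceContext _ctx;
            public DrawingManager(MyServiceContext ctx) { _ctx = ctx; }
            // ... load and track DrawingType entities through _ctx ...
        }

        public class TransactionManager
        {
            private readonly MyServiceContext _ctx;
            public TransactionManager(MyServiceContext ctx) { _ctx = ctx; }
            // ... load and track TransactionType entities through _ctx ...
        }

        public partial class MainWindow
        {
            private readonly MyServiceContext _ctx =
                new MyServiceContext(new System.Uri("http://example.com/MyData.svc"));

            private readonly DrawingManager _drawing;
            private readonly TransactionManager _transactions;

            public MainWindow()
            {
                // one shared context: changes are tracked in one place and saved together
                _drawing = new DrawingManager(_ctx);
                _transactions = new TransactionManager(_ctx);
            }
        }

    Per-method contexts (option 3) avoid long-lived tracking state but give up that single unit of work.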

    Read the article

  • How to write a good PHP database insert using an associative array

    - by Tom
    In PHP, I want to insert into a database using data contained in an associative array of field/value pairs. Example: $_fields = array('field1'=>'value1','field2'=>'value2','field3'=>'value3'); The resulting SQL insert should look as follows: INSERT INTO table (field1,field2,field3) VALUES ('value1','value2','value3'); I have come up with the following PHP one-liner: mysql_query("INSERT INTO table (".implode(',',array_keys($_fields)).") VALUES (".implode(',',array_values($_fields)).")"); It separates the keys and values of the associative array and implodes them to generate a comma-separated string. The problem is that it does not escape or quote the values being inserted into the database. To illustrate the danger, imagine if $_fields contained the following: $_fields = array('field1'=>"naustyvalue); drop table members; --"); The following SQL would be generated: INSERT INTO table (field1) VALUES (naustyvalue); drop table members; --; Luckily, multiple queries are not supported; nevertheless, quoting and escaping are essential to prevent SQL injection vulnerabilities. How do you write your PHP MySQL inserts? Note: PDO or mysqli prepared queries aren't currently an option for me because the codebase already uses mysql extensively - a change is planned, but it'd take a lot of resources to convert.
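    One sketch that keeps the implode style while escaping and quoting each value with the same mysql_* extension the codebase already uses (assumes an open connection and PHP 5.3+ for the anonymous functions; insert_assoc is a made-up name, and column names must still come from a trusted whitelist since identifiers cannot be escaped the same way):

        <?php
        function insert_assoc($table, array $fields) {
            // backtick-quote the identifiers (trusted/whitelisted names only)
            $columns = implode(',', array_map(
                function ($col) { return '`' . $col . '`'; },
                array_keys($fields)
            ));
            // escape and single-quote every value
            $values = implode(',', array_map(
                function ($val) { return "'" . mysql_real_escape_string($val) . "'"; },
                array_values($fields)
            ));
            return mysql_query("INSERT INTO `$table` ($columns) VALUES ($values)");
        }

        insert_assoc('members', array(
            'field1' => "naughty'); drop table members; --",  // stored as data, not executed
            'field2' => 'value2',
        ));
        ?>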

    Read the article

  • Exception handling in Spring MVC with 3 layer architecture

    - by Chorochrondochor
    I am building a simple web application with 3 layers - DAO, Service, MVC. When I try to delete a menu group from my Controller and it still contains menus, I get a ConstraintViolationException. Where should I handle this exception - in the DAO, the Service, or the Controller? Currently I am handling it in the Controller. My code is below. DAO method for deleting menu groups: @Override public void delete(E e){ if (e == null){ throw new DaoException("Entity can't be null."); } getCurrentSession().delete(e); } Service method for deleting menu groups: @Override @Transactional(readOnly = false) public void delete(MenuGroupEntity menuGroupEntity) { menuGroupDao.delete(menuGroupEntity); } Controller method for deleting menu groups: @RequestMapping(value = "/{menuGroupId}/delete", method = RequestMethod.GET) public ModelAndView delete(@PathVariable Long menuGroupId, RedirectAttributes redirectAttributes){ MenuGroupEntity menuGroupEntity = menuGroupService.find(menuGroupId); if (menuGroupEntity != null){ try { menuGroupService.delete(menuGroupEntity); redirectAttributes.addFlashAttribute("flashMessage", "admin.menu-group-deleted"); redirectAttributes.addFlashAttribute("flashMessageType", "success"); } catch (Exception e){ redirectAttributes.addFlashAttribute("flashMessage", "admin.menu-group-could-not-be-deleted"); redirectAttributes.addFlashAttribute("flashMessageType", "danger"); } } return new ModelAndView("redirect:/admin/menu-group"); }
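    One common answer is to make the decision in the Service layer and raise a business exception the Controller can catch by type; since with Hibernate the constraint violation often only surfaces at commit time, after the service method has returned, this sketch uses a pre-check instead of a try/catch around the DAO call (MenuGroupNotEmptyException and menuDao.countByMenuGroup are invented for illustration):

        // a small business exception (its own class/file)
        public class MenuGroupNotEmptyException extends RuntimeException {
            public MenuGroupNotEmptyException(String message) {
                super(message);
            }
        }

        // in the service implementation
        @Override
        @Transactional(readOnly = false)
        public void delete(MenuGroupEntity menuGroupEntity) {
            if (menuDao.countByMenuGroup(menuGroupEntity) > 0) {
                throw new MenuGroupNotEmptyException("Menu group still contains menus");
            }
            menuGroupDao.delete(menuGroupEntity);
        }

    The Controller's catch block can then catch MenuGroupNotEmptyException instead of the blanket Exception and keep its flash-message handling unchanged.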

    Read the article

  • Capturing output of find . -print0 into a bash array

    - by Idris
    Using find . -print0 seems to be the only safe way of obtaining a list of files in bash due to the possibility of filenames containing spaces, newlines, quotation marks etc. However, I'm having a hard time actually making find's output useful within bash or with other command line utilities. The only way I have managed to make use of the output is by piping it to perl, and changing perl's IFS to null: find . -print0 | perl -e '$/="\0"; @files=<>; print $#files;' This example prints the number of files found, avoiding the danger of newlines in filenames corrupting the count, as would occur with: find . | wc -l As most command line programs do not support null-delimited input, I figure the best thing would be to capture the output of find . -print0 in a bash array, like I have done in the perl snippet above, and then continue with the task, whatever it may be. How can I do this? This doesn't work: find . -print0 | ( IFS=$'\0' ; array=( $( cat ) ) ; echo ${#array[@]} ) A much more general question might be: How can I do useful things with lists of files in bash?
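    One way to capture null-delimited output into a bash array is a read loop with an empty delimiter and process substitution (bash 4.4's mapfile -d '' is a shorter alternative where available); a sketch:

        #!/usr/bin/env bash
        files=()
        while IFS= read -r -d '' f; do
            files+=("$f")            # spaces, quotes and newlines in names stay intact
        done < <(find . -print0)

        echo "${#files[@]} files found"
        printf '%s\n' "${files[@]}"  # print them safely, one per line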

    Read the article

  • Using an initializer_list on a map of vectors

    - by Hooked
    I've been trying to initialize a map of <int, vector<int> > using the new C++0x standard, but I cannot seem to get the syntax correct. I'd like to make a map with a single entry, with key:value = 1:<3,4>. #include <initializer_list> #include <map> #include <vector> using namespace std; map<int, vector<int> > A = {1,{3,4}}; .... It dies with the following error using gcc 4.4.3: error: no matching function for call to std::map<int,std::vector<int,std::allocator<int> >,std::less<int>,std::allocator<std::pair<const int,std::vector<int,std::allocator<int> > > > >::map(<brace-enclosed initializer list>) Edit: Following the suggestion by Cogwheel and adding the extra brace, it now compiles with a warning that can be gotten rid of using the -fno-deduce-init-list flag. Is there any danger in doing so?
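    For reference, a minimal compilable sketch of the extra-brace form mentioned in the edit - the outer braces are the map's initializer list, each inner {key, value} pair is one entry, and the value gets its own braces for the vector:

        #include <map>
        #include <vector>
        #include <iostream>

        std::map<int, std::vector<int> > A = { {1, {3, 4}} };

        int main() {
            for (std::vector<int>::const_iterator it = A[1].begin(); it != A[1].end(); ++it) {
                std::cout << *it << ' ';   // prints: 3 4
            }
            std::cout << '\n';
            return 0;
        }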

    Read the article

  • Best way to use Google's hosted jQuery, but fall back to my hosted library on Google fail

    - by Nosredna
    What would be a good way to attempt to load the hosted jQuery at Google (or other Google hosted libs), but load my copy of jQuery if the Google attempt fails? I'm not saying Google is flaky. There are cases where the Google copy is blocked (apparently in Iran, for instance). Would I set up a timer and check for the jQuery object? What would be the danger of both copies coming through? Not really looking for answers like "just use the Google one" or "just use your own." I understand those arguments. I also understand that the user is likely to have the Google version cached. I'm thinking about fallbacks for the cloud in general. Edit: This part added... Since Google suggests using google.load to load the ajax libraries, and it performs a callback when done, I'm wondering if that's the key to serializing this problem. I know it sounds a bit crazy. I'm just trying to figure out if it can be done in a reliable way or not. Update: jQuery now hosted on Microsoft's CDN. http://www.asp.net/ajax/cdn/
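    One widely used approach is a synchronous fallback right after the CDN script tag: if the Google copy failed to load, window.jQuery is undefined and document.write pulls in the local copy before any dependent code runs. A sketch (version number and local path are placeholders):

        <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>
        <script>
          // falls back only when the CDN copy did not define jQuery
          window.jQuery || document.write('<script src="/js/jquery-1.4.2.min.js"><\/script>');
        </script>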

    Read the article

  • How can I avoid encoding mixups of strings in a C/C++ API?

    - by Frerich Raabe
    I'm working on implementing different APIs in C and C++ and wondered what techniques are available for avoiding that clients get the encoding wrong when receiving strings from the framework or passing them back. For instance, imagine a simple plugin API in C++ which customers can implement to influence translations. It might feature a function like this: const char *getTranslatedWord( const char *englishWord ); Now, let's say that I'd like to enforce that all strings are passed as UTF-8. Of course I'd document this requirement, but I'd like the compiler to enforce the right encoding, maybe by using dedicated types. For instance, something like this: class Word { public: static Word fromUtf8( const char *data ) { return Word( data ); } const char *toUtf8() { return m_data; } private: Word( const char *data ) : m_data( data ) { } const char *m_data; }; I could now use this specialized type in the API: Word getTranslatedWord( const Word &englishWord ); Unfortunately, it's easy to make this very inefficient. The Word class lacks proper copy constructors, assignment operators etc.. and I'd like to avoid unnecessary copying of data as much as possible. Also, I see the danger that Word gets extended with more and more utility functions (like length or fromLatin1 or substr etc.) and I'd rather not write Yet Another String Class. I just want a little container which avoids accidental encoding mixups. I wonder whether anybody else has some experience with this and can share some useful techniques. EDIT: In my particular case, the API is used on Windows and Linux using MSVC 6 - MSVC 10 on Windows and gcc 3 & 4 on Linux.

    Read the article

  • Releasing from development into production in maven

    - by Bruce
    Hi all, I'm confused about the use of maven in development and production environments - I'm sure it's something simple that I'm missing. Grateful for any help. I set up maven inside eclipse on my local machine and wrote some software. I really like how it's made things like including dependent jars very easy. So that's my development environment. But now I want to release the project to production on a remote server. I've searched the documentation, but I can't figure out how it's supposed to work or what the maven best practice is. Are you supposed to: a) also be running maven on your production environment, and upload all your files to the production environment and rebuild your project there? (Something in me baulks at the idea of rebuilding 'released' code on the production server, so I'm fairly sure this isn't right.) b) use mvn package to create your jar file and then copy that up to production? (But then what of all those nice dependencies? Isn't there a danger that your tested code is now going to be running against different versions of the dependent jars in the production environment, possibly breaking your code? Or missing a jar?) c) something else that I'm not figuring out? Thanks in advance for any help!
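    For option b), one common way to take the dependencies along is to have the build produce a single self-contained artifact, e.g. with the maven-assembly-plugin (the version number is only an example); after mvn package, a *-jar-with-dependencies.jar appears in target/ and that one file is what gets copied to the server:

        <!-- in pom.xml, under <build><plugins> -->
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-assembly-plugin</artifactId>
          <version>2.2.1</version>
          <configuration>
            <descriptorRefs>
              <descriptorRef>jar-with-dependencies</descriptorRef>
            </descriptorRefs>
          </configuration>
          <executions>
            <execution>
              <phase>package</phase>
              <goals>
                <goal>single</goal>
              </goals>
            </execution>
          </executions>
        </plugin>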

    Read the article
