Search Results

Search found 3368 results on 135 pages for 'smart quotes'.


  • When asked "How do I make a website?" how do you answer?

    - by Luke CK
    A (non-technical) friend of mine has asked me how to make a website. I get this question all the time. After a few questions I found out that she has an idea that could turn into a commercial site. I described three options to her:

    a) Get a book/enroll in a class/follow some online tutorials and learn how to do it. She's pretty smart and her personality seems like a good match for this sort of thing, so I'm sure she could learn, but she doesn't have a lot of spare time. Maybe if she started with one of those WYSIWYG editors at first? I stressed that this would take longer than a couple of weekends of playing around.

    b) Hire someone to build it. This ranges from ultra cheap to ultra expensive, crappy to good, and everything in between. I didn't mention sites like Rentacoder because she hasn't worked on a project like this before and doesn't know what to ask for. At this stage she'd likely ask for a YouTube-MySpace-Google for a few hundred bucks because she doesn't yet understand just how much is involved.

    c) Find someone technical and partner up with them. I explained that this can either work really well or be a disaster, because she'd have to give up some of her ownership of the idea.

    How do you respond in these situations?

    Read the article

  • Is there a way to create subdatabases as a kind of subfolder in SQL Server?

    - by user193655
    I am creating an application where there is a main DB and where other data is stored in secondary databases. The secondary databases follow a "plugin" approach. I use SQL Server. A simple installation of the application will have just the main DB, while as an option one can activate more "plug-ins", and for every plug-in there will be a new database. Why I made this choice: I have to work with an existing legacy system, and this is the smartest way I could figure out to implement the plugin system. The main DB and the plugin DBs have exactly the same schema (basically the plugin DBs hold some "special content", some important data that one can use as a kind of template - think of a letter template, for example - in the application). The plugin DBs are therefore used in read-only mode; they are a "repository of content". The "smart" thing is that the main application can also be used by "plugin writers": they just write a DB by inserting content, and by making a backup of that database they have created a potential plugin (this is why all DBs have the same schema). These plugin DBs are downloaded from the internet whenever a content upgrade is available; each time, the full plugin DB is destroyed and a new one with the same name is created. This is for simplicity, and also because the size of these DBs is generally small. Now this works, but I would prefer to organize the DBs in a kind of tree structure, so that I can force the plugin DBs to be "sub-DBs" of the main application DB. As a workaround I am thinking of using naming rules, like: ApplicationDB (for the main application DB) and ApplicationDB_PlugIn_N (for the N-th plugin DB). When I search for plugin 1 I try to connect to ApplicationDB_PlugIn_1; if I don't find the DB I raise an error. This situation can happen, for example, if some DBA renamed ApplicationDB_Plugin_1. So, since those plugin DBs really depend only on ApplicationDB, I was trying to "do the subfolder trick". Can anyone suggest a way to do this? And can you comment on the self-made plugin approach I described above?

    Read the article

  • Best way to not update empty posts

    - by user1533106
    Hello, I'm using CodeIgniter, and the page in question just updates info about a user. If the user goes to the page and edits the values, and some posted fields come back as "" or empty (same thing), then those fields should not be updated - the query should just skip them. I have some logic for this, but it's a bit ugly and it'll take a lot of time:

        $nome = "'nome' =>" . $this->input->post('nome') . "'";
        $sobrenome = "'sobrenome' =>" . $this->input->post('sobrenome') . "'";
        if($nome != ""){
            $nome = "'nome' =>" . $this->input->post('nome') . "'";
        }else{
            $nome = "";
        }
        if($sobrenome != ""){
            $sobrenome = "'sobrenome' =>" . $this->input->post('sobrenome') . "'";
        }else{
            $sobrenome = "";
        }
        $data = array($nome, $sobrenome);

    The problem is, I have a lot of fields :( If anyone knows a smart or better way, please let me know.

    Read the article

  • Write file need to optimised for heavy traffic part 2

    - by Clayton Leung
    For anyone interested in seeing where I come from, you can refer to part 1, but it is not necessary: write file need to optimised for heavy traffic. Below is a snippet of code I have written to capture some financial tick data from the broker API. The code runs without error. I need to optimize it, because in peak hours the zf_TickEvent method will be called more than 10000 times a second. I use a MemoryStream to hold the data until it reaches a certain size, then I output it into a text file. The broker API is only single threaded.

        void zf_TickEvent(object sender, ZenFire.TickEventArgs e)
        {
            outputString = string.Format("{0},{1},{2},{3},{4}\r\n",
                e.TimeStamp.ToString(timeFmt),
                e.Product.ToString(),
                Enum.GetName(typeof(ZenFire.TickType), e.Type),
                e.Price,
                e.Volume);
            fillBuffer(outputString);
        }

        public class memoryStreamClass
        {
            public static MemoryStream ms = new MemoryStream();
        }

        void fillBuffer(string outputString)
        {
            byte[] outputByte = Encoding.ASCII.GetBytes(outputString);
            memoryStreamClass.ms.Write(outputByte, 0, outputByte.Length);
            if (memoryStreamClass.ms.Length > 8192)
            {
                emptyBuffer(memoryStreamClass.ms);
                memoryStreamClass.ms.SetLength(0);
                memoryStreamClass.ms.Position = 0;
            }
        }

        void emptyBuffer(MemoryStream ms)
        {
            FileStream outStream = new FileStream("c:\\test.txt", FileMode.Append);
            ms.WriteTo(outStream);
            outStream.Flush();
            outStream.Close();
        }

    Question: Any suggestion to make this even faster? I will try to vary the buffer length, but in terms of code structure, is this (almost) the fastest? When the MemoryStream is filled up and I am emptying it to the file, what would happen to the new data coming in? Do I need to implement a second buffer to hold that data while I am emptying my first buffer, or is C# smart enough to figure it out? Thanks for any advice.
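
    For what it's worth, a minimal double-buffering sketch along the lines the question asks about, assuming the broker callback stays single-threaded; the class name, buffer size, and the use of Task/File.AppendAllText are illustrative choices, not part of the original code:

        using System.IO;
        using System.Text;
        using System.Threading.Tasks;

        // Sketch: the callback appends to one buffer while a background task writes
        // the other, so the callback never blocks on disk I/O.
        class TickWriter
        {
            private StringBuilder active = new StringBuilder(16 * 1024);
            private StringBuilder spare = new StringBuilder(16 * 1024);
            private Task pendingWrite;                      // last flush, if one is still running
            private readonly string path = @"c:\test.txt";

            // Called from the single-threaded broker callback.
            public void Append(string line)
            {
                active.Append(line);
                if (active.Length > 8192)
                    Flush();
            }

            private void Flush()
            {
                if (pendingWrite != null)
                    pendingWrite.Wait();                    // keep writes ordered; normally already finished

                StringBuilder full = active;                // swap: new ticks go into the other buffer
                active = spare;
                spare = full;

                string data = full.ToString();
                full.Length = 0;                            // clear the buffer for reuse
                pendingWrite = Task.Factory.StartNew(() => File.AppendAllText(path, data));
            }
        }

    Measuring first is still the right call; simply keeping one FileStream open for the life of the program (instead of reopening it on every flush) may already remove most of the cost.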

    Read the article

  • Overhead of calling tiny functions from a tight inner loop? [C++]

    - by John
    Say you see a loop like this one:

        for(int i=0; i<thing.getParent().getObjectModel().getElements(SOME_TYPE).count(); ++i)
        {
            thing.getData().insert(
                thing.GetData().Count(),
                thing.getParent().getObjectModel().getElements(SOME_TYPE)[i].getName()
            );
        }

    If this was Java I'd probably not think twice. But in performance-critical sections of C++, it makes me want to tinker with it... however I don't know if the compiler is smart enough to make it futile. This is a made-up example, but all it's doing is inserting strings into a container. Please don't assume any of these are STL types; think in general terms about the following: Is having a messy condition in the for loop going to get evaluated each time, or only once? If those get methods are simply returning references to member variables on the objects, will they be inlined away? Would you expect custom [] operators to get optimized at all? In other words, is it worth the time (in performance only, not readability) to convert it to something like:

        ElementContainer &source = thing.getParent().getObjectModel().getElements(SOME_TYPE);
        int num = source.count();
        Store &destination = thing.getData();
        for(int i=0; i<num; ++i)
        {
            destination.insert(thing.GetData().Count(), source[i].getName());
        }

    Remember, this is a tight loop, called millions of times a second. What I wonder is whether all this will shave a couple of cycles per loop or something more substantial? Yes, I know the quote about "premature optimisation". And I know that profiling is important. But this is a more general question about modern compilers, Visual Studio in particular.

    Read the article

  • Extract information from a Func<bool, T> or similar lambda

    - by Syska
    I'm trying to build a generic cache layer (ICacheRepository). Say I have the following:

        public class Person
        {
            public int PersonId { get; set; }
            public string Firstname { get; set; }
            public string Lastname { get; set; }
            public DateTime Added { get; set; }
        }

    And I have something like this:

        list.Where(x => x.Firstname == "Syska");

    Here I want to extract the above information to see whether the query supplied the "PersonId" - which it did not - so I don't want to cache it. But let's say I run a query like this:

        list.Where(x => x.PersonId == 10);

    Since PersonId is my key, I want to cache it, with a key like "Person_10", so I can later fetch it from the cache. I know it's possible to extract the information with Expression<Func<>>, but there seems to be a big overhead in doing this (compiling, extracting the constant values etc., and a bunch of caching to be sure to parse it right). Is there a framework for this? Or some smart/golden way of doing it?
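
    For what it's worth, a rough sketch of the kind of expression-tree inspection the question describes, assuming the predicate has the simple x.Property == constant shape; the helper name, key format and keyProperty parameter are made up for illustration:

        using System;
        using System.Linq.Expressions;

        static class CacheKeyHelper
        {
            // Returns e.g. "Person_10" for x => x.PersonId == 10, or null when the
            // predicate is not a simple "key property == constant" comparison.
            public static string TryGetKey<T>(Expression<Func<T, bool>> predicate, string keyProperty)
            {
                var body = predicate.Body as BinaryExpression;
                if (body == null || body.NodeType != ExpressionType.Equal)
                    return null;

                var member = body.Left as MemberExpression;
                var constant = body.Right as ConstantExpression;
                if (member == null || constant == null || member.Member.Name != keyProperty)
                    return null;

                return typeof(T).Name + "_" + constant.Value;
            }
        }

    With this, x => x.PersonId == 10 yields "Person_10", while x => x.Firstname == "Syska" yields null and is skipped. Captured local variables show up as field accesses on a compiler-generated closure rather than as ConstantExpression, so a real implementation would also need to handle that case.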

    Read the article

  • What makes my code DDD (domain-driven design) qualified?

    - by oykuo
    Hi all, I'm new to DDD and am thinking about using this design technique in my project. However, what strikes me about DDD is how basic the idea is. Unlike other design techniques such as MVC and TDD, it doesn't seem to contain any ground-breaking ideas. For example, I'm sure some of you will share the feeling that the idea of root aggregates and repositories is nothing new, because when you were writing MVC web applications you had to have one single master object (i.e. the root aggregate) that contains other minor objects (i.e. value objects and entities) in the model layer in order to send data to a strongly typed view. To me, the only new ideas in DDD are probably the "smart" entities (i.e. you are supposed to have business rules on root aggregates) and the separation between value objects, root aggregates and entities. Can anyone tell me if I have missed anything here? If that's all there is to DDD, and I update one of my existing MVC applications with the above two new ideas, can I claim it's a TDD, MVC and DDD application?

    Read the article

  • SQL: Limit rows linked to each joined row

    - by SolidSnakeGTI
    Hello, I have a situation that requires a certain result set from a MySQL query. Let's look at the current query first, and then I'll ask my question:

        SELECT thread.dateline AS tdateline, post.dateline AS pdateline, MIN(post.dateline)
        FROM thread AS thread
        LEFT JOIN post AS post ON (thread.threadid = post.threadid)
        LEFT JOIN forum AS forum ON (thread.forumid = forum.forumid)
        WHERE post.postid != thread.firstpostid
            AND thread.open = 1
            AND thread.visible = 1
            AND thread.replycount >= 1
            AND post.visible = 1
            AND (forum.options & 1)
            AND (forum.options & 2)
            AND (forum.options & 4)
            AND forum.forumid IN (1,2,3)
        GROUP BY post.threadid
        ORDER BY tdateline DESC, pdateline ASC

    As you can see, mainly I need to select the dateline of threads from the 'thread' table, in addition to the dateline of the second post of each thread, all under the conditions you see in the WHERE clause. Since each thread has many posts, and I need only one result per thread, I've used the GROUP BY clause for that purpose. This query returns only one post's dateline with its related unique thread. My questions are: How do I limit the returned threads per forum? Suppose I need only 5 threads - as a maximum - to be returned for each forum declared in the WHERE clause 'forum.forumid IN(1,2,3)'; how can this be achieved? And are there any recommendations for optimizing this query (of course, after solving the first point)? Notes: I prefer not to use sub-queries, but if that's the only solution available I'll accept it. Double queries are not recommended. I'm sure there's a smart solution for this situation. I'm using MySQL 4.1+, but if you know the answer for another engine, just share. Advice appreciated in advance :)

    Read the article

  • Tools for debugging when debugger can't get you there?

    - by brian1001
    I have a fairly complex (approx. 200,000 lines of C++ code) application that has decided to crash, although it crashes a little differently on a couple of different systems. The trick is that it doesn't crash or trap out in the debugger. It only crashes when the application .EXE is run independently (either the debug EXE or the release EXE - both behave the same way). When it crashes in the debug EXE and I get it to start debugging, the call stack is buried down in the Windows/MFC part of things and doesn't reflect any of my code. Perhaps I'm seeing stack corruption of some sort, but I'm just not sure at the moment. My question is more general - it's about tools and techniques. I'm an old programmer (C and assembly language days), and a relative newcomer (couple/few years) to C++ and Visual Studio (2003 for this project). Are there tricks or techniques anyone's had success with in tracking down crashing issues when you cannot make the software crash in a debugger session? Stuff like permission issues, for example? The only thing I've thought of is to start plugging debug/status messages into a logfile, but that's a long, hard way to go. Been there, done that. Any better suggestions? Am I missing some tools that would help? Is VS 2008 better for this kind of thing? Thanks for any guidance. Some very smart people here (you know who you are!). Cheers.

    Read the article

  • How to calculate the y-pixels of someone's weight on a graph? (math+programming question)

    - by RexOnRoids
    I'm not as smart as some of you geniuses. I need some help from a math whiz. My app draws a graph of the user's weight over time. I need a surefire way to always get the right pixel position to draw the weight point at for a given weight. For example, say I want to plot the weight 80.0 (kg) on the graph when the range of weights is 80.0 to 40.0 kg. I want to be able to plug in the weight (given that I also know the highest and lowest weights in the range) and get the pixel result 400 (y) (for the top of the graph). The graph is 300 pixels high (starts at 100 and ends at 400). The highest weight, 80 kg, would be plotted at 400, while the lowest weight, 40 kg, would be plotted at 100. The intermediate weights should be plotted appropriately. I tried this but it does not work:

        -(float)weightToPixel:(float)theWeight
        {
            float graphMaxY = 400; //The TOP of the graph
            float graphMinY = 100; //The BOTTOM of the graph
            float yOffset = 100; //Graph itself is offset 100 pixels in the Y direction
            float coordDiff = graphMaxY - graphMinY; //The size in pixels of the graph
            float weightDiff = self.highestWeight - self.lowestWeight; //The weight gap
            float pixelIncrement = coordDiff / weightDiff;
            float weightY = (theWeight * pixelIncrement) - (coordDiff - yOffset); //The return value
            return weightYpixel;
        }
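
    For what it's worth, the plain linear mapping for this (just the math, not the poster's code) is:

        y = graphMinY + (weight - lowestWeight) / (highestWeight - lowestWeight) * (graphMaxY - graphMinY)

    With the numbers above, weight 80 gives 100 + (40/40) * 300 = 400, weight 40 gives 100 + 0 * 300 = 100, and a mid-range weight of 60 gives 100 + 0.5 * 300 = 250.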

    Read the article

  • PHP - How do you secure a unique variable name?

    - by 102319141763223461745
    This function, cropit, which I shamelessly stole off the internet, crops a 90x60 area from an existing image. In this code, when I use the function for more than one item (image), one will display on top of the other (they come to occupy the same output space). I think this is because the function has the same (static) name ($dest) for the destination of the image when it's created (imagecopy). I tried, as you can see, to include a second argument to the cropit function which would serve as the "name" of the $dest variable, but it didn't work. In the interest of full disclosure, I have 22 hours of PHP experience (incidentally the same number of hours since I last slept) and I am not that smart to begin with. Even if there's something else at work here entirely, it seems to me that generally it must be useful to have a way to ensure that a variable is always given a unique name.

        function cropit($srcimg, $dest)
        {
            $im = imagecreatefromjpeg($srcimg);
            $img_width = imagesx($im);
            $img_height = imagesy($im);
            $width = 90;
            $height = 60;
            $tlx = floor($img_width / 2) - floor($width / 2);
            $tly = floor($img_height / 2) - floor($height / 2);
            if ($tlx < 0) { $tlx = 0; }
            if ($tly < 0) { $tly = 0; }
            if (($img_width - $tlx) < $width) { $width = $img_width - $tlx; }
            if (($img_height - $tly) < $height) { $height = $img_height - $tly; }
            $dest = imagecreatetruecolor($width, $height);
            imagecopy($dest, $im, 0, 0, $tlx, $tly, $width, $height);
            imagejpeg($dest);
            imagedestroy($dest);
        }

        $img = "imagefolder\imageone.jpg";
        $img2 = "imagefolder\imagetwo.jpg";
        cropit($img, $i1);
        cropit($img2, $i2);
        ?>

    Read the article

  • Problem with waitable timers in Windows (timeSetEvent and CreateTimerQueueTimer)

    - by MusiGenesis
    I need high-resolution (more accurate than 1 millisecond) timing in my application. The waitable timers in Windows are (or can be made) accurate to the millisecond, but if I need a precise periodicity of, say, 35.7142857141 milliseconds, even a waitable timer with a 36 ms period will drift out of sync quickly. My "solution" to this problem (in ironic quotes because it's not working quite right) is to use a series of one-shot timers where I use the expiration of each timer to call the next timer. Normally a process like this would be subject to cumulative error over time, but in each timer callback I check the current time (with System.Diagnostics.Stopwatch) and use this to calculate what the period of the next timer needs to be (so if a timer happens to expire a little late, the next timer will automagically have a shorter period to compensate). This works as expected, except that after maybe 10-15 seconds the timer system seems to get bogged down, and a few timer callbacks here and there arrive anywhere from 25 to 100 milliseconds late. After a couple of seconds the problem goes away and everything runs smoothly again for 10-15 seconds, and then the stuttering again. Since I'm using Stopwatch to set each timer period, I'm also using it to monitor the arrival times of each timer callback. During the smooth-running periods, most (maybe 95%) of the intervals are either 35 or 36 milliseconds, and no intervals are ever more than 5 milliseconds away from the expected 35.7142857143. During the "glitchy" stretches, the distribution of intervals is very nearly identical, except that a very small number are unusually large (a couple more than 60 ms and one or two longer than 100 ms during maybe a 3-second stretch). This stuttering is very noticeable, and it's what I'm trying to fix, if possible. For the high-resolution timer, I was using the extremely antique timeSetEvent() multimedia timer from winmm.dll. In pursuit of this problem, I switched to using CreateTimerQueueTimer (along with timeBeginPeriod to set the high-resolution), but I'm seeing the same problem with both timer mechanisms. I've tried experimenting with the various flags for CreateTimerQueueTimer which determine which thread the timer runs on, but the stuttering appears no matter what. Is this just a fundamental problem with using timers in this way (i.e. using each one-shot timer to call the next)? If so, do I have any alternatives? One thing I was considering was to determine how many consecutive 1-millisecond-accuracy ticks would keep my within some arbitrary precision limit before I need to reset the timer. So, for example, if I wanted a 35.71428 period, I could let a 36 ms timer elapse 15 times before it was off by 5 milliseconds, then kill it and start a new one.
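
    For what it's worth, a small sketch of the drift-correction bookkeeping described above, assuming a Stopwatch-based clock; the class name is made up and the actual timer-scheduling call (whichever API ends up being used) is left out:

        using System;
        using System.Diagnostics;

        // Tracks absolute due times so a chain of one-shot timers doesn't accumulate error.
        class DriftFreePeriod
        {
            private readonly double periodMs;              // e.g. 35.7142857143
            private readonly Stopwatch clock = Stopwatch.StartNew();
            private double nextDueMs;                      // ideal due time on the stopwatch timeline

            public DriftFreePeriod(double periodMs)
            {
                this.periodMs = periodMs;
                nextDueMs = periodMs;
            }

            // Call from each timer callback to get the delay for the next one-shot timer.
            public uint NextDelayMilliseconds()
            {
                nextDueMs += periodMs;
                double nowMs = clock.ElapsedTicks * 1000.0 / Stopwatch.Frequency;
                double delay = nextDueMs - nowMs;
                if (delay < 1.0)
                    delay = 1.0;                           // running late: fire as soon as possible
                return (uint)Math.Round(delay);
            }
        }

    Because each due time is computed from the start of the stopwatch rather than from the previous callback, a late callback automatically shortens the following delay, which is the behaviour described above.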

    Read the article

  • 3Ware 9650SE RAID-6, two degraded drives, one ECC, rebuild stuck

    - by cswingle
    This morning I came in the office to discover that two of the drives on a RAID-6, 3ware 9650SE controller were marked as degraded and it was rebuilding the array. After getting to about 4%, it got ECC errors on a third drive (this may have happened when I attempted to access the filesystem on this RAID and got I/O errors from the controller). Now I'm in this state:

        > /c2/u1 show
        Unit     UnitType  Status       %RCmpl  %V/I/M  Port  Stripe  Size(GB)
        ------------------------------------------------------------------------
        u1       RAID-6    REBUILDING   4%(A)   -       -     64K     7450.5
        u1-0     DISK      OK           -       -       p5    -       931.312
        u1-1     DISK      OK           -       -       p2    -       931.312
        u1-2     DISK      OK           -       -       p1    -       931.312
        u1-3     DISK      OK           -       -       p4    -       931.312
        u1-4     DISK      OK           -       -       p11   -       931.312
        u1-5     DISK      DEGRADED     -       -       p6    -       931.312
        u1-6     DISK      OK           -       -       p7    -       931.312
        u1-7     DISK      DEGRADED     -       -       p3    -       931.312
        u1-8     DISK      WARNING      -       -       p9    -       931.312
        u1-9     DISK      OK           -       -       p10   -       931.312
        u1/v0    Volume    -            -       -       -     -       7450.5

    Examining the SMART data on the three drives in question, the two that are DEGRADED are in good shape (PASSED without any Current_Pending_Sector or Offline_Uncorrectable errors), but the drive listed as WARNING has 24 uncorrectable sectors. And, the "rebuild" has been stuck at 4% for ten hours now. So:

    How do I get it to start actually rebuilding? This particular controller doesn't appear to support /c2/u1 resume rebuild, and the only rebuild command that appears to be an option is one that wants to know what disk to add (/c2/u1 start rebuild disk=<p:-p...> [ignoreECC], according to the help). I have two hot spares in the server, and I'm happy to engage them, but I don't understand what it would do with that information in the current state it's in.

    Can I pull out the drive that is demonstrably failing (the WARNING drive), when I have two DEGRADED drives in a RAID-6? It seems to me that the best scenario would be for me to pull the WARNING drive and tell it to use one of my hot spares in the rebuild. But won't I kill the thing by pulling a "good" drive in a RAID-6 with two DEGRADED drives?

    Finally, I've seen reference in other posts to a bad bug in this controller that causes good drives to be marked as bad and that upgrading the firmware may help. Is flashing the firmware a risky operation given the situation? Is it likely to help or hurt wrt the rebuilding-but-stuck-at-4% RAID? Am I experiencing this bug in action? Advice outside the spiritual would be much appreciated. Thanks.

    Read the article

  • How to get MSBuild Exec to run a Java program?

    - by Vaccano
    I am trying to run a command line action in my Team Build (MSBuild). When I run it on the command line of the build machine it works fine, but when run in the build script I get an "exited with code 3". This is the command that I am running:

        C:\Program Files\Wavelink\Avalanche\PackageBuilder>.\jresdk\bin\java -classpath "WLUtil.jar;WLPackageBuilder.jar" com.wavelink.buildpkg.AvalanchePackageBuilder /build PackageName

    This command only works when run from the above directory (I have tried running it from c:\ with the full path and it fails). When I try to run it using MSBuild, this is my statement:

        <PropertyGroup>
          <!--Working directory of the Package Builder Call-->
          <PkgBldWorkingDir>&quot;C:\Program Files\Wavelink\Avalanche\PackageBuilder&quot;</PkgBldWorkingDir>
          <!--Command line to run to make Package builder "go"-->
          <PkgBldRun>.\jresdk\bin\java&quot; -classpath &quot;WLUtil.jar;WLPackageBuilder.jar&quot; com.wavelink.buildpkg.AvalanchePackageBuilder</PkgBldRun>
        </PropertyGroup>

        <!--Run package builder command line to update the Ava File.-->
        <Exec ContinueOnError="true"
              WorkingDirectory="$(PackageBuilderWorkingDir)"
              Command="$(PkgBldRun) /build PackageName"/>

    As I said above, this "exits with code 3". This is the full output:

        Task "Exec"
          Command:
          .\jresdk\bin\java -classpath "WLUtil.jar;WLPackageBuilder.jar" com.wavelink.buildpkg.AvalanchePackageBuilder /build PackageName
          The system cannot find the path specified.
        MSBUILD : warning MSB3073: The command ".\jresdk\bin\java -classpath "WLUtil.jar;WLPackageBuilder.jar" com.wavelink.buildpkg.AvalanchePackageBuilder /build PackageName" exited with code 3.
        The previous error was converted to a warning because the task was called with ContinueOnError=true.
        Build continuing because "ContinueOnError" on the task "Exec" is set to "true".
        Done executing task "Exec" -- FAILED.

    It says it can't find the file (who knows what file). I have tried it with and without the quotes (") in the working directory, and with a full path as the command (which gives the same error as when run on the command line). Any ideas on how to make this run a command line action in MSBuild?

    Read the article

  • Can I force JAXB not to convert " into &quot;, for example, when marshalling to XML?

    - by Elliot
    I have an Object that is being marshalled to XML using JAXB. One element contains a String that includes quotes ("). The resulting XML has &quot; where the " existed. Even though this is normally preferred, I need my output to match a legacy system. How do I force JAXB to NOT convert the HTML entities?

    Thank you for the replies. However, I never see the handler escape() called. Can you take a look and see what I'm doing wrong? Thanks!

        package org.dc.model;

        import java.io.IOException;
        import java.io.Writer;
        import javax.xml.bind.JAXBContext;
        import javax.xml.bind.JAXBException;
        import javax.xml.bind.Marshaller;
        import org.dc.generated.Shiporder;
        import com.sun.xml.internal.bind.marshaller.CharacterEscapeHandler;

        public class PleaseWork {
            public void prettyPlease() throws JAXBException {
                Shiporder shipOrder = new Shiporder();
                shipOrder.setOrderid("Order's ID");
                shipOrder.setOrderperson("The woman said, \"How ya doin & stuff?\"");

                JAXBContext context = JAXBContext.newInstance("org.dc.generated");
                Marshaller marshaller = context.createMarshaller();
                marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, Boolean.TRUE);
                marshaller.setProperty(CharacterEscapeHandler.class.getName(),
                        new CharacterEscapeHandler() {
                            @Override
                            public void escape(char[] ch, int start, int length,
                                    boolean isAttVal, Writer out) throws IOException {
                                out.write("Called escape for characters = " + ch.toString());
                            }
                        });
                marshaller.marshal(shipOrder, System.out);
            }

            public static void main(String[] args) throws Exception {
                new PleaseWork().prettyPlease();
            }
        }

    The output is this:

        <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
        <shiporder orderid="Order's ID">
            <orderperson>The woman said, &quot;How ya doin &amp; stuff?&quot;</orderperson>
        </shiporder>

    and as you can see, the callback is never displayed. (Once I get the callback being called, I'll worry about having it actually do what I want.)

    Read the article

  • Get specific value from JSON string using JSON.Net

    - by dean nolan
    I am trying to get a value from a JSon formatted string. It was to get album info from a website called Freebase. My result is like this: { "a0": { "code": "/api/status/error", "messages": [ { "code": "/api/status/error/mql/result", "info": { "count": 20, "result": [ { "album": [ { "name": "Definitely Maybe", "release_date": "1994-08-30" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Most Wanted Rock 1", "release_date": null } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Alternative 90s", "release_date": null } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Live Forever: Best of Britpop", "release_date": "2003-03-03" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "The Best... In the World... Ever! Volume 5", "release_date": "1997-03-31" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Live 4 Ever", "release_date": "1998-06-29" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "De Afrekening, Volume 8", "release_date": "1994" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Now That's What I Call Music! 33", "release_date": null } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Q: Anthems", "release_date": null } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "The Best Anthems... Ever! Volume 2", "release_date": null } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "1995 Mercury Music Prize: Ten Albums of the Year", "release_date": null } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Now That's What I Call Music! 1994", "release_date": null } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Indie Top 20, Volume 21", "release_date": "1995" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Dad Rocks!", "release_date": "2006-06-05" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Untitled", "release_date": null } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "The Greatest Hits of 1994", "release_date": "1994-10" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Top of the Pops 2", "release_date": "2000-03-27" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Q: Anthems (disc 1)", "release_date": null } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Jamie Oliver's Cookin'", "release_date": "2001" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" }, { "album": [ { "name": "Killer Buzz", "release_date": "2001" } ], "artist": "Oasis", "name": "Live Forever", "type": "/music/track" } ] }, "message": "Unique query may have at most one result. 
Got 20", "path": "", "query": { "album": [ { "name": null, "release_date": null, "sort": "release_date" } ], "artist": "Oasis", "error_inside": ".", "name": "Live Forever", "type": "/music/track" } } ] }, "code": "/api/status/ok", "status": "200 OK", "transaction_id": "cache;cache04.p01.sjc1:8101;2010-03-30T18:04:20Z;0035" } I am looking to get the first album title, Definitely Maybe, from this list. I have tried parsing the string like this: JObject o = JObject.Parse(jsonString); string album = (string)o[""]; But no matter what I have tried I don't know what to put in those quotes. How would I get this specific value or be able to search for it? Thanks

    Read the article

  • availability of Win32_MountPoint and Win32_Volume on Windows XP?

    - by SteveC
    From the MSDN articles I've found -- http://msdn.microsoft.com/en-us/library/aa394515(v=VS.85).aspx -- Win32_Volume and Win32_MountPoint aren't available on Windows XP. However, I'm developing a C# app on Windows XP (64bit), and I can get to those WMI classes just fine. Users of my app will be on Windows XP SP2 with .NET 3.5 SP1. Googling around, I can't determine whether I can count on this or not. Am I successful on my system because of one or more of the following:

    - Windows XP Service Pack 2?
    - Visual Studio 2008 SP1 was installed?
    - .NET 3.5 SP1?

    Should I use something other than WMI to get at the volume/mountpoint info? Below is sample code that's working...

        public static Dictionary<string, NameValueCollection> GetAllVolumeDeviceIDs()
        {
            Dictionary<string, NameValueCollection> ret = new Dictionary<string, NameValueCollection>();
            // retrieve information from Win32_Volume
            try
            {
                using (ManagementClass volClass = new ManagementClass("Win32_Volume"))
                {
                    using (ManagementObjectCollection mocVols = volClass.GetInstances())
                    {
                        // iterate over every volume
                        foreach (ManagementObject moVol in mocVols)
                        {
                            // get the volume's device ID (will be key into our dictionary)
                            string devId = moVol.GetPropertyValue("DeviceID").ToString();
                            ret.Add(devId, new NameValueCollection());
                            //Console.WriteLine("Vol: {0}", devId);

                            // for each non-null property on the Volume, add it to our NameValueCollection
                            foreach (PropertyData p in moVol.Properties)
                            {
                                if (p.Value == null) continue;
                                ret[devId].Add(p.Name, p.Value.ToString());
                                //Console.WriteLine("\t{0}: {1}", p.Name, p.Value);
                            }

                            // find the mountpoints of this volume
                            using (ManagementObjectCollection mocMPs = moVol.GetRelationships("Win32_MountPoint"))
                            {
                                foreach (ManagementObject moMP in mocMPs)
                                {
                                    // only care about adding directory
                                    // Directory prop will be something like "Win32_Directory.Name=\"C:\\\\\""
                                    string dir = moMP["Directory"].ToString();
                                    // find opening/closing quotes in order to get the substring we want
                                    int first = dir.IndexOf('"') + 1;
                                    int last = dir.LastIndexOf('"');
                                    string dirSubstr = dir.Substring(first, last - first);
                                    // use GetFullPath to normalize/unescape any extra backslashes
                                    string fullpath = Path.GetFullPath(dirSubstr);
                                    ret[devId].Add(MOUNTPOINT_DIRS_KEY, fullpath);
                                }
                            }
                        }
                    }
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine("Problem retrieving Volume information from WMI. {0} - \n{1}", ex.Message, ex.StackTrace);
                return ret;
            }
            return ret;
        }
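
    If WMI availability does turn out to be a problem, basic volume information is also reachable without WMI; here is a rough sketch using System.IO.DriveInfo (note it only covers drive-letter volumes, so it is not a full substitute for Win32_MountPoint):

        using System;
        using System.IO;

        class DriveInfoSketch
        {
            static void Main()
            {
                foreach (DriveInfo d in DriveInfo.GetDrives())
                {
                    Console.WriteLine("{0} ({1})", d.Name, d.DriveType);
                    if (d.IsReady)   // avoids an IOException on empty removable drives
                        Console.WriteLine("  label: {0}, format: {1}, free: {2} bytes",
                            d.VolumeLabel, d.DriveFormat, d.TotalFreeSpace);
                }
            }
        }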

    Read the article

  • XNA - Keyboard text input

    - by Sekhat
    Okay, so basically I want to be able to retrieve keyboard text. Like entering text into a text field or something. I'm only writing my game for windows. I've disregarded using Guide.BeginShowKeyboardInput because it breaks the feel of a self contained game, and the fact that the Guide always shows XBOX buttons doesn't seem right to me either. Yes it's the easiest way, but I don't like it. Next I tried using System.Windows.Forms.NativeWindow. I created a class that inherited from it, and passed it the Games window handle, implemented the WndProc function to catch WM_CHAR (or WM_KEYDOWN) though the WndProc got called for other messages, WM_CHAR and WM_KEYDOWN never did. So I had to abandon that idea, and besides, I was also referencing the whole of Windows forms, which meant unnecessary memory footprint bloat. So my last idea was to create a Thread level, low level keyboard hook. This has been the most successful so far. I get WM_KEYDOWN message, (not tried WM_CHAR yet) translate the virtual keycode with Win32 funcation MapVirtualKey to a char. And I get my text! (I'm just printing with Debug.Write at the moment) A couple problems though. It's as if I have caps lock on, and an unresponsive shift key. (Of course it's not however, it's just that there is only one Virtual Key Code per key, so translating it only has one output) and it adds overhead as it attaches itself to the Windows Hook List and isn't as fast as I'd like it to be, but the slowness could be more due to Debug.Write. Has anyone else approached this and solved it, without having to resort to an on screen keyboard? or does anyone have further ideas for me to try? thanks in advance. note: This is cross posted from the XNA Creators Forums, so if I get an answer there I'll post it here and Vice-Versa Question asked by Jimmy Maybe I'm not understanding the question, but why can't you use the XNA Keyboard and KeyboardState classes? My comment: It's because though you can read keystates, you can't get access to typed text as and how it is typed by the user. So let me further clarify. I want to implement being able to read text input from the user as if they are typing into textbox is windows. The keyboard and KeyboardState class get states of all keys, but I'd have to map each key and combination to it's character representation. This falls over when the user doesn't use the same keyboard language as I do especially with symbols (my double quotes is shift + 2, while american keyboards have theirs somewhere near the return key).

    Read the article

  • An alternative to Google Talk, AIM, MSN, et al [closed]

    - by mkaito
    I'm not entirely sure whether this part of stack exchange is the most adequate for my question, but it would seem to me that people sharing this kind of concern would converge either here, or possibly on a more unix-specific sub site. Either way, here goes. Background Feel free to skip to The Question, below. This should, however, help those interested understand where I'm coming from, and where I expect to get, messaging-wise. My online talking place-to-go has been IRC for the last fifteen years. I think it's a great protocol, and clients out there are very good. I still use, and will always continue to use IRC for most of my chat needs. But then, there is private instant messaging. While IRC can solve this with queries and DCC chats, the protocol just isn't meant to work too well on intermittent connections, such as a mobile device, where you can often walk around places with low signal. I used MSN for a while, but didn't like it. The concept was awesome, but I think Microsoft didn't get the implementation quite right. When they started adding all that eye candy, and my buddies started flooding me with custom icons and buzzing my screen to it's knees, I shut my account and told folks that missed me to just email or call me. Much whining happened, I got called many weird things for not using MSN, but folks eventually got over it. Next, Google Talk came along, and seemed to be a lot better than MSN ever was. The protocol was open, so I could use whatever client I felt a fancy for. With the advent of smart phones, I just got myself a gtalk client on the phone, and have had a really decent integrated mostly-universal IM solution. Over the last few months, all Google services have been feeling flaky. IMs will often arrive anywhere between twenty minutes and one hour after being sent, clients will randomly disconnect, client priorities seem to work sometimes, and sometimes just a random device of those connected will get an IM. I think the time has come to look for greener grass. The Question It's rather hard to put what I'm looking for into precise words. I guess I just want something that is kind of like MSN/Gtalk, but that doesn't let me down when I need it. IRC is pretty much perfect, but the protocol just isn't designed to work well on mobile devices. Really, at this point I'm considering sticking to IRC for desktop messaging, and SMS/email on the phone, but I hope that in this day and age there is something better out there.

    Read the article

  • In search of a network file system with extended caching to speed up file access

    - by Brecht Machiels
    I'm running a small home server that stores my documents. The disks in this server are in a RAID 1 configuration (using Linux md) and it's also periodically being backup up to an external hard drive to make sure I don't lose them. However, I'm always accessing the files from other computers on the home network using an SMB share, and this results in a considerable speed penalty (especially when connected over WLAN). This is quite annoying when editing large files, such as digital camera RAWs, for example. I've been looking for a solution to this problem. It would have to offer some kind of local caching to speed up the file access. The client would preferably not keep a copy of all data on the server, as it consists of a very large collection of photographs, most of which I will not access frequently. Instead, it should only cache the accessed files and sync the changes back in the background. Ideally, it would also do some smart read-ahead (cache the files that are in the same directory as the currently opened file, for examples), but I suppose that's asking a bit much. Synchronization should be automatic (on file change). Conflicting file changes (at the same time on different clients) are unlikely to happen in my use case, but I would prefer if they are handled properly (notification to the user). I've come across the following options, so far: something similar to Dropbox. iFolder seems to be the only thing that comes close, but its reputation (stability) and requirements put me off. A distributed file system such as OpenAFS. I'm not sure this will speed up file access. It is probably overkill for what I need. Maybe NFS or even Samba offer these possibilities. I read a bit about Windows' Offline Files, but its operation seems limited (at least on Windows XP). As this is just for personal use, I'm not willing to spend a lot of money. A free solution would be preferred. Also, the server needs to run on Linux, and I need a client for at least Windows.

    Read the article

  • Reconstructing Position in the Original Array from the Position in a Stripped Down Array

    - by aronchick
    I have a text file that contains a number of entries of the following form:

        <ID>
        <Time 1> --> <Time 2>
        <Quote (potentially multiple lines)>
        <New Line Separator>

        <ID>
        <Time 1> --> <Time 2>
        <Quote (potentially multiple lines)>
        <New Line Separator>

        <ID>
        <Time 1> --> <Time 2>
        <Quote (potentially multiple lines)>
        <New Line Separator>

    I have a very simple regex for stripping these out into a constant block so it's just:

        <Quote>
        <Quote>
        <Quote>

    What I'd like to do is present the quotes as a block to the user, have them select some of it (using jQuery.fieldSelection), and then use the selected content to go back out to the original array, so I can get timing and IDs. Because this has to go out to HTML, and the user has to be able to select the text on the screen, I can't do anything like hidden divs or hidden input fields. The only data I will have is the character range selected on screen. To be specific, this is what it looks like:

        1
        0:00 --> 0:05
        He was bored. So bored. His great intellect, seemingly inexhaustible, was hungry for new challenges but he was the last of the great innovators

        2
        0:05 --> 0:10
        - society's problems had all been solved.

        3
        0:11 --> 0:20
        All seemingly unconnected disciplines had long since been found to be related in horrifically elusive and contrived ways and he had mastered them all.

    And this is what I'd like to present to the user for selection:

        He was bored. So bored. His great intellect, seemingly inexhaustible, was hungry for new challenges but he was the last of the great innovators
        - society's problems had all been solved.
        All seemingly unconnected disciplines had long since been found to be related in horrifically elusive and contrived ways and he had mastered them all.

    Has anyone come across something like this before? Any ideas how to take the selected text, or selection position, and go backwards to the original metadata?
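
    The back-mapping itself is just bookkeeping over cumulative string lengths; here is a rough sketch of the idea in C# (in practice this would live in the page's JavaScript, using the offsets jQuery.fieldSelection reports, so every name below is purely illustrative):

        using System.Collections.Generic;

        // Given the quotes in display order, find which original entry a selection offset falls in.
        class QuoteIndex
        {
            private readonly List<int> starts = new List<int>();  // start offset of each quote in the displayed block

            public QuoteIndex(IList<string> quotes, string separator)
            {
                int offset = 0;
                foreach (string q in quotes)
                {
                    starts.Add(offset);
                    offset += q.Length + separator.Length;        // must match how the block was concatenated
                }
            }

            // Index into the original entries for a character offset in the displayed text.
            public int EntryAt(int selectionOffset)
            {
                for (int i = starts.Count - 1; i >= 0; i--)
                    if (selectionOffset >= starts[i])
                        return i;
                return 0;
            }
        }

    Once the entry index is known, the ID and the two timestamps come straight from the original parsed records.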

    Read the article

  • Block Google requests to 16k using pf firewall

    - by atmosx
    I'd like to block access to Google search using PF after a threshold of 17500 requests (connections established) in 24h, from a host running FreeBSD 9. What I came up with, after reading pf-faq, is this rule:

        pass out on $net proto tcp from any to 'www.google.com' port www flags S/SA keep state (max-src-conn 200, max-src-conn-rate 17500/86400)

    NOTE: 86400 is 24h in seconds. The rule should work, but PF is smart enough to know that www.google.com resolves to 5 different IPs. So my pfctl -sr output gives me this:

        pass out on vte0 inet proto tcp from any to 173.194.44.81 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
        pass out on vte0 inet proto tcp from any to 173.194.44.82 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
        pass out on vte0 inet proto tcp from any to 173.194.44.83 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
        pass out on vte0 inet proto tcp from any to 173.194.44.80 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)
        pass out on vte0 inet proto tcp from any to 173.194.44.84 port = http flags S/SA keep state (source-track rule, max-src-conn 200, max-src-conn-rate 17500/86400, src.track 86400)

    PF creates 5 different rules, 1 for each IP that Google resolves to. However I have the sense - without being 100% sure, since I haven't had the chance to test it - that the 17500/86400 limit applies to each IP separately. If that's the case - please confirm - then it's not what I want. In pf-faq there's another option called source-track-global:

        source-track
            This option enables the tracking of number of states created per source IP
            address. This option has two formats:
            + source-track rule - The maximum number of states created by this rule is
              limited by the rule's max-src-nodes and max-src-states options. Only state
              entries created by this particular rule count toward the rule's limits.
            + source-track global - The number of states created by all rules that use
              this option is limited. Each rule can specify different max-src-nodes and
              max-src-states options, however state entries created by any participating
              rule count towards each individual rule's limits. The total number of
              source IP addresses tracked globally can be controlled via the src-nodes
              runtime option.

    I tried to apply source-track global in the above rule without success. How can I use this option in order to achieve my goal? Any thoughts or comments are more than welcome, since I'm an amateur and don't fully understand PF yet. Thanks

    Read the article

  • Ubuntu Newbie Needs Assistance!!

    - by Steve Greene
    New Ubuntu User Needs Help!- version 9.10 does not communicate with laptop Hello folks, Several days ago, I installed Ubuntu 9.10 onto my Acer Aspire 3100 laptop, running it alongside Widows Vista as a dual-bootable system. Creation of the Ubuntu boot CD went fine, and the installation onto my hard drive was flawless. Ubuntu opens and behaves as I would expect, except for one little problem. For reasons unknown to me, Ubuntu is not communicating with my laptop's networking hardware, and I have no internet connectivity, even when sitting directly under the wireless router at the local library (literally), which puts out a wickedly-fast signal that my Windows Vista OS auto-detects and immediately connects to. Up in the right side of the Ubuntu desktop, I click on the network icon and it does not show a wireless connection at all, even though I am only a few feet from the router. At home, where I use a dialup modem, I also see no means of getting online. My modem is an HDAUDIO Soft Data Fax Modem with Smart CP,manufactured by CXT (Conexant Systems Inc., file version 4.0.13.0, and the driver version is 7.58.0.0). I desparately wish to convert to Ubuntu. I used Mac for ten years, and then Windows for ten years. Now, after 20 years, I want to live out my days as an open-source Ubuntu fanatic. I am ready to give the old status quo the boot! I am an advanced computer user, but I am not a programmer. I seek a solution that is user-friendly for normal people, something equivalent to a driver that I can easily install or activate that will allow Ubuntu to see my hardware and get me connected. Can anyone help me over this hopefully-little glitch so that I can move on in total Ubuntu bliss? My processor is a Mobile AMD Sempron Processor 3500+ at 1.80 GHz, 1.50 GB RAM, and a 32-bit Operating System. I am running Windows Vista Home Basic, Service Pack 2. My current email is [email protected] if you have a workable solution that does not require programmer status to implement. Surely this must be a simple fix that I simply am overlooking, but being the new guy on the block, I have yet to be enlightened. Thanks for your help in coming up to speed!! Steve Wanna' be Ubuntu Fanatic "If you're not living on the edge, you're taking up too much space."

    Read the article

  • Troubleshooting major performance issue: Is culprit Intel RST, Hard drive, or something else?

    - by Sean Killeen
    The Setup. I have the following components that come into play in this situation:

    - ASUS P8Z68 V/PRO motherboard
    - a RAID1 configuration (1x 1TB drive, 1x 2TB drive -- I explain below), accelerated with an SSD using Intel's RST software, and 1 TB drive standing by as a spare
    - Core i7 2600k
    - 32 GB RAM
    - Windows 8.1

    This box was designed to be a beast, and until just recently, was very good at being just that.

    What's Happening. The system has slowed to a crawl whenever it touches the disk. Things appear to work at normal speed when dealing with memory. For example, typing this is fine, but saving it to disk from Notepad gave me a 5-7 second pause when clicking save. The disks appear to be at 100% all the time (e.g. the disk-access light on the PC is solidly on -- not even any flashing). In ProcExp it appears that the disk is barely being utilized at all. Intel RST reports that everything is fine.

    Other Details. Prior to this happening, RST had reported that my drives were failing (one went bad, one was throwing SMART events). This made sense; they were at the tail end of their warranty and the PC is on almost all the time. I RMA'd the drives via Seagate. In the meantime, I'd purchased a 2TB drive because I didn't realize that the 1TB drives were under warranty. I figured I'd replace the other 1 TB drive with another 2 TB when it died, but then discovered the warranty. AFAIK, I haven't done any major updates since 8.1, and it worked fine after those.

    Question(s):

    - How can I troubleshoot this? What is the best way to try to figure out why the disks are being maxed out despite the OS reporting barely any disk usage and that everything is OK?
    - Given the failures, etc. that I describe above, is it possible that the problem could be the I/O on the motherboard itself? If so, how would I even be able to diagnose it?
    - I'm betting the drives that Seagate gave me are refurbished (didn't think to look; that's dumb). Is it possible that the same model drive, refurbished, could somehow cause this?
    - In terms of how RAID1 works, is it possible that one drive is "falling behind" somehow, and that the RAID1 is constantly trying to fix the mirroring? If so, it seems like Intel RST would report on it, but I wanted to consider it as an option.

    Read the article

  • Why do I get extra, unexpected results with my ack regex?

    - by Gauthier
    I'm finally learning regexps and training with ack. I believe this uses Perl regexps. I want to match all lines where the first non-blank characters are if (<word> !, with any number of spaces in between the elements. This is what I came up with:

        ^[ \t]*if *\(\w+ *!

    It only nearly worked. ^[ \t]* is wrong, since it matches one or none [space or tab]. What I want is to match anything that may contain only space or tab (or nothing). For example these should not match:

        // if (asdf != 0)
        else if (asdf != 1)

    How can I modify my regexp for that?

    EDIT adding command line:

        ack -i --group -a '^\s*if *\(\w+ *!' c:/work/proj/proj

    Note the single quotes; I'm not so sure about them anymore. My search base is a larger code base. It does include matching expressions (quite some), but I even get, for example:

        274: }else if (y != 0)

    as a result of the above command.

    EDIT adding the result of mobrule's test. Mobrule, thanks for providing me a text to test on. I'll copy here what I get at my prompt:

        C:\Temp\regex>more ack.test
        # ack.test
        if (asdf != 0) # no spaces - ok
        if (asdf != 0) # single space - ok
        if (asdf != 0) # single tab - ok
        if (asdf != 0) # multiple space - ok
        if (asdf != 0) # multiple tab - ok
        if (asdf != 0) # spaces + tab ok
        if (asdf != 0) # tab + space ok
        if (asdf != 0) # space + tab + space ok
        // if (asdf != 0) # not ok
        } else if (asdf != 0) # not ok

        C:\Temp\regex>ack '^[ \t]*if *\(\w+ *!' ack.test

        C:\Temp\regex>"C:\Program\git\bin\perl.exe" C:\bat\ack.pl '[ \t]*if *\(\w+ *!' ack.test
        if (asdf != 0) # no spaces - ok
        if (asdf != 0) # single space - ok
        if (asdf != 0) # single tab - ok
        if (asdf != 0) # multiple space - ok
        if (asdf != 0) # multiple tab - ok
        if (asdf != 0) # spaces + tab ok
        if (asdf != 0) # tab + space ok
        if (asdf != 0) # space + tab + space ok
        // if (asdf != 0) # not ok
        } else if (asdf != 0) # not ok

    The problem is in my call to my ack.bat! ack.bat contains:

        "C:\Program\git\bin\perl.exe" C:\bat\ack.pl %*

    Although I call with a caret, it gets lost at the call of the bat file! Escaping the caret with ^^ does not work. Quoting the regex with " " instead of ' ' works. My problem was a DOS/win problem, sorry for bothering you all with that.

    Read the article
