Search Results

Search found 4382 results on 176 pages for 'priority queue'.

Page 148/176

  • Zend hostname route doesn't match when it has child routes

    - by talisker
    I am implementing an Admin module, which contains the following routes:

        'router' => array(
          'routes' => array(
            'admin' => array(
              'type' => 'Zend\Mvc\Router\Http\Hostname',
              'options' => array(
                'route' => ':subdomain.mydomain.local',
                'constraints' => array(
                  'subdomain' => 'admin',
                ),
                'defaults' => array(
                  'module' => '__NAMESPACE__',
                  'controller' => 'Admin\Controller\Index',
                  'action' => 'index',
                ),
              ),
              'priority' => 9000,
              'may_terminate' => true,
              'child_routes' => array(
                'users' => array(
                  'type' => 'Zend\Mvc\Router\Http\Literal',
                  'options' => array(
                    'route' => '/users',
                    'defaults' => array(
                      'module' => '__NAMESPACE__',
                      'controller' => 'Admin\Controller\Users',
                      'action' => 'index',
                    ),
                  ),
                ),
              ),
            ),
          ),
        ),

    And this is the home route configuration:

        'home' => array(
          'type' => 'Zend\Mvc\Router\Http\Literal',
          'options' => array(
            'route' => '/',
            'defaults' => array(
              'controller' => 'Application\Controller\Index',
              'action' => 'index',
            ),
          ),
        ),

    When I try to access http://admin.mydomain.com, the request always matches the home route, but if I remove all the child routes from the admin route the behavior is correct and http://admin.mydomain.com matches the admin route. Any idea?

    Read the article

  • Make TBODY scrollable in Webkit browsers

    - by Andrew
    I'm aware of this question, but none of the answers work in Safari, Chrome, etc. The accepted strategy (as demonstrated here) is to set the tbody height and overflow properties like so:

        <table>
          <thead>
            <tr><th>This is the header and doesn't scroll</th></tr>
          </thead>
          <tbody style="height:100px; overflow:auto;">
            <tr><td>content that scrolls</td></tr>
            <tr><td>content that scrolls</td></tr>
            <tr><td>content that scrolls</td></tr>
            <tr><td>content that scrolls</td></tr>
            <tr><td>content that scrolls</td></tr>
            <tr><td>content that scrolls</td></tr>
            <tr><td>content that scrolls</td></tr>
          </tbody>
        </table>

    Unfortunately, this does not work in any WebKit browsers. There is a bug report about it that doesn't seem to be a high priority (reported June 05). So my question is: are there alternate strategies that do actually work? I've tried the two-table approach, but it's impossible to guarantee that the header will line up with the content. Do I just have to wait for WebKit to fix it?

    Read the article

  • What is wrong with locking non-static fields? What is the correct way to lock a particular instance?

    - by smartcaveman
    Why is it considered bad practice to lock non-static fields? And, if I am not locking non-static fields, then how do I lock an instance method without locking the method on all other instances of the same or derived class? I wrote an example to make my question more clear.

        public abstract class BaseClass
        {
            private readonly object NonStaticLockObject = new object();
            private static readonly object StaticLockObject = new object();

            protected void DoThreadSafeAction<T>(Action<T> action) where T : BaseClass
            {
                var derived = this as T;
                if (derived == null)
                {
                    throw new Exception();
                }
                lock (NonStaticLockObject)
                {
                    action(derived);
                }
            }
        }

        public class DerivedClass : BaseClass
        {
            private readonly Queue<object> _queue;

            public void Enqueue(object obj)
            {
                DoThreadSafeAction<DerivedClass>(x => x._queue.Enqueue(obj));
            }
        }

    If I make the lock on the StaticLockObject, then the DoThreadSafeAction method will be locked for all instances of all classes that derive from BaseClass, and that is not what I want. I want to make sure that no other threads can call a method on a particular instance of an object while it is locked.

    Read the article

  • Using JMX classes to notify on events over time

    - by Cincinnati Joe
    I've been looking at JMX for monitoring application and system metrics (partially because MBeans can be accessed by various tools such as JConsole). It would seem like the classes included with JMX would be useful for things like notification when metrics have exceeded thresholds. But I'm not sure they fit with the way I want to measure these over a specified time period. For example, let's say I want to notify an admin when the average CPU load is over 95% for more than 5 minutes. Is that something that can be done with a GaugeMonitor? From the docs, it doesn't seem quite suited for this, and I'm wondering if instead I should write my own MBean with the necessary logic. A more relevant example is when the login times for users exceed 10s over a period of 5 minutes. A slightly different case would be the last 20 logins taking more than 10s on average. Another case would be when a process crashes 4+ times in an hour. Or the request queue exceeds 15 for 5 minutes. Are the JMX Monitor classes useful for this kind of thing?
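
    Nothing in the stock Monitor MBeans tracks "has this been true for N minutes", so a small custom MBean is one way to go. Below is a rough Java sketch, not a definitive answer: the class name, the 15-second poll interval, and the use of the system load average as a stand-in for CPU load are all assumptions made for illustration. The same shape would cover the login-time or crash-count cases by swapping out the check() body.

        import java.lang.management.ManagementFactory;
        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;
        import javax.management.Notification;
        import javax.management.NotificationBroadcasterSupport;

        // Hypothetical MBean that emits a JMX notification once a gauge has stayed
        // above a threshold for a sustained period (here: load > 0.95 for 5 minutes).
        public class SustainedLoadMonitor extends NotificationBroadcasterSupport {
            private static final double THRESHOLD = 0.95;
            private static final long WINDOW_MS = 5 * 60 * 1000;

            private long breachStart = -1;   // when the value first went over the threshold
            private long sequence = 0;

            public void start() {
                ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
                timer.scheduleAtFixedRate(this::check, 0, 15, TimeUnit.SECONDS);
            }

            private void check() {
                // 1-minute system load average, used here only as an example metric
                double load = ManagementFactory.getOperatingSystemMXBean().getSystemLoadAverage();
                long now = System.currentTimeMillis();
                if (load > THRESHOLD) {
                    if (breachStart < 0) breachStart = now;
                    if (now - breachStart >= WINDOW_MS) {
                        sendNotification(new Notification("load.sustained.high", this,
                                sequence++, "load above " + THRESHOLD + " for 5 minutes"));
                        breachStart = now; // re-arm so we alert again after another full window
                    }
                } else {
                    breachStart = -1;      // value dropped back below the threshold
                }
            }
        }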

    Read the article

  • How to solve generic algebra using solver/library programmatically? Matlab, Mathematica, Wolfram etc?

    - by DevDevDev
    I'm trying to build an algebra trainer for students. I want to construct a representative problem, define constraints and relationships on the parameters, and then generate a bunch of LaTeX-formatted problems from the representation. As an example, a specific question might be: If y < 0 and (x+3)(y-5) = 0, what is x? Answer: x = -3. I would like to encode this as a LaTeX-formatted problem like:

        If $y<0$ and $(x+constant_1)(y+constant_2)=0$ what is the value of x? Answer = -constant_1

    And plug into my problem solver:

        constant_1 > 0, constant_1 < 60, constant_1 = INTEGER
        constant_2 < 0, constant_2 > -60, constant_2 = INTEGER

    Then it will randomly construct pairs of (constant_1, constant_2) that I can feed into my LaTeX generator. Obviously this is an extremely simple example with no real "solving", but hopefully it gets the point across. Things I'm looking for, ideally in priority order:
    * solves algebra problems
    * definition of relationships is relatively straightforward
    * rich support for LaTeX formatting (not just writing encoded strings)
    Thanks!

    Read the article

  • cross-platform development for mobile devices

    - by user924
    What language/framework is best worth learning for mobile application development? My specific situation is that I'm very familiar with Java and C++ (I especially love Qt), but have limited experience with other languages. Some options I'm considering:

    1) Learn Objective-C and all the iPhone-specific tools. I do have access to a Mac. The downside here is I'm restricted to the iPhone, so I'd have to rewrite almost everything if I wanted to branch off into another mobile device (or move later to a cross-platform framework). Even after knowing Objective-C, it seems like other frameworks might be more efficient/faster to code in?

    2) Use some existing cross-platform framework for development. I've looked at Rhomobile, but I only have limited experience with Ruby (and at first glance, it might be a little pricey compared to other options). Appcelerator also looks popular and nice, but it uses HTML/CSS/JavaScript. Airplay SDK looks good, but it's new and I haven't been able to see much written about it (is it worth going for?).

    3) Wait for something better to come along. How far away is Qt for the iPhone? That would be ideal, but it isn't available now.

    So what do you recommend? Productivity/efficiency is my top priority, although learning a useful language for the long term would also be okay. Thanks

    Read the article

  • Ember Data Sync - LocalStorage+REST+RealTime+Online/Offline

    - by Miguel Madero
    We have a combination of requirements in terms of data access. We pre-load some reference data, and we need that reference data to survive browser restarts instead of just living in memory, to avoid loading it all the time. I'm currently using the LocalStorageAdapter for that. Once we have it, we would like to sync changes (polling, or using Socket.IO in the background and updating the LocalStorage, could do the trick). There are other models that are more transactional, where we would need to go directly to the server to get/save them. It would be nice to use something like the RESTAdapter for that. Lastly, there are some operations that should work offline, and changes should be synced later.

    To make it more concrete:
    * We pre-load vendor and "favorite products" data into Local Storage. We work offline with those.
    * We need to sync server changes to vendor and product information.
    * If they search the full catalog, that requires them to be online.
    * When offline, we need to allow users to add something to their cart or even submit an order. We would like to queue this action and submit it when they have an Internet connection.

    So a few questions are derived from this:
    * Is there a way to use the RESTAdapter in combination with LocalStorage?
    * Is there some Socket.IO support? (Happy to do this part manually.)
    * Is there queueing support? Ideally at the Ember Data level.

    I know we will have to do a lot of this manually and pull together the different Lego pieces, but I wanted to ask for some perspective from experienced Ember devs.

    Read the article

  • BUG - ProteaAudio with Lua does not work

    - by Stackfan
    Any idea why I can't build or use ProteaAudio with Lua?

    1) lua-devel exists:

        [root@example ~]# yum install lua-devel
        Loaded plugins: presto, refresh-packagekit
        Setting up Install Process
        Package lua-devel-5.1.4-4.fc12.i686 already installed and latest version
        Nothing to do

    2) Building RtAudio fails:

        [sun@example proteaAudio_src_090204]$ make
        g++ -O2 -Wall -DHAVE_GETTIMEOFDAY -D__LINUX_ALSA__ -Irtaudio -Irtaudio/include -I../lua/src -I../archive/baseCode/include -c rtaudio/RtAudio.cpp -o rtaudio/RtAudio.o
        rtaudio/RtAudio.cpp:365: error: no ‘unsigned int RtApi::getStreamSampleRate()’ member function declared in class ‘RtApi’
        rtaudio/RtAudio.cpp: In member function ‘virtual bool RtApiAlsa::probeDeviceOpen(unsigned int, RtApi::StreamMode, unsigned int, unsigned int, unsigned int, RtAudioFormat, unsigned int*, RtAudio::StreamOptions*)’:
        rtaudio/RtAudio.cpp:5835: error: ‘RTAUDIO_SCHEDULE_REALTIME’ was not declared in this scope
        rtaudio/RtAudio.cpp:5837: error: ‘struct RtAudio::StreamOptions’ has no member named ‘priority’
        make: *** [rtaudio/RtAudio.o] Error 1
        [sun@example proteaAudio_src_090204]$

    And requiring the module from Lua fails as well:

        Lua 5.1.4  Copyright (C) 1994-2008 Lua.org, PUC-Rio
        > require("proAudioRt");
        stdin:1: module 'proAudioRt' not found:
            no field package.preload['proAudioRt']
            no file './proAudioRt.lua'
            no file '/usr/share/lua/5.1/proAudioRt.lua'
            no file '/usr/share/lua/5.1/proAudioRt/init.lua'
            no file '/usr/lib/lua/5.1/proAudioRt.lua'
            no file '/usr/lib/lua/5.1/proAudioRt/init.lua'
            no file './proAudioRt.so'
            no file '/usr/lib/lua/5.1/proAudioRt.so'
            no file '/usr/lib/lua/5.1/loadall.so'
        stack traceback:
            [C]: in function 'require'
            stdin:1: in main chunk
            [C]: ?

    Read the article

  • How to filter node list based on the contents of another node list

    - by ~otakuj462
    Hi, I'd like to use XSLT to filter a node list based on the contents of another node list. Specifically, I'd like to filter a node list such that elements with identical id attributes are eliminated from the resulting node list. Priority should be given to one of the two node lists. The way I originally imagined implementing this was to do something like this:

        <xsl:variable name="filteredList1" select="$list1[not($list2[@id_from_list1 = @id_from_list2])]"/>

    The problem is that the context node changes in the predicate for $list2, so I don't have access to attribute @id_from_list1. Due to these scoping constraints, it's not clear to me how I would be able to refer to an attribute from the outer node list using nested predicates in this fashion. To get around the issue of the context node, I've tried to create a solution involving a for-each loop, like the following:

        <xsl:variable name="filteredList1">
          <xsl:for-each select="$list1">
            <xsl:variable name="id_from_list1" select="@id_from_list1"/>
            <xsl:if test="not($list2[@id_from_list2 = $id_from_list1])">
              <xsl:copy-of select="."/>
            </xsl:if>
          </xsl:for-each>
        </xsl:variable>

    But this doesn't work correctly. It's also not clear to me how it fails... Using the above technique, filteredList1 has a length of 1, but appears to be empty. It's strange behaviour, and anyhow, I feel there must be a more elegant approach. I'd appreciate any guidance anyone can offer. Thanks.

    Read the article

  • Need Help finding an appropriate task assignment algorithm for a college project involving coordinat

    - by Trif Mircea
    I am a long-time lurker here and have found over time many answers regarding jQuery and web development topics, so I decided to ask a question of my own. This time I have to create a C++ project for college which should help manage the workflow of a company providing all kinds of services through teams in the field. The ideas I have so far are:

    * a client-server application; the server is a dispatcher where all the orders from clients arrive, and the clients are mobile devices (PDAs), each field team having one
    * an order from a client is a task, and each task is made up of a series of subtasks
    * you have a database with estimates of how long a task should take to complete
    * you also know which tasks or subtasks each team in the field can perform, based on what kind of specialists make up the team (not going to complicate the problem by adding needed materials; it is assumed that if a member of a team can perform a subtask, he has what he needs)

    Now, knowing these factors, what would a good task assignment algorithm be? The criteria are: how many tasks a team can do, and how many tasks they have in the queue. It could also be location (how far away they are from the place), but I don't think I can implement that. It needs to be efficient and also to adapt quickly if the human dispatcher manually assigns a task. Any help or leads would be really appreciated. Also, I'm not 100% sure about the idea, so if you would go about creating such an application another way, please share, even if it is just a quick outline. I have to write a theoretical part too, so even if the ideas are far more complex than what I outlined, that would be OK; I'd write those up and implement what I can. Edit: C++ is the only language I know, unfortunately.
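
    One simple starting point that uses exactly the criteria named above is a greedy rule: among the teams that can perform the task, give it to the team whose current queue finishes earliest. The sketch below only illustrates that rule (it is written in Java for brevity even though the project itself is C++, and the Task/Team model is invented); a manual assignment by the dispatcher simply becomes another queue entry the rule works around.

        import java.util.*;

        class Task {
            Set<String> requiredSkills;   // which specialists the task needs
            long estimatedMinutes;        // from the estimation database
        }

        class Team {
            Set<String> skills;
            Deque<Task> queue = new ArrayDeque<>();

            boolean canDo(Task t) { return skills.containsAll(t.requiredSkills); }
            long backlogMinutes() { return queue.stream().mapToLong(t -> t.estimatedMinutes).sum(); }
        }

        class Dispatcher {
            // returns the team the task was assigned to, or empty if nobody can do it
            Optional<Team> assign(Task task, List<Team> teams) {
                return teams.stream()
                        .filter(t -> t.canDo(task))
                        .min(Comparator.comparingLong(Team::backlogMinutes))
                        .map(team -> { team.queue.add(task); return team; });
            }
        }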

    Read the article

  • ActiveMQ integration with Spring 2.5

    - by Tony
    I am using ActiveMQ 5.32 with Spring 2.5.5. I use a pretty generic configuration; as soon as I include the jmsTransactionManager in the DefaultMessageListenerContainer, Spring throws an error on startup: "Error creating bean with name 'org.springframework.jms.listener.DefaultMessageListenerContainer#0'". Without the transactionManager property this works fine, but when I add 10 messages to the message queue, a transaction exception occurs. Part of my configuration:

        <bean class="org.springframework.jms.listener.DefaultMessageListenerContainer">
            <property name="connectionFactory" ref="connectionFactory" />
            <property name="destination" ref="emailDestination" />
            <property name="messageListener" ref="emailServiceMDP" />
            <property name="transactionManager" ref="jmsTransactionManager" />
        </bean>

        <bean id="jmsTransactionManager" class="org.springframework.jms.connection.JmsTransactionManager">
            <property name="connectionFactory" ref="connectionFactory" />
        </bean>

    Do these versions of Spring and ActiveMQ have known issues in integration? Or do I need additional libs to get the jmsTransactionManager to work?

    Read the article

  • MapView EXC_BAD_ACCESS (SIGSEGV) and KERN_INVALID_ADDRESS

    - by user768113
    I'm having some 'issues' with my application... well, it crashes in a UIViewController that is presented modally, where the user enters information through UITextFields and his location is tracked by a MapView. Let's call this view controller "MapViewController". When the user submits the form, I call a different view controller (modally again) that processes this info, and a third one answers accordingly; then I go back to a MenuVC using unwind segues, which then calls MapViewController again, and so on. This sequence is repeated many times, but it always crashes in MapViewController. Looking at the crash log, I think that the MapView could be the cause, or some element in the UI (because of the UIKit framework). I tried to use NSZombie in order to track a memory issue, but it doesn't give me a clue about what's happening. Here is the crash log:

        Hardware Model:  iPad3,4
        Process:         MyApp [2253]
        OS Version:      iOS 6.1.3 (10B329)
        Report Version:  104

        Exception Type:  EXC_BAD_ACCESS (SIGSEGV)
        Exception Codes: KERN_INVALID_ADDRESS at 0x00000044
        Crashed Thread:  0

        Thread 0 name:  Dispatch queue: com.apple.main-thread
        Thread 0 Crashed:
        0   IMGSGX554GLDriver          0x328b9be0 0x328ac000 + 56288
        1   IMGSGX554GLDriver          0x328b9b8e 0x328ac000 + 56206 deallocated instance
        2   IMGSGX554GLDriver          0x328bc2f2 0x328ac000 + 66290
        3   IMGSGX554GLDriver          0x328baf44 0x328ac000 + 61252
        4   libGPUSupportMercury.dylib 0x370f86be 0x370f6000 + 9918
        5   GLEngine                   0x34ce8bd2 0x34c4f000 + 629714
        6   GLEngine                   0x34cea30e 0x34c4f000 + 635662
        7   GLEngine                   0x34c8498e 0x34c4f000 + 219534
        8   GLEngine                   0x34c81394 0x34c4f000 + 205716
        9   VectorKit                  0x3957f4de 0x394c7000 + 754910
        10  VectorKit                  0x3955552e 0x394c7000 + 582958
        11  VectorKit                  0x394d056e 0x394c7000 + 38254
        12  VectorKit                  0x394d0416 0x394c7000 + 37910
        13  VectorKit                  0x394cb7ca 0x394c7000 + 18378
        14  VectorKit                  0x394c9804 0x394c7000 + 10244
        15  VectorKit                  0x394c86a2 0x394c7000 + 5794
        16  QuartzCore                 0x354a07a4 0x35466000 + 239524
        17  QuartzCore                 0x354a06fc 0x35466000 + 239356
        18  IOMobileFramebuffer        0x376f8fd4 0x376f4000 + 20436
        19  IOKit                      0x344935aa 0x34490000 + 13738
        20  CoreFoundation             0x33875888 0x337e9000 + 575624
        21  CoreFoundation             0x338803e4 0x337e9000 + 619492
        22  CoreFoundation             0x33880386 0x337e9000 + 619398
        23  CoreFoundation             0x3387f20a 0x337e9000 + 614922
        24  CoreFoundation             0x337f2238 0x337e9000 + 37432
        25  CoreFoundation             0x337f20c4 0x337e9000 + 37060
        26  GraphicsServices           0x373ad336 0x373a8000 + 21302
        27  UIKit                      0x3570e2b4 0x356b7000 + 357044
        28  MyApp                      0x000ea12e 0xe9000 + 4398
        29  MyApp                      0x000ea0e4 0xe9000 + 4324

    I think that's all. Additionally, I would like to ask you: if you are using unwind segues, then you are releasing view controllers from the memory heap, right? Meanwhile, performing segues lets you instantiate those controllers. Technically, MenuVC should be the only VC alive in the heap during the app life cycle, if you understand me.

    Read the article

  • Table index design

    - by Swoosh
    I would like to add indexes to my table. I am looking for general ideas on how to add more indexes to a table, other than the clustered PK, and I would like to know what to look for when I am doing this. So, my example: this table (let's call it the TASK table) is going to be the biggest table of the whole application, expecting millions of records. IMPORTANT: massive bulk-inserts add data to this table. The table has 27 columns (so far, and counting :D):
    * int x 9 columns = IDs
    * varchar x 10 columns
    * bit x 2 columns
    * datetime x 5 columns

    INT COLUMNS: all of these are int IDs, but from tables that are usually much smaller than the Task table (10-50 records max), for example a Status table (with values like "open", "closed") or a Priority table (with values like "important", "not so important", "normal"). There is also a column like "parent-ID" (self-ID). Joins: all the "small" tables have PKs, the usual way... clustered.

    STRING COLUMNS: there is a Company column (string!) that is something like "5 characters long all the time", and every user will be restricted using this one. If there are 15 different "Companies" in Task, the logged-in user would only see one. So there's always a filter on this one. Might it be a good idea to add an index to this column?

    DATE COLUMNS: I think they don't index these... right? Or can/should they be indexed?

    Read the article

  • How to process an event chain

    - by theblackcascade
    I need to process this chain using one LoadXML method and one urlLoader object:

        ResourceLoader.Instance.LoadXML("Config.xml");
        ResourceLoader.Instance.LoadXML("GraphicsSet.xml");

    The loader starts loading after the first frameFunc iteration (why?); I want it to start immediately (optional). And it ends up loading only "GraphicsSet.xml". The loader class's LoadXML method:

        public function LoadXML(URL:String):XML
        {
            urlLoader.addEventListener(Event.COMPLETE, XmlLoadCompleteListener);
            urlLoader.load(new URLRequest(URL));
            return xml;
        }

        private function XmlLoadCompleteListener(e:Event):void
        {
            var xml:XML = new XML(e.target.data);
            trace(xml);
            trace(xml.name());
            if(xml.name() == "Config")
                XMLParser.Instance.GameSetup(xml);
            else if(xml.name() == "GraphicsSet")
                XMLParser.Instance.GraphicsPoolSetup(xml);
        }

    Here is main:

        public function Main()
        {
            Mouse.hide();
            this.addChild(Game.Instance);
            this.addEventListener(Event.ENTER_FRAME, Game.Instance.Loop);
        }

    And on adding Game.Instance to the rendering queue, the game constructor starts the Initialize method:

        public function Game():void
        {
            trace("Constructor");
            if(_instance)
                throw new Error("Use Instance Field");
            Initialize();
        }

    Its code is:

        private function Initialize():void
        {
            trace("initialization");
            ResourceLoader.Instance.LoadXML("Config.xml");
            ResourceLoader.Instance.LoadXML("GraphicsSet.xml");
        }

    Thanks.

    Read the article

  • Java: fastest way to do random reads on huge disk file(s)

    - by cocotwo
    I've got a moderately big set of data, about 800 MB or so, that is basically some big precomputed table that I need in order to speed up some computation by several orders of magnitude (creating that file took several multicore computers days to produce, using an optimized and multi-threaded algorithm... I do really need that file). Now that it has been computed once, that 800 MB of data is read-only. I cannot hold it in memory. As of now it is one big huge 800 MB file, but splitting it into smaller files isn't a problem if that can help. I need to read about 32 bits of data here and there in that file a lot of times. I don't know beforehand where I'll need to read these data: the reads are uniformly distributed. What would be the fastest way in Java to do my random reads in such a file or files? Ideally I should be doing these reads from several unrelated threads (but I could queue the reads in a single thread if needed). Is Java NIO the way to go? I'm not familiar with 'memory mapped file': I think I don't want to map the 800 MB in memory. All I want is the fastest random reads I can get to access these 800 MB of disk-based data. btw in case people wonder this is not at all the same as the question I asked not long ago: http://stackoverflow.com/questions/2346722/java-fast-disk-based-hash-set
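
    For what it's worth, "mapping" the file does not copy 800 MB onto the Java heap: the OS pages regions in on demand and evicts them under pressure, which suits uniformly distributed random reads well. A minimal sketch of that approach follows (the file name and the flat array-of-ints layout are assumptions made for illustration; 800 MB fits in a single mapping since the per-mapping limit is 2 GB). Absolute getInt(index) calls do not touch the buffer position, and each reader thread can also be given its own duplicate() view to stay clearly within the documented thread-safety rules.

        import java.io.IOException;
        import java.nio.ByteBuffer;
        import java.nio.MappedByteBuffer;
        import java.nio.channels.FileChannel;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.nio.file.StandardOpenOption;

        public class PrecomputedTable {
            private final MappedByteBuffer map;

            public PrecomputedTable(Path file) throws IOException {
                try (FileChannel ch = FileChannel.open(file, StandardOpenOption.READ)) {
                    // the mapping stays valid after the channel is closed
                    map = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
                }
            }

            // byteOffset: absolute position of the 32-bit value inside the file
            public int readIntAt(long byteOffset) {
                return map.getInt((int) byteOffset);
            }

            // optional: a private view per reader thread
            public ByteBuffer viewForThread() {
                return map.duplicate();
            }

            public static void main(String[] args) throws IOException {
                PrecomputedTable table = new PrecomputedTable(Paths.get("table.bin")); // hypothetical file
                System.out.println(table.readIntAt(4096));
            }
        }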

    Read the article

  • Standard term for a thread I/O reorder buffer?

    - by Crashworks
    I have a case where many threads all concurrently generate data that is ultimately written to one long, serial file. I need to somehow serialize these writes so that the file gets written in the right order. I.e., I have an input queue of 2048 jobs j_0..j_n, each of which produces a chunk of data o_i. The jobs run in parallel on, say, eight threads, but the output blocks have to appear in the file in the same order as the corresponding input blocks: the output file has to be in the order o_0 o_1 o_2 ... The solution to this is pretty self-evident: I need some kind of buffer that accumulates and writes the output blocks in the correct order, similar to a CPU reorder buffer in Tomasulo's algorithm, or to the way that TCP reassembles out-of-order packets before passing them to the application layer. Before I go code it, I'd like to do a quick literature search to see if there are any papers that have solved this problem in a particularly clever or efficient way, since I have severe realtime and memory constraints. I can't seem to find any papers describing this though; a Scholar search on every permutation of [threads, concurrent, reorder buffer, reassembly, io, serialize] hasn't yielded anything useful. I feel like I must just not be searching the right terms. Is there a common academic name or keyword for this kind of pattern that I can search on?
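
    Whatever the canonical name turns out to be, the core of the buffer itself is small. Below is a minimal Java sketch of the idea (class and method names are invented): workers hand in (index, chunk) pairs in any order, and the writer flushes the longest contiguous run starting at the next expected index. A version honoring the memory constraint would additionally bound the number of pending chunks and make producers wait when the window is full, which is essentially a TCP-style sliding window.

        import java.io.IOException;
        import java.io.OutputStream;
        import java.util.HashMap;
        import java.util.Map;

        public class ReorderingWriter {
            private final OutputStream out;
            private final Map<Integer, byte[]> pending = new HashMap<>();
            private int nextToWrite = 0;   // index of the next block the file expects

            public ReorderingWriter(OutputStream out) { this.out = out; }

            // called by worker threads as soon as their block is finished
            public synchronized void submit(int index, byte[] chunk) throws IOException {
                pending.put(index, chunk);
                byte[] next;
                while ((next = pending.remove(nextToWrite)) != null) { // drain any contiguous run
                    out.write(next);
                    nextToWrite++;
                }
            }
        }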

    Read the article

  • Scheduling a Delayed Job on Heroku with a Worker Dyno

    - by user1524775
    I'm currently using Heroku's scheduler to run a script. However, the time that the script takes to run is going to increase from a few milliseconds to a few minutes. I'm looking at using the delayed_job gem to push this process off to a Worker Dyno. I want to continue to run this script once-a-day, just offload it to the worker. My current rake task is:

        desc "This task updates some stuff for you."
        task :update_some_stuff => :environment do
          puts "Updating some stuff ..."
          SomeClass.new.process
          puts "... done."
        end

    Once the gem is installed, the migration run, and the worker dyno started, will the script just need to change to:

        desc "This task updates some stuff for you."
        task :update_some_stuff => :environment do
          puts "Updating some stuff ..."
          SomeClass.new.delay.process
          puts "... done."
        end

    With this task still being a rake task scheduled by Heroku's Scheduler, is the only thing that needs to happen here the introduction of the delay method to put this in the Worker's queue? Thanks in advance for any help.

    Read the article

  • Communication between lexer and parser

    - by FredOverflow
    Every time I write a simple lexer and parser, I stumble upon the same question: how should the lexer and the parser communicate? I see four different approaches:

    1. The lexer eagerly converts the entire input string into a vector of tokens. Once this is done, the vector is fed to the parser which converts it into a tree. This is by far the simplest solution to implement, but since all tokens are stored in memory, it wastes a lot of space.

    2. Each time the lexer finds a token, it invokes a function on the parser, passing the current token. In my experience, this only works if the parser can naturally be implemented as a state machine like LALR parsers. By contrast, I don't think it would work at all for recursive descent parsers.

    3. Each time the parser needs a token, it asks the lexer for the next one. This is very easy to implement in C# due to the yield keyword, but quite hard in C++ which doesn't have it.

    4. The lexer and parser communicate through an asynchronous queue. This is commonly known under the title "producer/consumer", and it should simplify the communication between the lexer and the parser a lot. Does it also outperform the other solutions on multicores? Or is lexing too trivial?

    Is my analysis sound? Are there other approaches I haven't thought of? What is used in real-world compilers? It would be really cool if compiler writers like Eric Lippert could shed some light on this issue.
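
    As a concrete picture of the fourth approach, here is a minimal Java sketch (Java only because it keeps the example short; the Token type and the EOF sentinel are invented): the lexer and parser run on separate threads joined by a bounded blocking queue, so the lexer blocks when it gets too far ahead. Whether this outperforms approach 3 depends on how expensive lexing really is; for trivial lexers the per-token queue overhead tends to dominate.

        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;

        public class LexerParserPipeline {
            record Token(String kind, String text) {}
            static final Token EOF = new Token("EOF", "");

            public static void main(String[] args) throws InterruptedException {
                BlockingQueue<Token> tokens = new ArrayBlockingQueue<>(1024); // bounded: lexer can't run away

                Thread lexer = new Thread(() -> {
                    try {
                        // ... scan the input; for every recognized token:
                        tokens.put(new Token("IDENT", "x"));
                        tokens.put(EOF);                      // end-of-stream marker
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
                lexer.start();

                // the "parser": pulls the next token whenever it needs one
                Token t;
                while ((t = tokens.take()) != EOF) {
                    // build the tree from t here
                }
                lexer.join();
            }
        }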

    Read the article

  • How do I become better at math after being a programmer for several years?

    - by loxs
    I've had quite a weird career till now. First I graduated from a medical school. Then I went into marketing (pharmaceuticals). And then, umm, after some time, I decided to go for my (till then) hobby and became a "professional" programmer. I've been quite successful at this ever since. I have quite some languages "under my belt", I earn decent money, and I have been involved in the open-source community quite heavily. The thing is that I suck at math :). Well, not totally, of course, as I get my work done. But I don't know how much I suck. And I don't know how to find out. Math was never really a priority during my middle/high school years; I only took as little as I could get away with, because I was always getting ready to go for medicine. Of course I know the basics of algebra, things like "normal" and quadratic equations, also the basics of geometry. But there are things that I have missed. And lately I am fascinated by things like probability theory, infinity, chaos/order, etc. But every time I try to learn something about these topics, I hit a wall of terminology, special symbols, and a special kind of thinking that is somewhat like mine (a programmer's), but also a lot different (and appears weird to me). So, what kinds of books would you recommend me? It's very hard to find something suitable. Everything I find is either too easy (and boring) or totally impenetrable.

    Read the article

  • Query returning related assets

    - by GMo
    I have 2 tables. One is an assets table which holds digital assets (e.g. articles, images etc); the 2nd table is an asset_links table which maps 1-1 relationships between assets contained within the assets table. Here are the table definitions:

        Asset
        +---------------+--------------+------+-----+---------+----------------+
        | Field         | Type         | Null | Key | Default | Extra          |
        +---------------+--------------+------+-----+---------+----------------+
        | id            | int(11)      | NO   | PRI | NULL    | auto_increment |
        | source        | varchar(255) | YES  |     | NULL    |                |
        | title         | varchar(255) | YES  |     | NULL    |                |
        | date_created  | datetime     | YES  |     | NULL    |                |
        | date_embargo  | datetime     | YES  |     | NULL    |                |
        | date_expires  | datetime     | YES  |     | NULL    |                |
        | date_updated  | datetime     | YES  |     | NULL    |                |
        | keywords      | varchar(255) | YES  |     | NULL    |                |
        | status        | int(11)      | YES  |     | NULL    |                |
        | priority      | int(11)      | YES  |     | NULL    |                |
        | fk_site       | int(11)      | YES  | MUL | NULL    |                |
        | resource_type | varchar(255) | YES  |     | NULL    |                |
        | resource_id   | int(11)      | YES  |     | NULL    |                |
        | fk_user       | int(11)      | YES  | MUL | NULL    |                |
        +---------------+--------------+------+-----+---------+----------------+

        Asset_links
        +-----------+---------+------+-----+---------+----------------+
        | Field     | Type    | Null | Key | Default | Extra          |
        +-----------+---------+------+-----+---------+----------------+
        | id        | int(11) | NO   | PRI | NULL    | auto_increment |
        | asset_id1 | int(11) | YES  |     | NULL    |                |
        | asset_id2 | int(11) | YES  |     | NULL    |                |
        +-----------+---------+------+-----+---------+----------------+

    In the asset_links table there are the following rows: 1 - 3, 1 - 4, 2 - 10, 2 - 56. I am looking to write one query which returns all assets satisfying some asset search criteria and, within the same query, returns all of the linked asset data for each asset's linked assets. E.g. a query returning assets 1 and 2 would return:

        Asset 1 attributes - Asset 3 attributes - Asset 4 attributes
        Asset 2 attributes - Asset 10 attributes - Asset 56 attributes

    What is the best way to write the query?

    Read the article

  • Error while sending mail (attachment file)

    - by Surya sasidhar
    Hi, in my application I am sending mail with attachments. I write the code like this:

        using System.Net.Mail;

        MailMessage mail = new MailMessage();
        mail.Body = "<html><body><b> Name Of The Job Seeker: " + txtName.Text + "<br><br>" +
                    "The Mail ID:" + txtEmail.Text + "<br><br>" +
                    " The Mobile Number: " + txtmobile.Text + "<br><br>" +
                    "Position For Applied: " + txtPostionAppl.Text + "<br><br>" +
                    "Description " + txtdescript.Text + "<br><br></b></body></html>";
        mail.From = new MailAddress(txtEmail.Text);
        mail.To.Add(new MailAddress(mailid));
        mail.Priority = MailPriority.High;
        FileUpload1.PostedFile.SaveAs("~/Resume/" + FileUpload1.FileName);
        mail.Attachments.Add(filenme);
        SmtpMail sm = new SmtpMail();
        sm.Send(mail);

    It gives an error at the attachment line, mail.Attachments.Add(filenme), like this:

        'System.Collections.ObjectModel.Collection.Add(System.Net.Mail.Attachment)' has some invalid arguments.

    Read the article

  • SQL Server race condition issue with range lock

    - by Freek
    I'm implementing a queue in SQL Server (please no discussions about this) and am running into a race condition issue. The T-SQL of interest is the following:

        set transaction isolation level serializable
        begin tran

        declare @RecordId int
        declare @CurrentTS datetime2
        set @CurrentTS = CURRENT_TIMESTAMP

        select top 1 @RecordId = Id
        from QueuedImportJobs with (updlock)
        where Status = @Status
          and (LeaseTimeout is null or @CurrentTS > LeaseTimeout)
        order by Id asc

        if @@ROWCOUNT > 0
        begin
            update QueuedImportJobs
            set LeaseTimeout = DATEADD(mi, 5, @CurrentTS), LeaseTicket = newid()
            where Id = @RecordId

            select * from QueuedImportJobs where Id = @RecordId
        end

        commit tran

    RecordId is the PK and there is also an index on Status, LeaseTimeout. What I'm basically doing is selecting a record whose lease happens to be expired, while simultaneously pushing the lease time out by 5 minutes and setting a new lease ticket. The problem is that I'm getting deadlocks when I run this code in parallel using a couple of threads. I've debugged it up to the point where I found out that the update statement sometimes gets executed twice for the same record. Now, I was under the impression that the with (updlock) should prevent this (it also happens with xlock, btw; not with tablockx). So it actually looks like there is a RangeS-U and a RangeX-X lock on the same range of records, which ought to be impossible. So what am I missing? I'm thinking it might have something to do with the top 1 clause, or that SQL Server does not know that where Id = @RecordId is actually in the locked range? Deadlock graph: Table schema (simplified):

    Read the article

  • XMLHttpRequest POST Data Size

    - by usurper
    Hi, is there a size limit to an XHR POST request? I am using the POST method for saving text data into MySQL using a PHP script and the data is cut off. Firebug sends me the following message:

        ... Firebug request size limit has been reached by Firebug. ...

    This is my code for sending the data:

        function makeXHR(recordData)
        {
            xmlhttp = createXHR();
            var body = "q=" + encodeURIComponent(recordData);
            xmlhttp.open("POST", "insertRowData.php", true);
            xmlhttp.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
            xmlhttp.setRequestHeader("Content-length", body.length);
            xmlhttp.setRequestHeader("Connection", "close");
            xmlhttp.onreadystatechange = function()
            {
                if (xmlhttp.readyState == 4 && xmlhttp.status == 200)
                {
                    //alert(xmlhttp.responseText);
                    alert("Records were saved successfully!");
                }
            }
            xmlhttp.send(body);
        }

    The only solution I can think of is splitting the data and making a queue of XHR requests, but I don't like it. Is there another way?

    Read the article

  • Wicket application + Apache + mod_jk - AJP queues are filling up!

    - by nojyarg
    Dear community, we have a Wicket-based Java application deployed in a production server cluster using Apache (2.2.3) with mod_jk (1.2.30) as the load-balancing component with sticky sessions, and JBoss 5 as the application container for the Java application. We are intermittently seeing an issue in our production environment where our AJP queues between Apache and JBoss, as shown in the JMX console, fill up with requests to the point where the application server is no longer taking on any new requests. When looking at all involved system components (overall traffic, DB load, DB process list, load of all clustered application server nodes), nothing points towards a capacity issue which would explain why the calls are being stalled in the AJP queue. Instead, all systems appear sufficiently idle. So far, our only remedy to this issue is to restart the app servers and the load balancer, which only occasionally clears the AJP queues. We are trying to figure out why the queues are filling up to the point that no calls get returned to the end user, although the system is not under high load. Has anyone else experienced similar problems? Are there any other system metrics we should monitor that could explain the queuing behavior? Is this potentially a mod_jk issue? If so, is it advisable to swap mod_jk for mod_cluster to resolve the issue? Any advice is highly appreciated. If I can provide additional information for the sake of troubleshooting, I would be more than willing to do so. /Ben

    Read the article

  • How can I make hundreds of simultaneously running processes communicate with a database through one or few permanent sessions?

    - by Olfan
    Long speech short: how can I make hundreds of simultaneously running processes communicate with a database through one or a few permanent sessions? The whole story: I once built a number-crunching engine that handles vast amounts of large data files by forking off one child after another, giving each a small number of files to work on. File locking, progress monitoring and result propagation happen in an Oracle database which all (sub-)processes access at various times using an application-specific module which encapsulates DBI. This worked well at first, but now, with higher volumes of input data, the number of database sessions (one per child, and they can be very short-lived) constantly being opened and closed is becoming an issue. I now want to centralise database access so that there are only one or a few fixed database sessions which handle all database access for all the (sub-)processes. The presence of the database abstraction module should make the changes easy, because the function calls in the worker instances can stay the same. My problem is that I cannot think of a suitable way to enhance said module in order to establish communication between all the processes and the database connector(s). I thought of message queueing, but couldn't come up with a way of connecting a large herd of requestors with one or a few database connectors such that bidirectional communication is possible (for collecting the query result). An asynchronous approach could help here, in that all requests are written to the same queue and the database connector servicing the request will "call back" to submit the result. But my mind fails me in generating an image clear enough that I can turn it into code. Threading instead of forking might have given me an easier start, but this would now require massive changes to the code base that I'm not prepared to make to a live system. The more I think of it, the more the base idea looks like a pre-forked web server to me, only that it doesn't serve web pages but database queries. Any ideas on what to dig into, and where? Sample (pseudo) code to inspire me, links to possibly related articles, ready solutions on CPAN maybe?
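
    One way to picture the "pre-forked web server for queries" idea is a broker that owns the permanent session, takes requests off a queue, and replies through a per-request future/promise so each requestor gets its own result back. The sketch below is written in Java rather than Perl, and every name in it is invented; it only illustrates the request/reply shape that a message-queue-based design needs, with the workers and the broker living in the same process (in the forked setup the queue would have to be an IPC mechanism such as a local socket instead).

        import java.sql.Connection;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.CompletableFuture;
        import java.util.concurrent.LinkedBlockingQueue;
        import java.util.function.Function;

        public class DbBroker {
            private static final class Job<T> {
                final Function<Connection, T> work;
                final CompletableFuture<T> reply = new CompletableFuture<>();
                Job(Function<Connection, T> work) { this.work = work; }
            }

            private final BlockingQueue<Job<?>> jobs = new LinkedBlockingQueue<>();
            private final Connection connection;   // the one permanent session

            public DbBroker(Connection connection) { this.connection = connection; }

            // called by the many workers; they block on the returned future for their result
            public <T> CompletableFuture<T> submit(Function<Connection, T> work) throws InterruptedException {
                Job<T> job = new Job<>(work);
                jobs.put(job);
                return job.reply;
            }

            // runs on the single broker thread that owns the connection
            public void runBrokerLoop() throws InterruptedException {
                while (true) {
                    serve(jobs.take());
                }
            }

            private <T> void serve(Job<T> job) {
                try {
                    job.reply.complete(job.work.apply(connection));
                } catch (RuntimeException e) {
                    job.reply.completeExceptionally(e);
                }
            }
        }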

    Read the article
