Search Results

Search found 6805 results on 273 pages for 'fast formula'.

Page 226/273 | < Previous Page | 222 223 224 225 226 227 228 229 230 231 232 233  | Next Page >

  • Hidden controls, iframes or divs

    - by user287745
    What happens to controls, iframes, or divs that are hidden? Do they still get sent to the user's browser? And disabled controls: do they get sent? Here is what I want: an aspx page will have many iframes to display different pages, and many div tags to display CSS-formatted information. To understand what I mean by "many": I have to merge a complete website of 30 aspx pages into one single page! I have simply combined everything, resulting in one extremely huge page. My concern is that on localhost it loads fast, but on an online server accessed by numerous people for educational purposes, the site (ONE PAGE) WILL SLOW DOWN terribly. To overcome this I thought of using the hidden and disabled options. What is a better way of achieving the above? Yes, it sounds silly, but this is the requirement.

    Edit: Yes, I know the id and runat="server" attributes must be set, but what I am asking is: will the hidden div tag still be sent to the user's browser? One answer is no. So can I enable the elements using JavaScript, like document.getElementById(id).style.visibility = "visible"? What if I disable them and later enable them from JavaScript? Will they only be loaded at the time of enabling?
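
    One point worth sketching here: CSS-hidden markup (display:none or visibility:hidden) is still sent to the browser in full, so it does not reduce the initial download at all; only server-side Visible="false" keeps markup out of the response. What does cut the initial load is lazy-loading: leave each iframe's src empty and fill it in only when its section is shown. A minimal sketch, where the data-src attribute and showSection helper are hypothetical names:

        // Lazy-load sketch: nothing is fetched until a section is first shown,
        // so the initial page stays small even with 30 sections.
        function showSection(frameId) {
            var frame = document.getElementById(frameId);
            if (!frame.src && frame.getAttribute("data-src")) {
                frame.src = frame.getAttribute("data-src"); // loaded only now
            }
            frame.style.display = "block";
        }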

    Read the article

  • CSS Parser - Insert mtimes

    - by brad
    What command-line tool can I use to automatically insert mtimes into the urls in my CSS files, for the purpose of breaking the cache?

        /* before */
        .example { background: url(example.jpg); }

        /* after */
        .example { background: url(example.jpg?1271298451); }

    I would also like this tool to set the latest asset mtime as the CSS file's own mtime (if the CSS file itself is still cached, the new urls will never reach the client). Searching the web, I have found very few tools that can do this. I am even considering rolling my own, but have found very little in the way of actively maintained CSS parsers. A candidate should be:

    - fast (I don't want to wait 30 seconds on deployment)
    - command-line accessible (something like "cat foo.css bar.css | cssmtime out.css")

    What I've found so far:

    - yui compressor - initially I thought I would extend the YUI Compressor to do this, but found that it is implemented as a bunch of regexes, not a parser
    - csstidy - the last release was in 2007 and development has been suspended, but it does have an option for inserting mtimes (it is also written in PHP, which I have no experience with)
    - cssutils - a Python SAC implementation - seems to be actively maintained, but also seems like overkill for my needs; written in Python, which I do have experience with
    - csspool - a Ruby SAC implementation - I don't know much Ruby, but would like to learn
    - other SAC implementations - there are several Java implementations and a C implementation, neither of which I know much about

    What's your experience? Have you used any of these libraries? Was the experience positive? Would you recommend I go with one of them for my purposes?
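
    For scale, here is a minimal sketch of the roll-your-own route in Python. It is regex-based, not a real parser (so it will also rewrite url()s inside comments), and it assumes asset paths are relative to the CSS file:

        #!/usr/bin/env python
        # Cache-busting sketch: append each asset's mtime to its url().
        import os, re, sys

        def add_mtimes(css_path):
            base = os.path.dirname(css_path)
            css = open(css_path).read()
            def stamp(match):
                asset = match.group(1)
                full = os.path.join(base, asset)
                if os.path.exists(full):
                    return "url(%s?%d)" % (asset, int(os.path.getmtime(full)))
                return match.group(0)  # leave unknown assets untouched
            return re.sub(r"url\(['\"]?([^'\")?]+)['\"]?\)", stamp, css)

        if __name__ == "__main__":
            sys.stdout.write(add_mtimes(sys.argv[1]))

    Touching the output CSS file afterwards (os.utime with the newest asset mtime) would cover the "latest mtime" requirement above.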

    Read the article

  • My OpenCL kernel is slower on faster hardware... but why?

    - by matdumsa
    Hi folks, as I was finishing the code for my multicore programming class project, I came upon something really weird that I wanted to discuss with you. We were asked to create any program that would show significant improvement when programmed for a multi-core platform. I decided to try coding something on the GPU to try out OpenCL. I chose the matrix convolution problem, since I'm quite familiar with it (I've parallelized it before with open_mpi, with great speedup for large images). So here it is: I select a large GIF file (2.5 MB) [2816x2112] and run the sequential version (original code), and I get an average of 15.3 seconds. I then run the new OpenCL version I just wrote on my MBP's integrated GeForce 9400M and I get timings of 1.26s on average. So far so good: it's a speedup of 12x!! But now I go into my energy saver panel to turn on the "Graphic Performance Mode". That mode turns off the GeForce 9400M and turns on the GeForce 9600M GT my system has. Apple says this card is twice as fast as the integrated one. Guess what: my timings using the kick-ass graphics card average 3.2 seconds. My 9600M GT seems to be more than two times slower than the 9400M. For those of you who are OpenCL-inclined: I copy all data to remote buffers before starting, so the actual computation doesn't require a roundtrip to main RAM. Also, I let OpenCL determine the optimal local worksize, as I've read the implementation is pretty good at figuring that parameter out. Does anyone have a clue?

    Edit: full source code with makefiles is at http://www.mathieusavard.info/convolution.zip

        cd gimage
        make
        cd ../clconvolute
        make

    Put a large input.gif in clconvolute and run it to see the results.

    Read the article

  • Big-O of PHP functions?

    - by Kendall Hopkins
    After using PHP for a while now, I've noticed that not all PHP built-in functions are as fast as expected. Consider these two possible implementations of a function that finds whether a number is prime, using a cached array of primes.

        // very slow for large $prime_array
        $prime_array = array( 2, 3, 5, 7, 11, 13, .... 104729, ... );
        $result_array = array();
        foreach ( $array_of_numbers as $number ) {
            $result_array[$number] = in_array( $number, $prime_array );
        }

        // still decent performance for large $prime_array
        $prime_array = array( 2 => NULL, 3 => NULL, 5 => NULL, 7 => NULL,
                              11 => NULL, 13 => NULL, .... 104729 => NULL, ... );
        foreach ( $array_of_numbers as $number ) {
            $result_array[$number] = array_key_exists( $number, $prime_array );
        }

    This is because in_array is implemented with a linear search, O(n), which slows down linearly as $prime_array grows, whereas array_key_exists is implemented with a hash lookup, O(1), which does not slow down unless the hash table gets extremely populated (in which case it's only O(log n)). So far I've had to discover the big-Os via trial and error, and by occasionally looking at the source code. Now for the question: is there a list of the theoretical (or practical) big-O times for all* the built-in PHP functions?

    *or at least the interesting ones

    For example, I find it very hard to predict the big-O of the functions listed below, because the possible implementations depend on unknown core data structures of PHP: array_merge, array_merge_recursive, array_reverse, array_intersect, array_combine, str_replace (with array inputs), etc.
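
    One related data point worth having on such a list (a rough sketch; exact timings are machine- and version-dependent): isset() is a language construct rather than a function call, so it is usually cheaper still than array_key_exists(), with the caveat that it reports keys holding NULL as absent, so the cached values must be non-NULL:

        // Hash-lookup sketch: with non-NULL values, isset() is the cheapest test.
        $primes = array_fill_keys(array(2, 3, 5, 7, 11, 13, 104729), true);

        $start = microtime(true);
        for ($i = 0; $i < 1000000; $i++) {
            $found = isset($primes[104729]); // O(1), no function-call overhead
        }
        printf("isset: %.3fs for 1M lookups\n", microtime(true) - $start);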

    Read the article

  • First-chance exception in std::set destructor

    - by bartek
    Hi, I have a strange exception in my class destructor:

        First-chance exception reading location 0x00000

        class DispLst {
            // For fast instance existence test
            std::set< std::string > instances;
            [...]
        };

        DispLst::~DispLst(){
            this->clean();
            DeleteCriticalSection( &instancesGuard );
        }   // <---- here the `instances` destructor raises the exception

    Call stack (I've trimmed the template arguments for readability):

        X.exe!std::_Tree<...>::begin() Line 556 + 0xc bytes  C++
        X.exe!std::_Tree<...>::_Tidy() Line 1421 + 0x64 bytes  C++
        X.exe!std::_Tree<...>::~_Tree() Line 541  C++
        X.exe!std::set<std::string>::~set() + 0x2b bytes  C++
        X.exe!DispLst::~DispLst() Line 82 + 0xf bytes  C++

    The exact place of the error in xtree:

        void _Tidy()
        {   // free all storage
            erase(begin(), end());  // <------------------- HERE
            this->_Alptr.destroy(&_Left(_Myhead));
            this->_Alptr.destroy(&_Parent(_Myhead));
            this->_Alptr.destroy(&_Right(_Myhead));
            this->_Alnod.deallocate(_Myhead, 1);
            _Myhead = 0, _Mysize = 0;
        }

        iterator begin()
        {   // return iterator for beginning of mutable sequence
            return (_TREE_ITERATOR(_Lmost()));  // <---------------- HERE
        }

    What is going on? I'm using Visual Studio 2008.

    Read the article

  • Good hash function for a 2D index

    - by rlbond
    I have a struct called Point. Point is pretty simple:

        struct Point
        {
            Row row;
            Column column;
            // some other code for addition and subtraction of points is here too
        };

    Row and Column are basically glorified ints, but I got sick of accidentally transposing the input arguments to functions, so I gave them each a wrapper class. Right now I use a set of points, but repeated lookups are really slowing things down. I want to switch to an unordered_set. So, I want to have an unordered_set of Points. Typically this set might contain, for example, every point on an 80x24 terminal = 1920 points. I need a good hash function, and I just came up with the following:

        struct PointHash : public std::unary_function<Point, std::size_t>
        {
            result_type operator()(const argument_type& val) const
            {
                return val.row.value() * 1000 + val.column.value();
            }
        };

    However, I'm not sure that this is really a good hash function. I wanted something fast, since I need to do many lookups very quickly. Is there a better hash function I can use, or is this OK?
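
    For comparison, a sketch of a boost-style hash_combine (assuming C++11's std::hash and that value() returns int; the same idea works with std::tr1::hash or boost::hash_combine itself). Unlike row * 1000 + column, which collides as soon as column values can reach 1000, this mixes the bits of both fields across the whole word:

        #include <cstddef>
        #include <functional>

        struct PointHash
        {
            std::size_t operator()(const Point& p) const
            {
                std::size_t seed = std::hash<int>()(p.row.value());
                // 0x9e3779b9 is the boost::hash_combine constant (derived from
                // the golden ratio); the shifts spread each input's bits around
                seed ^= std::hash<int>()(p.column.value())
                        + 0x9e3779b9 + (seed << 6) + (seed >> 2);
                return seed;
            }
        };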

    Read the article

  • Read lines and remove those that don't contain certain words and don't end with a dot

    - by igo
    I want to read some text files in a folder line by line. For example, one txt contains:

        Fast and Effective Text Mining Using Linear-time Document Clustering
        Bjornar Larsen WORD2
        Chinatsu Aone SRA International AK, Inc.
        4300 Fair Lakes Cow-l
        Fairfax, VA 22033
        {bjornar-larsen, WORD1

    I want to remove every line that does not contain one of the words = word1, word2, word3, and does not end with a dot. So, from the example, the result will be:

        Bjornar Larsen WORD2
        Chinatsu Aone SRA International, Inc.
        {bjornar-larsen, WORD1

    I am confused: how do I remove the line? Is that possible? Or can we replace it with a space? Here's the code:

        $url = glob($savePath.'*.txt');
        foreach ($url as $file => $files) {
            $ori_content = file_get_contents($files);
            foreach (preg_split("/((\r?\n)|(\r\n?))/", $ori_content) as $buffer) {
                $pos1 = stripos($buffer, $word1) !== false; // stripos returns a
                $pos2 = stripos($buffer, $word2) !== false; // position or false,
                $pos3 = stripos($buffer, $word3) !== false; // so test !== false
                $last = substr($buffer, -1); // read the last character
                if (!$pos1 && !$pos2 && !$pos3 && $last != '.') {
                    // how do I remove this line?
                }
            }
        }

    Please help me, thank you so much :)

    Read the article

  • JavaScript scope chain

    - by Geromey
    Hi, I am trying to optimize my program. I think I understand the basics of closures, but I am confused about the scope chain. I know that in general you want a shallow scope (to access variables quickly). Say I have the following object:

        var my_object = (function () {
            // private variables
            var a_private = 0;
            return {
                // public variables
                a_public: 1,
                // public methods
                some_public: function () {
                    debugger;
                    alert(this.a_public);
                    alert(a_private);
                }
            };
        })();

    My understanding is that from inside the some_public method I can access the private variables faster than the public ones. Is this correct? My confusion comes from the scope level of this. When the code is stopped at debugger, Firebug shows the public variable inside the this keyword, but this is not at any scope level. How fast is accessing this? Right now I am storing any this.properties as local variables to avoid accessing them multiple times. Thanks very much!
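
    For what it's worth, a sketch of that caching pattern on a hypothetical object (the effect is usually only measurable in hot loops): this itself is resolved quickly, but each property access on it is a lookup, so hoisting a hot property into a local is the standard move:

        var counter = {
            count: 0,
            addMany: function (n) {
                var count = this.count;   // pull the hot property into a local
                for (var i = 0; i < n; i++) {
                    count += 1;           // plain local access inside the loop
                }
                this.count = count;       // write back once at the end
                return this.count;
            }
        };
        console.log(counter.addMany(1000000)); // 1000000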

    Read the article

  • What is the correct way to implement a massive hierarchical, geographical search for news?

    - by Philip Brocoum
    The company I work for is in the business of sending press releases. We want to make it possible for interested parties to search for press releases based on a number of criteria, the most important being location. For example, someone might search for all news sent to New York City, Massachusetts, or ZIP code 89134, sent from a governmental institution, under the topic of "traffic". Or whatever. The problem is, we've sent, literally, hundreds of thousands of press releases. Searching is slow and complex. For example, a press release sent to Queens, NY should show up in the search I mentioned above even though it wasn't specifically sent to New York City, because Queens is a subset of New York City. We may also want to add "and", "or", negation, and text search to the query to create complex searches. These searches also have to be fast enough to function as dynamic RSS feeds. I really don't know anything about search theory, or how it's properly done. The way we are getting by right now is using a data mart that stores the locations each release was sent to in a single table. However, because of the subset issue mentioned above, the data mart is gigantic, with millions of rows. And we haven't even implemented cities yet; there are about 50,000 cities in the United States, which will increase the size of the data mart so much that I'm afraid it just won't work anymore. Anyway, I realize this is not a simple question and there won't be a "do this" answer. However, I'm hoping one of you can point me in the right direction for learning how massive searches are done, because I really know nothing about it, and such a search engine is turning out to be incredibly difficult to make. Thanks! I know there must be a way, because if Google can search the entire internet, we must be able to search our own database :-)
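
    One direction worth naming for the subset problem (a sketch, with assumed table and column names): instead of expanding every release into rows for every containing location, store the location hierarchy once in a closure table and join through it at query time:

        -- One row per (ancestor, descendant) pair in the location hierarchy,
        -- including (x, x) itself; e.g. (New York City, Queens).
        CREATE TABLE location_closure (
            ancestor_id   INT NOT NULL,
            descendant_id INT NOT NULL,
            PRIMARY KEY (ancestor_id, descendant_id)
        );

        -- Every release visible under New York City, with each release stored
        -- only once, at the location it was actually sent to:
        SELECT r.*
        FROM press_releases r
        JOIN location_closure lc ON lc.descendant_id = r.location_id
        WHERE lc.ancestor_id = 42;  -- assumed id for New York City

    The closure table grows with the hierarchy (roughly locations times their depth), not with the number of releases, which sidesteps the row blow-up described above.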

    Read the article

  • Can you force a crash if a write occurs to a given memory location with finer than page granularity?

    - by Joseph Garvin
    I'm writing a program that for performance reasons uses shared memory (alternatives have been evaluated, and they are not fast enough for my task, so suggestions not to use it will be downvoted). In the shared memory region I am writing many structs of a fixed size. There is one program responsible for writing the structs into shared memory, and many clients that read from it. However, there is one member of each struct that clients need to write to (a reference count, which they will update atomically). All of the other members should be read-only to the clients. Because clients need to change that one member, they can't map the shared memory region as read-only. But they shouldn't be tinkering with the other members either, and since these programs are written in C++, memory corruption is possible. Ideally, it should be as difficult as possible for one client to crash another. I'm only worried about buggy clients, not malicious ones, so imperfect solutions are allowed. I can try to stop clients from overwriting by declaring the members const in the header they use, but that won't prevent memory corruption (buffer overflows, bad casts, etc.) from overwriting them. I can insert canaries, but then I have to constantly pay the cost of checking them. Instead of storing the reference count member directly, I could store a pointer to the actual data in a separate write-only mapped page, while keeping the structs in read-only mapped pages. This will work; the OS will force my application to crash if I try to write to the pointed-to data. But indirect storage can be undesirable when trying to write lock-free algorithms, because needing to follow another level of indirection can change whether something can be done atomically. Is there any way to mark smaller areas of memory such that writing to them will cause your app to blow up? Some platforms have hardware watchpoints, and maybe I could activate one of those with inline assembly, but I'd be limited to only 4 at a time on 32-bit x86, and each one could only cover part of the struct because they're limited to 4 bytes. It'd also make my program painful to debug ;)
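
    For reference, the page-granularity version of the separate-page idea looks like this (a Linux-flavored POSIX sketch; field layout and names are made up). It is exactly the indirection scheme described above, and it also shows why the question asks for something finer: the protection boundary can only sit on a page edge:

        // The refcount lives on its own writable page; the payload page is
        // remapped read-only, so a stray write to it SIGSEGVs immediately.
        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        int main(void) {
            long pagesz = sysconf(_SC_PAGESIZE);
            /* two adjacent pages: [0] read-only payload, [1] writable refcount */
            char *mem = mmap(NULL, 2 * pagesz, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (mem == MAP_FAILED) return 1;
            strcpy(mem, "payload");            /* initialize while writable */
            mprotect(mem, pagesz, PROT_READ);  /* lock the payload page */

            volatile int *refcount = (int *)(mem + pagesz);
            *refcount += 1;                    /* fine: page 1 stays writable */
            printf("refcount=%d payload=%s\n", *refcount, mem);

            /* mem[0] = 'X'; */ /* would crash: write to a read-only page */
            munmap(mem, 2 * pagesz);
            return 0;
        }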

    Read the article

  • iPhone SDK: more accurate collision detection and setting the frame/bounds of UIImageView

    - by Harry
    Hey, I'm having a big problem with my app at the moment: it's all too inaccurate. I have an image of a balloon, https://dl.dropbox.com/u/2578642/Balloonedit.png and I have a dart; if it collides with the balloon, the game ends. At the moment I am covering the image of the balloon with 8 UIImageViews, and I am detecting whether the dart hits them. This was supposed to make it really accurate, but it's not: the dart pretty much passes through the balloon when it's meant to collide. So I have a plan: is there any way to detect when the dart hits the actual image of the balloon, not the UIImageViews? Or is there any way to draw a border around the balloon and detect whether the dart hits that? Currently I am using this code to detect the collision:

        if (CGRectIntersectsRect(pinend.frame, balloonbit1.frame)) {
            [maintimer invalidate];
            accelManeger.delegate = nil;
            [ball setImage:img];
            [UIImageView beginAnimations:nil context:NULL];
            [UIImageView setAnimationDuration:0.3];
            ball.transform = CGAffineTransformMakeScale(2, 2);
            [UIImageView commitAnimations];
        }

    In one method there are 40 of these bits of code, so as you can imagine it is not very accurate or fast to respond. So, like I said, is there a way to draw a border or something around the balloon and detect the collision between the border and the dart? I imagine it would then run a lot smoother, because it would only have to process 5 bits of code. Thanks for any help. This is a big question, so if you can answer it I will buy your app :) Cheers, Harry :/
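
    One cheap approximation worth sketching (assuming a single balloon UIImageView instead of the 8 fragments): balloons are nearly circular, so one distance check against the balloon's bounding circle is both more accurate than stacked rectangles and far cheaper than 40 rect tests:

        // Circle-vs-point sketch; `balloon` is assumed to be one UIImageView
        // covering the whole balloon image, and pinend the dart's view.
        CGPoint tip = pinend.center;             // ideally the dart's tip point
        CGPoint c   = balloon.center;
        CGFloat r   = balloon.bounds.size.width / 2.0f;
        CGFloat dx  = tip.x - c.x;
        CGFloat dy  = tip.y - c.y;
        if (dx * dx + dy * dy <= r * r) {
            [maintimer invalidate];              // balloon popped
        }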

    Read the article

  • Thumbnail fade-in/fade-out issues with specific divs

    - by Omikron
    I am using this code to hide and show a div based on which thumbnail you roll over:

        $(document).ready(function () {
            $('div.infodiv').hide();
            $(".website_thumbs a").hover(
                function () {
                    var name = $(this).attr("name");
                    $(".infodiv").stop();
                    $("." + name).fadeIn();
                },
                function () {
                    var name = $(this).attr("name");
                    $("." + name).fadeTo(7000, 1).fadeOut();
                });
        });

    The script gets the name attribute from the thumbnail and displays the div with the corresponding class. Each div shares the .infodiv class but also has a class unique to its thumbnail. The functionality is basically where I want it, but when you scroll over the thumbnails fast, some of the divs get stuck in a kind of half faded-in state and stop working unless I roll over them once more; then they slowly fade in and are usable again. I am a bit new to jQuery and would appreciate any help.
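
    A likely culprit, sketched below: .stop() with no arguments halts an animation at its current opacity, which is exactly the stuck half-faded state; .stop(true, true) clears the queue and jumps each element to its end state before the next fade starts:

        $(".website_thumbs a").hover(
            function () {
                var name = $(this).attr("name");
                $(".infodiv").stop(true, true).hide(); // finish & reset the rest
                $("." + name).stop(true, true).fadeIn();
            },
            function () {
                var name = $(this).attr("name");
                $("." + name).stop(true, true).fadeOut();
            });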

    Read the article

  • SQL Server Search Proper Names Full Text Index vs LIKE + SOUNDEX

    - by Matthew Talbert
    I have a database of people's names that (currently) has 35 million rows. I need to know the best method for quickly searching these names. The current system (not designed by me) simply has the first and last name columns indexed and uses "LIKE" queries, with the additional option of using SOUNDEX (though I'm not sure this is actually used much). Performance has always been a problem with this system, so currently the searches are limited to 200 results (which still takes too long to run). So, I have a few questions:

    - Does a full-text index work well for proper names?
    - If so, what is the best way to query proper names? (CONTAINS, FREETEXT, etc.)
    - Is there some other system (like Lucene.net) that would be better?

    Just for reference, I'm using Fluent NHibernate for data access, so methods that work well with that will be preferred. I'm using SQL Server 2008 currently.

    EDIT: I want to add that I'm very interested in solutions that will deal with things like commonly misspelled names, e.g. 'smythe', 'smith', as well as first names, e.g. 'tomas', 'thomas'.

    Query plan:

        |--Parallelism(Gather Streams)
             |--Nested Loops(Inner Join, OUTER REFERENCES:([testdb].[dbo].[Test].[Id], [Expr1004]) OPTIMIZED WITH UNORDERED PREFETCH)
                  |--Hash Match(Inner Join, HASH:([testdb].[dbo].[Test].[Id])=([testdb].[dbo].[Test].[Id]))
                  |    |--Bitmap(HASH:([testdb].[dbo].[Test].[Id]), DEFINE:([Bitmap1003]))
                  |    |    |--Parallelism(Repartition Streams, Hash Partitioning, PARTITION COLUMNS:([testdb].[dbo].[Test].[Id]))
                  |    |         |--Index Seek(OBJECT:([testdb].[dbo].[Test].[IX_Test_LastName]), SEEK:([testdb].[dbo].[Test].[LastName] >= 'WHITDþ' AND [testdb].[dbo].[Test].[LastName] < 'WHITF'), WHERE:([testdb].[dbo].[Test].[LastName] like 'WHITE%') ORDERED FORWARD)
                  |    |--Parallelism(Repartition Streams, Hash Partitioning, PARTITION COLUMNS:([testdb].[dbo].[Test].[Id]))
                  |         |--Index Seek(OBJECT:([testdb].[dbo].[Test].[IX_Test_FirstName]), SEEK:([testdb].[dbo].[Test].[FirstName] >= 'THOMARþ' AND [testdb].[dbo].[Test].[FirstName] < 'THOMAT'), WHERE:([testdb].[dbo].[Test].[FirstName] like 'THOMAS%' AND PROBE([Bitmap1003],[testdb].[dbo].[Test].[Id],N'[IN ROW]')) ORDERED FORWARD)
                  |--Clustered Index Seek(OBJECT:([testdb].[dbo].[Test].[PK__TEST__3214EC073B95D2F1]), SEEK:([testdb].[dbo].[Test].[Id]=[testdb].[dbo].[Test].[Id]) LOOKUP ORDERED FORWARD)

    SQL for the above:

        SELECT *
        FROM testdb.dbo.Test
        WHERE LastName LIKE 'WHITE%'
          AND FirstName LIKE 'THOMAS%'

    Based on advice from Mitch, I created an index like this:

        CREATE INDEX IX_Test_Name_DOB
        ON Test (LastName ASC, FirstName ASC, BirthDate ASC)
        INCLUDE (and here I list the other columns)

    My searches are now incredibly fast for my typical search (last, first, and birth date).

    Read the article

  • Purpose of third-party MVCs?

    - by Honey
    I've seen many third-party MVCs or frameworks, such as CodeIgniter, CakePHP, and so on. What I want to know is: what is their purpose? I've created my own framework; call it an MVC or a framework (in my opinion they're all the same). In my framework I have all the classes in one folder called classes and all the functions in another. It's all organized, and when a new project comes in I am able to complete it fast. I have looked at the applications I mentioned, and each seems to have huge articles and tutorials to study. What is the purpose? Why not study the main languages, such as PHP, JavaScript/Ajax or jQuery, and so on, and then build something you know the ins and outs of, so that whatever project comes your way, you know what to do? I've known some people who use CakePHP, and on every project they get stuck and need to figure out what to do. Another guy I knew worked with Joomla, and for every basic company website that came his way he would reverse-engineer Joomla to make it work with the site. Are people using these applications because they lack knowledge of the languages? Or do they sometimes have no choice but to build a site while lacking that knowledge, and just put something together?

    Read the article

  • CSS Menu disappear

    - by WtFudgE
    Hi, I created a menu in HTML/CSS where I wanted the subitems to be shown on parent item hover. The problem is that when I hover over it in IE, it only shows the subitems while I hover over the text in the menu item; if I hover over the element but not the text, the subitems disappear again. So if I hover and then want to move my mouse to a submenu, the submenu disappears unless I'm fast enough. This is very annoying; does anyone know how I can solve it? My menu code is like so:

        <ul id="leftnav">
            <li><a href="#">Item1</a>
                <ul>
                    <li><a href="#">SubItem1</a></li>
                    <li><a href="#">SubItem2</a></li>
                    <li><a href="#">SubItem3</a></li>
                </ul>
            </li>
            <li><a href="#">Item2</a>
                <ul>
                    <li><a href="#">SubItem1</a></li>
                    <li><a href="#">SubItem2</a></li>
                    <li><a href="#">SubItem3</a></li>
                </ul>
            </li>
        </ul>

    The menu should be a left-sided menu which shows its subitems only on hover, so I used CSS to achieve this with the following code:

        #leftnav, #leftnav ul {
            padding: 0;
            margin: 0;
        }
        #leftnav ul li {
            margin-left: 102px;
            position: relative;
            top: -19px; /* sets the child items at the same height as the parent item */
        }
        #leftnav li {
            float: left;
            width: 100px;
        }
        #leftnav ul {
            position: absolute;
            width: 100px;
            left: -1000px; /* makes it disappear */
        }
        #leftnav li:hover ul, #leftnav li.ie_does_hover ul {
            left: auto;
        }
        #leftnav a {
            display: block;
            height: 15px;
            margin-top: 2px;
            margin-bottom: 2px;
        }

    Since this only works in Firefox, I also had to insert a script to get it to work in IE, using this code:

        <script language="JavaScript">
        sfHover = function() {
            var sfElsE = document.getElementById("leftnav").getElementsByTagName("LI");
            for (var i = 0; i < sfElsE.length; i++) {
                sfElsE[i].onmouseover = function() {
                    this.className += " ie_does_hover";
                }
                sfElsE[i].onmouseout = function() {
                    this.className = this.className.replace(new RegExp(" ie_does_hover\\b"), "");
                }
            }
        }
        if (window.attachEvent) window.attachEvent("onload", sfHover);
        </script>

    Many, many thanks for any replies.

    Read the article

  • Fastest Method to Learn Web Design for a Developer

    - by hekevintran
    I am a Web developer, and in my projects I have noticed that my weakest point is the front-end design. Relying on other designers can be annoying if they are not able to produce as quickly as I want. My perspective on HTML/CSS is that it is basically a big hack that amazingly works. There are too many CSS and browser-specific bugs/quirks to learn and remember them all without spending extreme amounts of time trying to untangle everything. Is there a fast-track route to getting CSS into my brain? I have looked at some CSS books, but to me they really read as long lists of how to render things correctly in IE6 and how to make corners rounded. (Seriously, why does it require so many tricks to make a sharp corner round? On any platform but the Web this would be called a major oversight.) Does there exist something that does for CSS what jQuery does for JavaScript? Using jQuery you don't need to know JavaScript well to make things that work. I am not interested in learning why IE6 does things in weird ways, because I don't care about supporting it at all. I am more interested in a method of learning how to use CSS to do what I want without spending hours and hours reading obscure blogs.

    Read the article

  • Google Code Jam 2010 Large DataSets Take Too Long to Submit

    - by Travis
    Hey guys, I'm participating in the 2010 Code Jam. I solved two of the problems for the small data sets, but I'm not even close to solving the large data sets within the 8-minute time frame. I'm wondering if anyone out there has solved the large data set:

    - What hardware were you running on?
    - What language were you running?
    - What performance-tuning techniques did you apply to your code to make it run as fast as possible?

    I'm writing the solutions in Ruby, which is not my day-to-day language, and executing them on my MacBook Pro. My solutions for problem A and problem C are on GitHub at http://github.com/tjboudreaux/codejam2010. I'd appreciate any suggestions you may have. FWIW, I have a lot of experience in C++ from college, my primary language is PHP, and my "sandbox" language is Ruby. Was I just a bit ambitious taking a shot at this in Ruby without knowing where the language struggles for performance, or does anyone see any red flag as to why I can't complete the large dataset in time to submit?

    Read the article

  • Why do people hate SQL cursors so much?

    - by Steven A. Lowe
    I can understand wanting to avoid a cursor due to the overhead and inconvenience, but it looks like there's some serious cursor-phobia mania going on, where people go to great lengths to avoid having to use one. For example, one question asked how to do something obviously trivial with a cursor, and the accepted answer proposed using a common table expression (CTE) recursive query with a recursive custom function, even though this limits the number of rows that can be processed to 32 (due to the recursive call limit in SQL Server). This strikes me as a terrible solution for system longevity, not to mention a tremendous effort just to avoid using a simple cursor. What is the reason for this level of insane hatred? Has some 'noted authority' issued a fatwa against cursors? Does some unspeakable evil lurk in the heart of cursors that corrupts the morals of children or something? Wiki question; I'm more interested in the answer than the rep. Thanks in advance!

    Related info: http://stackoverflow.com/questions/37029/sql-server-fast-forward-cursors

    EDIT: Let me be more precise. I understand that cursors should not be used instead of normal relational operations; that is a no-brainer. What I don't understand is people going waaaaay out of their way to avoid cursors like they have cooties or something, even when a cursor is the simpler and/or more efficient solution. It's the irrational hatred that baffles me, not the obvious technical efficiencies.

    Read the article

  • How can I serialize and communicate ActiveRecord instances across identical Rails apps?

    - by Blaine LaFreniere
    The main idea is that I have several worker instances of a Rails app, plus a main aggregate app, and I want to do something like the following pseudo-code:

        posts = Post.all.to_json(:include => { :comments => { :include => :blah } })
        # send the data to another, identical Rails app
        # ...
        # Fast forward to the separate but identical Rails app;
        # remote_posts is the posts JSON from the first app:
        posts = JSON.parse(remote_posts)
        posts.each do |post|
          p = Post.new(post)  # build the record from the received attributes
          p.save
        end

    I'm shying away from ActiveResource because I have thousands of records to create, which would mean one request per record. Unless there is a simple way to do it all in one request with ActiveResource, I'd like to avoid it. The format doesn't matter; whatever makes it convenient. The IDs don't need to be sent, because the other app will just be creating records and assigning new IDs in the "aggregate" system. The hierarchy does need to be preserved (e.g. "Hey other Rails app, I have genres, each genre has an artist, each artist has an album, and each album has songs", etc.).
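
    A sketch of what the receiving side might look like (controller and attribute names are assumed): it strips ids so the aggregate assigns its own, and wraps the batch in one transaction so a single HTTP request creates everything:

        # app/controllers/imports_controller.rb -- hypothetical endpoint on the
        # aggregate app; expects the JSON dump from the worker in the POST body.
        class ImportsController < ApplicationController
          def create
            payload = JSON.parse(request.body.read)
            Post.transaction do                           # all-or-nothing batch
              payload.each do |attrs|
                comments = attrs.delete("comments") || []
                post = Post.create!(attrs.except("id"))   # let this app pick ids
                comments.each do |c|
                  post.comments.create!(c.except("id", "post_id"))
                end
              end
            end
            head :ok
          end
        end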

    Read the article

  • Process of connecting RTP with SIP via SDP and land lines

    - by TacB0sS
    Hello everyone, I have a problem with starting a media session and combining it with my SIP client. I've designed a recursive SIP client that reuses the same request template to send subsequent requests to the server, following the acceptable sequences noted in the RFCs and the examples I've read. As far as I can tell, the SIP part is working fine: it registers to the server, invites, and authenticates. I haven't completed any calls to clients yet, because the content header still needs to be filled in (which I haven't done, so I get a 503 from the server; that is OK, I guess). For a long time I didn't know where to start with the media session; I slowly learned how to use JMF, and I've constructed an object that handles RTP transmission. Now I'm standing at a crossroads: on the one hand I have my SIP signaling, but it needs the SDP content header to complete the INVITE; on the other, I have the RTP object, which knows how to work peer-to-peer. To complete my design, I need your help with the following questions:

    1. Is there an easy, simple, or already-implemented way to convert the audio/video format from JMF into SDP media headers? Or even a generator into which I could input all the parameters for the content header and have it generate one quickly, or do I have to implement this myself?
    2. Once I've finished constructing the SDP, and the SIP is up and running, and I get an OK response from the server (after ringing and all), how do I start the media session? Do I connect peer-to-peer according to the caller details I sent in the SIP INVITE?
    3. If 2 is correct, then how would a connection to land lines work? Do land lines know that, once they send an OK back to the server, they should listen for or start an RTP session on a specific port?

    Or did I get everything wrong? :-/ I really appreciate any help I can get. I looked everywhere for answers, but they are not clear; they ignore question 2 as if it were obvious, but for me it just isn't. Thanks in advance, Adam Zehavi.
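
    For reference while untangling question 1: the SDP body carried inside the INVITE (with Content-Type: application/sdp) looks roughly like this minimal audio-only offer, with made-up addresses and ports:

        v=0
        o=adam 2890844526 2890844526 IN IP4 192.0.2.10
        s=call
        c=IN IP4 192.0.2.10
        t=0 0
        m=audio 49170 RTP/AVP 0 8
        a=rtpmap:0 PCMU/8000
        a=rtpmap:8 PCMA/8000

    The m= line names the local RTP port and the offered payload types, and the a=rtpmap lines map each payload number to a codec; that codec mapping is the piece that has to be derived from the JMF formats (PCMU/8000 corresponds to JMF's 8 kHz u-law audio).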

    Read the article

  • Cassandra random read speed

    - by Jody Powlette
    We're still evaluating Cassandra for our data store. As a very simple test, I inserted a value for 4 columns into the Keyspace1/Standard1 column family on my local machine, amounting to about 100 bytes of data. Then I read it back by row key as fast as I could: I can read it back at 160,000/second. Great. Then I put in a million similar records, all with keys of the form X.Y where X in (1..10) and Y in (1..100,000), and queried for a random record. Performance fell to 26,000 queries per second. This is still well above the number of queries we need to support (about 1,500/sec). Finally I put in ten million records, from 1.1 up through 10.1000000, and randomly queried for one of the 10 million. Performance is abysmal at 60 queries per second, and my disk is thrashing around like crazy. I also verified that if I ask for a subset of the data, say the 1,000 records between 3,000,000 and 3,001,000, it returns slowly at first, and then as they cache it speeds right up to 20,000 queries per second and my disk stops going crazy. I've read all over that people are storing billions of records in Cassandra and fetching them at 5-6k per second, but I can't get anywhere near that with only 10 million records. Any idea what I'm doing wrong? Is there some setting I need to change from the defaults? I'm on an overclocked Core i7 box with 6 GB of RAM, so I don't think it's the machine. Here's my code to fetch records, which I spawn into 8 threads, asking for one value from one column via row key:

        ColumnPath cp = new ColumnPath();
        cp.Column_family = "Standard1";
        cp.Column = utf8Encoding.GetBytes("site");
        string key = (1 + sRand.Next(9)) + "." + (1 + sRand.Next(1000000));
        ColumnOrSuperColumn logline = client.get("Keyspace1", key, cp, ConsistencyLevel.ONE);

    Thanks for any insights.

    Read the article

  • Small-o(n^2) implementation of Polynomial Multiplication

    - by AlanTuring
    I'm having a little trouble with this problem from the back of my book. I'm currently in the middle of test prep, but I can't seem to locate anything regarding it in the book. Anyone got an idea?

        A real polynomial of degree n is a function of the form
        f(x) = a_n x^n + ... + a_1 x + a_0, where a_n, ..., a_1, a_0 are real
        numbers. In computational situations, such a polynomial is represented
        by the sequence of its coefficients (a_0, a_1, ..., a_n). Assuming that
        any two real numbers can be added/multiplied in O(1) time, design an
        o(n^2)-time algorithm to compute, given two real polynomials f(x) and
        g(x), both of degree n, the product h(x) = f(x)g(x). Your algorithm
        should **not** be based on the Fast Fourier Transform (FFT) technique.

    Please note it needs to be little-o(n^2), which means its complexity must be sub-quadratic. The obvious solution I keep finding is indeed the FFT, but of course I can't use that. There is another method I have found, called convolution, where you take polynomial A to be a signal and polynomial B to be a filter: A passed through B yields a shifted signal that has been "smoothed" by A, and the result is A*B. This is supposed to work in O(n log n) time. Of course, I am completely unsure of the implementation. If anyone has any ideas of how to achieve a little-o(n^2) implementation, please do share. Thanks.
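
    The standard FFT-free answer here is Karatsuba's trick: split each polynomial as f = f0 + x^m f1 and compute the product from three recursive multiplications instead of four, using f0*g1 + f1*g0 = (f0+f1)(g0+g1) - f0*g0 - f1*g1. The recurrence T(n) = 3T(n/2) + O(n) solves to O(n^log2(3)) ≈ O(n^1.585), which is sub-quadratic. A rough sketch in Python (coefficient lists, lowest degree first; correct but not tuned):

        def poly_add(a, b):
            n = max(len(a), len(b))
            return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
                    for i in range(n)]

        def karatsuba(f, g):
            if len(f) <= 1 or len(g) <= 1:          # base case: schoolbook
                return [x * y for x in f for y in g] or [0]
            m = max(len(f), len(g)) // 2
            f0, f1 = f[:m], f[m:]                   # f = f0 + x^m * f1
            g0, g1 = g[:m], g[m:]
            low  = karatsuba(f0, g0)
            high = karatsuba(f1, g1)
            mid  = karatsuba(poly_add(f0, f1), poly_add(g0, g1))
            mid  = [mid[i] - (low[i] if i < len(low) else 0)
                           - (high[i] if i < len(high) else 0)
                    for i in range(len(mid))]
            out = [0] * (len(f) + len(g) - 1)
            for i, c in enumerate(low):  out[i] += c
            for i, c in enumerate(mid):  out[i + m] += c
            for i, c in enumerate(high): out[i + 2 * m] += c
            return out

        # (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2
        print(karatsuba([1, 2], [3, 4]))  # [3, 10, 8]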

    Read the article

  • RS-232 communication: general timing question

    - by Sunny Dee
    Hi, I have a piece of hardware which sends out a byte of data representing a voltage signal at a frequency of 100 Hz over the serial port. I want to write a program that will read in the data so I can plot it. I know I need to open the serial port and open an input stream. But this next part is confusing me, and I'm having trouble understanding the process conceptually: I create a while loop that reads in the data from the input stream one byte at a time. How do I get the while loop's timing right so that there is always a byte available to be read whenever it reaches the read-byte line? I'm guessing that I can't just put a sleep function inside the while loop and try to match it to the hardware sample rate. Is it just a matter of continually reading the input stream in the while loop, so that if the loop is too fast it simply does nothing (since there's no new data), and if it's too slow the data accumulates in the input stream's buffer? Like I said, I'm only trying to understand this conceptually, so any guidance would be much appreciated! I'm guessing the idea is independent of which programming language I'm using, but if not, assume it is for use in Java. Thanks!
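
    Conceptually, a blocking read does the pacing for you. A sketch of the usual Java loop (the stream is assumed to come from an already-opened serial port, e.g. via the javax.comm/RXTX APIs, and the voltage scaling is made up): InputStream.read() blocks until a byte arrives, so the loop self-paces to the device's 100 Hz rate with no sleep() needed, and brief slowdowns just queue bytes in the OS serial buffer:

        import java.io.IOException;
        import java.io.InputStream;

        public class VoltageReader {
            public static void readForever(InputStream in) throws IOException {
                int b;
                while ((b = in.read()) != -1) {       // blocks until the next byte
                    double volts = (b / 255.0) * 5.0; // hypothetical 0-5 V scaling
                    System.out.println(volts);        // hand off to the plotter here
                }
            }
        }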

    Read the article

  • Specific programming text editor for simple open/close editing

    - by queen3
    I'm looking for a very specific text editor:

    - Closes on ESC; no project management or tabs
    - Syntax highlighting - preferably with color themes (e.g. one that can apply different color themes without changing the C# coloring definition) or, at least, one that can load/save themes; support for C/C#/XML/HTML/JavaScript/etc. - the common MS/.NET world - out of the box
    - Configurable keys, or: Shift-Tab shifts blocks
    - XML/HTML auto-completion support - well, optional

    I use the synplus plugin for Total Commander currently, but it has a few drawbacks (e.g. it crashes sometimes ;-), no auto-completion, etc.). Basically I want a fast, Visual-Studio-like editor that I open, do edits in, and then close using ESC. I remember trying Notepad++ and others - most of them open files in tabs and don't close on ESC - that is, they behave like IDEs. I've just downloaded Notepad++ again; it doesn't close on ESC even if I set up key bindings to do so. Auto-completion is optional (though it need be only as simple as tag completion). What I really look for is closing on ESC, not getting in the way with all the tabs and IDE-like behavior, and good coloring. Plus, Shift-Tab is a must-have for block manipulation.

    Update: is there an open-source one that I can easily tweak to close on ESC? ;-) It seems ESC (and reasonable color highlighting) is the core requirement. I've just tried many editors - Programmer's Notepad, E, Crimson, etc. - and I can't set any of them to close on ESC. Any external tool to close the selected program on ESC? ;-)

    UPDATE: Hm, found an awesome utility for my latest thought: http://www.autohotkey.com. It is easy to set up to close any program on ESC (as well as many other tricks). It seems the toughest requirement is gone - I can use ANY text editor ;-)
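
    A minimal sketch of that AutoHotkey idea (the executable name below is a placeholder to swap for whatever editor you pick):

        ; Close the active window on Esc, but only while the chosen editor is active.
        #IfWinActive ahk_exe notepad2.exe   ; placeholder: match your editor here
        Esc::WinClose, A
        #IfWinActive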

    Read the article

  • jQuery - Could use a little help with a content loader

    - by Kenny Bones
    Hi, I'm not very elite when it comes to JavaScript, especially the syntax. So I'm trying to learn. And in this process I'm trying to implement a content loader that basically removes all content from a div and inserts content from another div from a different document. I've tried to do this on this site: www.matkalenderen.no - Check the butt ugly link there. See what happens? I've taken the example from this site: http://nettuts.s3.cdn.plus.org/011_jQuerySite/sample/index.html#index But I'm not sure this example actually works the way I think it does. I mean, if the code just wipes out existing content from a div and inserts content from another div, why does the other webpages in this example include doctype and heading etc etc? Wouldn't you just need the div and it's content? Without all the other stuff "around"? Maybe I don't get how this works though. Thought it worked mosly like include really. This is my code however: $(document).ready(function() { var hash = window.location.hash.substr(1); var href = $('#dynloader a').each(function(){ var href = $(this).attr('href'); if(hash==href.substr(0,href.length-5)){ var toLoad = hash+'.html #container'; $('#container').load(toLoad) } }); $('#dynloader a').click(function(){ var toLoad = $(this).attr('href')+' #container'; $('#container').hide('fast',loadcontainer); $('#load').remove(); $('#wrapper').append('<span id="load">LOADING...</span>'); $('#load').fadeIn('normal'); window.location.hash = $(this).attr('href').substr(0,$(this).attr('href').length-5); function loadcontainer() { $('#container').load(toLoad,'',showNewcontainer()) } function showNewcontainer() { $('#container').show('normal',hideLoader()); } function hideLoader() { $('#load').fadeOut('normal'); } return false; }); });

    Read the article
