Search Results

Search found 19913 results on 797 pages for 'bit packing'.

Page 188 of 797

  • Can the size of a structure change after it is compiled?

    - by Sarah Altiva
    Hi, suppose you have the following structure:

        #include <windows.h> // BOOL is here.
        #include <stdio.h>

        typedef struct {
            BOOL someBool;
            char someCharArray[100];
            int someIntValue;
            BOOL moreBools, anotherOne, yetAgain;
            char someOthercharArray[23];
            int otherInt;
        } Test;

        int main(void) {
            printf("Structure size: %d, BOOL size: %d.\n", sizeof(Test), sizeof(BOOL));
        }

    When I compile this piece of code on my machine (32-bit OS) the output is the following:

        Structure size: 148, BOOL size: 4.

    I would like to know if, once compiled, these values may change depending on the machine that runs the program. E.g.: if I ran this program on a 64-bit machine, would the output be the same? Or once it's compiled, will it always be the same? Thank you very much, and forgive me if the answer to this question is obvious.
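    A minimal sketch (not part of the original post) that may help here: sizeof and member offsets are fixed when the binary is built, so the already-compiled executable prints the same numbers on any machine that can run it; only recompiling (for example with a different compiler or packing pragma) can change them. The BOOL typedef below is a stand-in for the windows.h one, which is a 4-byte int, so the snippet stays self-contained; the values shown in the comments assume the usual 4-byte alignment.

        // Sketch: print the struct size and a couple of member offsets to see
        // where the compiler inserted padding. All of these values are
        // compile-time constants baked into the binary.
        #include <cstdio>
        #include <cstddef>

        typedef int BOOL; // stand-in for the windows.h typedef (4 bytes)

        typedef struct {
            BOOL someBool;
            char someCharArray[100];
            int someIntValue;
            BOOL moreBools, anotherOne, yetAgain;
            char someOthercharArray[23];
            int otherInt;
        } Test;

        int main() {
            std::printf("size = %zu\n", sizeof(Test)); // 148 with 4-byte alignment
            std::printf("someIntValue at %zu, otherInt at %zu\n",
                        offsetof(Test, someIntValue), offsetof(Test, otherInt));
            return 0;
        }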

    Read the article

  • How to split this string and identify the first sentence after the last '*'?

    - by DaveDev
    I have to get a quick demo ready for a client, so this is a bit hacky. Please don't flame me too much! :-) I'm getting a string similar to the following back from the database:

        The object of the following is to do: * blah 1 * blah 2 * blah 3 * blah 4. Some more extremely uninteresting text. Followed by yet another sentence full of extremely uninteresting text. Thankfully this is the last sentence.

    I need to format this so that each * represents a bullet point, and the sentence after the last * goes onto a new line, ideally as follows:

        The object of the following is to do:
        * blah 1
        * blah 2
        * blah 3
        * blah 4.
        Some more extremely uninteresting text. Followed by yet another sentence full of extremely uninteresting text. Thankfully this is the last sentence.

    It's easy enough to split the string by the * character and replace it with <br /> *. I'm using the following for that:

        string description = GetDescription();
        description = description.Replace("*", "<br />*"); // it's going onto a web page.

    But the result this gives me leaves the final sentence on the same line as the last bullet. I'm having a bit of difficulty identifying the first sentence after the last '*' so I can put a break there too. Can somebody show me how to do this?

    Read the article

  • ASP.NET DynamicData: What's happening during an update?

    - by Jens A.
    I am using ASP.NET DynamicData (based on LINQ to SQL) on my site for basic scaffolding. On one table I have added additional properties that are not stored in the table, but are retrieved from somewhere else (profile information for a user account, in this case). They are displayed just fine, but when editing these values and pressing "Update", they are not changed. Here's what the properties look like; the table is the standard aspnet_Users table:

        public String Address
        {
            get
            {
                UserProfile profile = UserProfile.GetUserProfile(UserName);
                return profile.Address;
            }
            set
            {
                UserProfile profile = UserProfile.GetUserProfile(UserName);
                profile.Address = value;
                profile.Save();
            }
        }

    When I fired up the debugger, I noticed that for each update the set accessor is called three times: once with the new value, but on a newly created instance of the user; then once with the old value, again on a new instance; and finally with the old value on the existing instance. Wondering a bit, I checked the properties created by the designer, and they, too, are called three times in (almost) the same fashion. The only difference is that the last call contains the new value for the property. I am a bit stumped here. Why three times, and why are my new properties behaving differently? I'd be grateful for any help on that matter! =)

    Read the article

  • Is there a way to receive data as unsigned char over UDP on Qt?

    - by user269037
    I need to send floating point numbers over a UDP connection to a Qt application. In Qt the only function available is

        qint64 readDatagram ( char * data, qint64 maxSize, QHostAddress * address = 0, quint16 * port = 0 )

    which accepts data in the form of a signed character buffer. I can convert my float into a string and send it, but it will obviously not be very efficient to convert a 4-byte float into a much longer character buffer. I got hold of these two functions to convert a 4-byte float into an unsigned 32-bit integer for transfer over the network, which works fine for a simple C++ UDP program, but for Qt I need to receive the data as unsigned char. Is it possible to avoid converting the floating point data into a string and then sending it?

        uint32_t htonf(float f)
        {
            uint32_t p;
            uint32_t sign;

            if (f < 0) { sign = 1; f = -f; }
            else { sign = 0; }

            p = ((((uint32_t)f)&0x7fff)<<16) | (sign<<31); // Whole part and sign.
            p |= (uint32_t)(((f - (int)f) * 65536.0f))&0xffff; // Fraction.

            return p;
        }

        float ntohf(uint32_t p)
        {
            float f = ((p>>16)&0x7fff); // Whole part.
            f += (p&0xffff) / 65536.0f; // Fraction.

            if (((p>>31)&0x1) == 0x1) { f = -f; } // Sign bit set.

            return f;
        }
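    One common way out, sketched below rather than taken from the original thread, is to skip the fixed-point conversion entirely: the char* buffer that readDatagram fills is just raw bytes, so the float's IEEE-754 bit pattern can be copied in and out with memcpy, and only the byte order needs fixing. This assumes both ends use IEEE-754 floats; htonl/ntohl come from the POSIX headers here, and in Qt the qToBigEndian/qFromBigEndian helpers from <QtEndian> could be used instead.

        // Sketch: serialise a float as its raw 32-bit pattern in network byte order.
        #include <cstring>
        #include <cstdint>
        #include <arpa/inet.h> // htonl / ntohl

        uint32_t packFloat(float f) {
            uint32_t bits;
            std::memcpy(&bits, &f, sizeof bits); // reinterpret the bytes, no maths
            return htonl(bits);
        }

        float unpackFloat(const char *buf) { // buf = data filled by readDatagram()
            uint32_t bits;
            std::memcpy(&bits, buf, sizeof bits);
            bits = ntohl(bits);
            float f;
            std::memcpy(&f, &bits, sizeof f);
            return f;
        }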

    Read the article

  • rake db:migrate fails when trying to do inserts

    - by anthony
    I'm trying to get a database populated so I can begin working on a project. This project is already built and I'm being brought in to help with front-end work. Problem is, I can't get rake db:migrate to do any inserts. Every time I run rake db:migrate I get this:

        ...
        == 20081220084043 CreateTimeDimension: migrating ==============================
        -- create_table(:time_dimension)
           - 0.0870s
        INSERT time_dimension(time_key, year, month, day, day_of_week, weekend, quarter) VALUES(20080101, 2008, 1, 1, 'Tuesday', false, 1)
        rake aborted!
        Could not load driver (uninitialized constant Mysql::Driver)
        ...

    I'm building on a MBP with Snow Leopard. I've installed Xcode from the disk that comes with the Mac. I've updated Ruby, installed Rails and all the needed gems. I have the 64-bit version of MySQL installed. I've tried the 32-bit version of MySQL and I've even tried installing from MacPorts (via http://www.robbyonrails.com/articles/2010/02/08/installing-ruby-on-rails-passenger-postgresql-mysql-oh-my-zsh-on-snow-leopard-fourth-edition). The mysql gem is installed using:

        sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/path/to/mysql/bin/mysql_config

    The migrate creates the tables just fine, but it dies every. single. time. it tries an insert. Any help would be great.

    Read the article

  • Logic: Best way to sample & count bytes of a 100MB+ file

    - by Jami
    Let's say I have this 170 MB file (roughly 180 million bytes). What I need to do is create a table that lists: all 4096-byte combinations found [column 'bytes'], and the number of times each byte combination appeared [column 'occurrences']. Assume two things: I can save data very fast, but I can update my saved data very slowly. How should I sample the file and save the needed information? Here are some suggestions that are (extremely) slow:

        1. Go through each 4096-byte combination in the file and save each one, but search the table first for an existing combination and update its values. This is unbelievably slow.
        2. Go through each 4096-byte combination in the file and save until 1 million rows of data are in a temporary table. Go through that table and fix the entries (combine repeating byte combinations), then copy to the big table. Repeat going through another 1 million rows of data and repeat the process. This is faster by a bit, but still unbelievably slow.

    This is kind of like taking the statistics of the file. NOTE: I know that sampling the file can generate tons of data (around 22 GB from experience), and I know that any solution posted would take a bit of time to finish. I need the most efficient saving process.
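    A minimal sketch of the "aggregate in memory, write once" idea that fits the stated constraint (saving is fast, updating is slow): count the combinations in a hash map first and only touch storage at the end. For simplicity it reads non-overlapping 4096-byte blocks; a sliding window that advances one byte at a time, as the question may intend, would use the same counting structure but produce far more distinct keys, so this assumes the distinct combinations fit in RAM.

        // Sketch: count identical 4096-byte blocks entirely in memory, then emit
        // the table in one pass instead of updating saved rows as we go.
        #include <fstream>
        #include <iostream>
        #include <string>
        #include <unordered_map>

        int main(int argc, char **argv) {
            if (argc < 2) return 1;
            std::ifstream in(argv[1], std::ios::binary);
            std::unordered_map<std::string, long long> counts;
            std::string block(4096, '\0');
            while (in.read(&block[0], block.size()))
                ++counts[block];                 // aggregate before any "save"
            for (const auto &kv : counts)        // single bulk write at the end
                std::cout << kv.second << '\n';  // bytes column omitted for brevity
            return 0;
        }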

    Read the article

  • md5_file() not working with IP addresses?

    - by Rob
    Here is my code relating to the question:

        $theurl = trim($_POST['url']);
        $md5file = md5_file($theurl);
        if ($md5file != '96a0cec80eb773687ca28840ecc67ca1') {
            echo 'Hash doesn\'t match. Incorrect file. Reupload it and try again';

    When I run this script, it doesn't even output an error. It just stops. It loads for a bit, and then it just stops. Further down the script I implement it again, and it fails here, too:

        while($row=mysql_fetch_array($execquery, MYSQL_ASSOC)){
            $hash = @md5_file($row['url']);
            $url = $row['url'];
            mysql_query("UPDATE urls SET hash='" . $hash . "' WHERE url='" . $url . "'") or die("MySQL error: ".mysql_error());
            if ($hash != '96a0cec80eb773687ca28840ecc67ca1'){
                $status = 'down';
            }else{
                $status = 'up';
            }
            mysql_query("UPDATE urls SET status='" . $status . "' WHERE url='" . $url . "'") or die("MySQL error: ".mysql_error());
        }

    It checks all the URLs just fine until it gets to one with an IP instead of a domain, such as http://188.72.215.195/config.php, at which point the script again loads for a bit and then stops. Any help would be much appreciated; if you need any more information, just ask.

    Read the article

  • Handling primary key duplicates in a data warehouse load

    - by Meff
    I'm currently building an ETL system to load a data warehouse from a transactional system. The grain of my fact table is the transaction level. In order to ensure I don't load duplicate rows I've put a primary key on the fact table, which is the transaction ID. I've encountered a problem with transactions being reversed - In the transactional database this is done via a status, which I pick up and I can work out if the transaction is being done, or rolled back so I can load a reversal row in the warehouse. However, the reversal row will have the same transaction ID and so I get a primary key violation. I've solved this for now by negating the primary key, so transaction ID 1 would be a payment, and transaction ID -1 (In the warehouse only) would be the reversal. I have considered an alternative of generating a BIT column, where 0 is normal and 1 is reversal, then making the PK the transaction ID and the BIT column. My question is, is this a good practice, and has anyone else encountered anything like this? For reference, this is a payment processing system, so values will not be modified, so there will only ever be transactions and reversals.

    Read the article

  • Patterns and Libraries for working with raw UI values.

    - by ProfK
    By raw values, I mean the application-level values provided by UI controls, such as the Text property on a TextBox. Too often I find myself writing code to check and parse such values before they get used as a business-level value, e.g. PaymentTermsNumDays. I've mitigated a lot of the spade work with rough-and-ready extension methods like String.ToNullableInt, but we all know that just isn't right. We can't put the whole world on String's shoulders. Do I look at tasking my UI to provide business values, using a ruleset pushed out from the server app, or open my business objects up a bit to do the required sanitising etc. as required? Neither of these approaches sits quite right with me; the first seems closer to ideal, but quite a bit of work, while the latter doesn't show much respect for the business objects' single responsibility. The responsibilities of the UI are a closer match. Between these extremes, I could also just implement another DTO layer, an IoC container with sanitising and parsing services, derive enhanced UI controls, or stick to copy-and-paste inline drudgery.

    Read the article

  • Where to turn upon realizing I can't program my way out of a paper bag?

    - by luminarious
    I have no job and just enough money to get by until April or so. While looking for work, I figured I might as well go through with a pet project, a browser-based card game. Make it nice and free, collect donations and maybe earn enough for a movie ticket to escape reality for a while. I have dabbled in web development a bit. I can make simple stuff happen with JS/PHP if I follow tutorials. I designed my own art blog's template - http://luminarious.tumblr.com. I can visualise the game working in my head, flowcharts and everything. But then I tried to go deeper with Javascript and almost had an aneurysm before understanding what a closure is. Whether I suck at learning, have ADD or fail epically at productivity, I have not got much done. Coming up with ideas, screen mock-ups and so forth was very enjoyable, but actual implementation.. not so much. In fact, I cry a bit every time I think about the time someone competent could have finished this in. I'd like to excuse myself with my ENTP personality type, but that hardly solves anything. Rather, I'd like to know how to get from A (bunch of ideas with little semblance to a web app) to B (something to proudly show others) while being unable to pay anyone. Are there any secret techniques for learning? Is there any way to get mentoring or code review? Is there anyone with too much free time willing to code for me? How do I trust someone not to steal my code when I ask for assistance? Is there anything I should have asked instead of any of those?

    Read the article

  • Utility for indexing a directory?

    - by achacha
    Here is what I am trying to do: I have a directory (with sub-directories) with source files, I need to index them so I can find files fast (find as I type) so I can open them for compare/analysis. I don't want it to scan the content, just filename index for quick lookup. I do this when trying to determine if a class exists in a given tree (we maintain directory trees for each release which has a lot of files) and sometimes I want to quickly check files to see how something was implemented, etc. Most of these directories are on remote servers (sometimes on the other side of the world) or on a VM (which is on a server far away), so I only want to read the directory trees once, which is why running find every time is way too slow and doing 'find . foo.txt' and then searching that is a bit tedious. It's kind of like how "Find Resource" works in eclipse after it indexes all files, but it's a bit of a chore to import/remove directories into eclipse every time. Eclipse is also very slow when dealing with remote volumes. Any suggestions are appreciated :)

    Read the article

  • Android Plugin for Eclipse problem

    - by tbneff
    I am using a laptop with Windows 7 Home Prem x64. I have installed Java JDK1.6.0_18 and Eclipse Gallileo. I have downloaded and installed the latest version of Android SDK with several Platforms loaded and a AVD defined. I can install the Android Eclipse plugin from the remote site stated in the instructions. The plugin installation performs without any errors and I can verify that the plugins are indeed installed. My problem begins when I go to Windows - Preferences, there is no Android section to configure. And when I go to File - New - Project, there is no Android Project to choose. I have uninstalled the plugin and reinstalled at least 10 times, trying different things and still no luck. I originally had the 64 bit version of the JDK installed, but removed it and installed the 32 bit version. Has anyone heard of this type of problem? Is it because I am using Windows 7? Thanks for any help. tbneff

    Read the article

  • Coding the R-ight way - avoiding the for loop

    - by mropa
    I am going through one of my .R files and, by cleaning it up a little bit, I am trying to get more familiar with writing the code the r-ight way. As a beginner, one of my favorite starting points is to get rid of the for() loops and try to transform the expression into a functional programming form. So here is the scenario: I am assembling a bunch of data.frames into a list for later usage.

        dataList <- list(dataA, dataB, dataC, dataD, dataE)

    Now I'd like to take a look at each data.frame's column names and substitute certain character strings. E.g. I'd like to substitute each "foo" and "bar" with "baz". At the moment I am getting the job done with a for() loop, which looks a bit awkward.

        colnames(dataList[[1]])
        [1] "foo"  "code" "lp15" "bar"  "lh15"
        colnames(dataList[[2]])
        [1] "a"    "code" "lp50" "ls50" "foo"

        matchVec <- c("foo", "bar")
        for (i in seq(dataList)) {
          for (j in seq(matchVec)) {
            colnames(dataList[[i]])[grep(pattern=matchVec[j], x=colnames(dataList[[i]]))] <- c("baz")
          }
        }

    Since I am working here with a list, I thought about the lapply function. My attempts at handling the job with the lapply function all seem to look alright, but only at first sight. If I write

        f <- function(i, xList) {
          gsub(pattern=c("foo"), replacement=c("baz"), x=colnames(xList[[i]]))
        }
        lapply(seq(dataList), f, xList=dataList)

    the last line prints out almost what I am looking for. However, if I take another look at the actual names of the data.frames in dataList:

        lapply(dataList, colnames)

    I see that no changes have been made to the initial character strings. So how can I rewrite the for() loop and transform it into a functional programming form? And how do I substitute both strings, "foo" and "bar", in an efficient way, since the gsub() function takes as its pattern argument only a character vector of length one?

    Read the article

  • hash fragments and collisions cont.

    - by Mark
    For this application of mine I feel like I can get away with a 40-bit hash key, which seems awfully low, but see if you can confirm my reasoning (I want a small key because I want a small filename, and the key will be converted to a filename). (Note: only accidental collisions are a concern - no security issues.) A key point here is that the population in question is divided into groups, and a collision is only relevant if it occurs within the same group. A "group" is a directory on a user's system (the contents of files are hashed, and a collision is only relevant if it occurs for files within the same directory). So speculating roughly 100,000 potential users, say 2^17, that corresponds to 2^18 "groups" assuming 2 directories per user on average. So with a 40-bit key I can expect 2^(20+9) files created (among all users) before a collision occurs for some user somewhere. (Or IOW 2^((40+18)/2), due to the "birthday effect".) That's an average of 4096 unique files created per user, for 2^17 users, before a single collision occurs for some user somewhere. And then that long again before another collision occurs somewhere (right?)
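    A quick check of the arithmetic (my own sketch, not from the thread), using the usual square-root birthday approximation with the key width b = 40 bits and G = 2^18 groups, and assuming files are spread evenly over the groups; constant factors near 1 are ignored:

        N \approx \sqrt{2^{b} \cdot G} = \sqrt{2^{40} \cdot 2^{18}} = 2^{29}\ \text{files},
        \qquad \frac{2^{29}}{2^{17}\ \text{users}} = 2^{12} = 4096\ \text{files per user},

    which matches the figures quoted in the question.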

    Read the article

  • Good C++ array class for dealing with large arrays of data in a fast and memory efficient way?

    - by Shane MacLaughlin
    Following on from a previous question relating to heap usage restrictions, I'm looking for a good standard C++ class for dealing with big arrays of data in a way that is both memory efficient and speed efficient. I had been allocating the array using a single malloc/HeapAlloc, but after multiple tries using various calls I keep falling foul of heap fragmentation. So the conclusion I've come to, other than porting to 64 bit, is to use a mechanism that allows me to have a large array spanning multiple smaller memory fragments. I don't want an alloc per element as that is very memory inefficient, so the plan is to write a class that overrides the [] operator and selects an appropriate element based on the index. Is there already a decent class out there to do this, or am I better off rolling my own? From my understanding, and some googling, a 32-bit Windows process should theoretically be able to address up to 2GB. Now assuming I've got 2GB installed, and various other processes and services are hogging about 400MB, how much usable memory do you think my program can reasonably expect to get from the heap? I'm currently using various flavours of Visual C++.
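    A minimal sketch of the "large array over many small blocks" idea (my own illustration, not from the thread; std::deque<T> already behaves much like this out of the box): the logical array is split into fixed-size chunks so that no single contiguous allocation is ever larger than one chunk, which sidesteps fragmentation of the address space.

        // Sketch: a chunked array with operator[], backed by many small vectors.
        #include <cstddef>
        #include <vector>

        template <typename T, std::size_t ChunkSize = 64 * 1024>
        class ChunkedArray {
        public:
            explicit ChunkedArray(std::size_t n)
                : size_(n), chunks_((n + ChunkSize - 1) / ChunkSize) {
                for (auto &chunk : chunks_)
                    chunk.resize(ChunkSize);          // each allocation stays small
            }
            T &operator[](std::size_t i) {            // pick the chunk, then the slot
                return chunks_[i / ChunkSize][i % ChunkSize];
            }
            const T &operator[](std::size_t i) const {
                return chunks_[i / ChunkSize][i % ChunkSize];
            }
            std::size_t size() const { return size_; }
        private:
            std::size_t size_;
            std::vector<std::vector<T>> chunks_;
        };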

    Read the article

  • Ruby on Rails or PHP... Somehow uncertain

    - by Clamm
    Hi everybody... I've been thinking about this a long time now, but I want to hear your opinion because I always received the best answers here. So in advance... thank you guys. Right now I have to make this decision: shift a prototype web service to production quality, and choose either Ruby or PHP... (Background: a friend of mine is joining the project and prefers Rails.) I've already played around a bit with RoR (only basic stuff), but I am really disappointed with the documentation of Rails and Ruby. Compared to PHP I find only fragments, or references that are hard to use. In the end I am a bit scared. I don't want to waste my time only to realize that I am not capable of doing something in Ruby that I could do with PHP. Maybe only because I am too stupid and don't find a proper explanation ;-) Did anyone experience this shift and can tell me how easy/hard it was to switch from PHP to Ruby? E.g. would you recommend programming it in PHP and using MVC as a base pattern? Thanks for your opinion!!!

    Read the article

  • Limiting number of text lines in a table cell

    - by Kuzco
    I have a table cell where I need to limit the text to a max of two lines. I tried achieving this by placing an inner div with a limited height:

        div {
            border: 1px solid #EECCDD;
            width: 100px;
            height: 40px;
            overflow: hidden;
        }

        <div>
            <p>bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla bla</p>
        </div>
        <div>
            <p>bla bla bla bla</p>
        </div>

    However, in this case the cells which have only one line of text are not vertically aligned to the middle. I know there are ways to vertically align text within a div, but most of the ones I found seemed a bit complicated and/or hacky (like this one), and felt like a bit of an overkill. Is there a different way to effectively limit the number of lines inside the cell, or a simple way to align the text in the way I did it?

    Read the article

  • Image resizing efficiency in C# and .NET 3.5

    - by Matthew Nichols
    I have written a web service to resize user uploaded images and all works correctly from a functional point of view, but it causes CPU usage to spike every time it is used. It is running on Windows Server 2008 64 bit. I have tried compiling to 32 and 64 bit and get about the same results. The heart of the service is this function:

        private Image CreateReducedImage(Image imgOrig, Size NewSize)
        {
            var newBM = new Bitmap(NewSize.Width, NewSize.Height);
            using (var newGrapics = Graphics.FromImage(newBM))
            {
                newGrapics.CompositingQuality = CompositingQuality.HighSpeed;
                newGrapics.SmoothingMode = SmoothingMode.HighSpeed;
                newGrapics.InterpolationMode = InterpolationMode.HighQualityBicubic;
                newGrapics.DrawImage(imgOrig, new Rectangle(0, 0, NewSize.Width, NewSize.Height));
            }
            return newBM;
        }

    I put a profiler on the service and it seemed to indicate the vast majority of the time is spent in the GDI+ library itself and there is not much to be gained in my code. Questions: Am I doing something glaringly inefficient in my code here? It seems to conform to the examples I have seen. Are there gains to be had in using libraries other than GDI+? The benchmarks I have seen seem to indicate that GDI+ does well compared to other libraries, but I didn't find enough of these to be confident. Are there gains to be had by using "unsafe code" blocks? Please let me know if I have not included enough of the code... I am happy to put as much up as requested but don't want to be obnoxious in the post.

    Read the article

  • How to use different providers for Linq to entities?

    - by Anders Svensson
    I'm trying to familiarize myself a bit more with database programming, and I'm looking at different ways of creating a data access layer for applications. I've tried out a few ways but there is such a jungle of different database technologies that I don't know what to learn. For instance I've tried using datasets with tableadapters. Using that I am able to switch data provider rather easily (by programming against the interfaces such as IDbConnection). This is one thing I would want to achieve. But I also know everyone's talking about LINQ, and I'm trying to get to know that a bit better too. So I have tried using Linq to Sql classes as the data access layer as well, but apparently this is not provider independent (works only for SQL Server). So then I read about the Entity Framework (which just as Linq to SQL apparently has gotten its share of bashing already...). It's supposed to be provider independent everybody says, but how? I tried out a tutorial to create an entity data model, but the only providers to choose from were SQL Server/Express. Just for learning purposes, I would like to know how to use the entity framework with MS Access/OleDb. Also, I would appreciate some input on what is the preferred database technology for data access. Is it LINQ still after all the bashing, or should you just use datasets because they are provider independent? Any pointers for what to learn would be great, because it's just too much to learn it all if I'm not going to use it in the end...!

    Read the article

  • Calculating with a variable outside of its bounds in C

    - by aquanar
    If I make a calculation with a variable where an intermediate part of the calculation goes higher than the bounds of that variable type, is there any hazard that some platforms may not like? This is an example of what I'm asking:

        int a, b;
        a=30000;
        b=(a*32000)/32767;

    I have compiled this, and it does give the correct answer of 29297 (well, within truncating error, anyway). But the part that worries me is that 30,000*32,000 = 960,000,000, which is a 30-bit number, and thus cannot be stored in a 16-bit int. The end result is well within the bounds of an int, but I was expecting that whatever working part of memory is used would have the same size allocated as the largest source variables did, so an overflow error would occur. This is just a small example to show my problem; I am trying to avoid using floating points by making the fraction be a fraction of the max amount able to be stored in that variable (in this case, a signed integer, so 32767 on the positive side), because I believe the embedded system I'm using does not have an FPU. So how do most processors handle calculations out of the bounds of the source and destination variables?
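    A small sketch of the usual defensive fix (my illustration, not from the thread): on the machine where this was compiled, int is evidently at least 32 bits, so the intermediate product fits; on a 16-bit target the signed overflow would be undefined behaviour. Widening the intermediate value explicitly makes the expression safe regardless of the native int width, assuming the <cstdint>/<stdint.h> types are available on the embedded toolchain.

        // Sketch: force a 32-bit intermediate so 30000 * 32000 cannot overflow,
        // even when the platform's plain int is only 16 bits wide.
        #include <cstdint>
        #include <cstdio>

        int main() {
            int16_t a = 30000;
            int16_t b = static_cast<int16_t>(
                (static_cast<int32_t>(a) * 32000) / 32767); // 960000000 fits in 32 bits
            std::printf("%d\n", b);                          // prints 29297
            return 0;
        }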

    Read the article

  • HTML5 card game [closed]

    - by ChrisCa
    I created a card game in Silverlight a year or so ago in order to learn a bit about Silverlight. I am now wanting to make an HTML5 version of the game in an effort to learn a little bit more about that. I am thinking I'd like to take advantage of stuff like Knockout.js, WebSockets and the canvas element. Now what I'm confused about is how to lay out the cards on the screen. With Silverlight I was able to make a "Hand" control, which was made up of two sub-controls - the cards the player has in their hand and the ones they have on the table. And they in turn were made up of Card controls. Now I don't believe there is the concept of a user control in JavaScript, so I may be thinking about this in entirely the wrong way. So my question is - how could I lay out some cards on the table and perhaps make reuse of something for each player? I have a client-side JSON object called game, which contains an array of players. Each player has a hand, which is made up of an array of in-hand cards and on-table cards. Ideally I would like to bind these to something using Knockout.js - but I don't know what I could bind to. Would I simply position images (of cards) on a canvas? Is there a way to make some kind of Hand object that each player could have and that I could bind to? Any advice? Or sample code you've seen elsewhere?

    Read the article

  • How to keep jquery from descending too far?

    - by Earlz
    Hello, I am having a problem with jQuery going a bit too far in its pattern matching of CSS classes and IDs. I have some markup that looks like this:

        <div id="blah">
            <div class="level2">
                <input type="text" />
            </div>
            <div class="levelA">
                <div class="level2">
                    <input type="text" value="foo"/>
                </div>
            </div>
        </div>
        <input type="text" value="bar" />

    I want the 3 inputs to say Hello, Foo, Bar, so I have this line of jQuery:

        $('#blah .level2 input').val('hello');

    The problem now is that jQuery is a bit too liberal in its pattern matching and matches both the first and second. How can I prevent this kind of thing from happening? A live example is at http://jsbin.com/opelo3/4

    Read the article

  • 'For' Loop Within a 'For' Loop and Then Another?

    - by BenjaminFranklin
    This has confused me quite a bit, but it's also really interesting! I want to loop through a grid of 9 elements in an array and multiply them all by 1/9. Then, I want to find the sum of those 9 elements and replace each individual element's value with the value calculated as the sum. After I've done this, I want to move on to the next nine elements. To clarify, I want all 9 elements to be changed to myVal, in the code below. So far I've got the loop-within-a-loop bit, but I don't know how to then go back and replace each of the values with the sum of all of them combined. Here's my code:

        previousx = 0;
        previousy = 0;
        for (int x = previousx; x < previousx+4; x++) {
            for (int y = previousy; y < previousy+4; y++) {
                y = y*(1/9);
                yVal += y;
            }
        }

    Any advice would, of course, be greatly appreciated.
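    A minimal sketch of one way to read this (my own illustration, using a hypothetical 3x3 block type rather than the poster's variables): make two passes over the block - first accumulate the sum of the nine elements scaled by 1/9, then write that single result back into every element. Note that 1/9 has to be computed in floating point (as integer arithmetic it is 0), and the loop variable itself should never be modified, only the array elements.

        // Sketch: average the nine elements of a 3x3 block, then overwrite all
        // nine elements with that average before moving on to the next block.
        #include <array>
        #include <cstdio>

        void replaceWithMean(std::array<std::array<double, 3>, 3> &block) {
            double myVal = 0.0;
            for (const auto &row : block)        // pass 1: accumulate sum * (1/9)
                for (double v : row)
                    myVal += v * (1.0 / 9.0);
            for (auto &row : block)              // pass 2: write the result back
                for (double &v : row)
                    v = myVal;
        }

        int main() {
            std::array<std::array<double, 3>, 3> block{{{1,2,3},{4,5,6},{7,8,9}}};
            replaceWithMean(block);
            std::printf("%f\n", block[0][0]);    // prints 5.0, the mean of 1..9
            return 0;
        }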

    Read the article

  • Matching a String and then incrementing a number within HTML elements

    - by Abs
    Hello all, I have tags in an HTML list; here is an example of two tags:

        <div class="tags">
            <ul>
                <li>
                    <a onclick="tag_search('tag1');" href="#">tag1 <span class="num-active">1</span></a>
                </li>
                <li>
                    <a onclick="tag_search('tag2');" href="#">tag2 <span class="num-active">1</span></a>
                </li>
            </ul>
        </div>

    I would like to write a function that I can pass a string to, that will match the strings in the hyperlinks, i.e. "tag1" or "tag2". If there is a match, then increment the number in the span; if not, then add a new li. The bit I am having trouble with is how to search for a string in the div with class tags and then, when I find a match, identify the element. I can't even do the first bit, as I am used to using an ID or a class. I appreciate any help on this using jQuery. Thanks all. Code so far:

        function change_tag_count(item){
            alert(item); //alerts the string test
            $.fn.searchString = function(str) {
                return this.filter('*:contains("' + item + '")');
            };
            if($('body').searchString(item).length){
                var n = $('a').searchString(item).children().text();
                n = parseInt(n) + 1;
                $('a').searchString(item).children().text(n);
            }else{
                alert('here'); //does not alert this when no li contains the word test
                $("#all_tags ul").append('<a onclick="tag_search(\''+item+'\');" href="#">'+item+'<span class="num-active">1</span></a>');
            }
        }

    Read the article

  • PHP array performance

    - by dfo
    Hi, this is my first question on Stack Overflow, please bear with me. I'm testing an algorithm for 2D bin packing and I've chosen PHP to mock it up, as it's my bread-and-butter language nowadays. As you can see on http://themworks.com/pack_v0.2/oopack.php?ol=1 it works pretty well, but you need to wait around 10-20 seconds for 100 rectangles to pack. For some hard-to-handle sets it would hit PHP's 30s runtime limit. I did some profiling and it shows that most of the time my script goes through different parts of a small 2D array with 0's and 1's in it. It either checks if a certain cell equals 0/1 or sets it to 0/1. It can do such operations a million times, and each one takes a few microseconds. I guess I could use an array of booleans in a statically typed language and things would be faster. Or even make an array of 1-bit values. I'm thinking of converting the whole thing to some compiled language. Is PHP just not good for it? If I do need to convert it to, let's say, C++, how good are the automatic converters? My script is just a lot of for loops with basic array and object manipulations. Thank you!

    Edit. This function gets called more than any other. It reads a few properties of a very simple object, and goes through a very small part of a smallish array to check if there's any element not equal to 0.

        function fits($bin, $file, $x, $y) {
            $flag = true;
            $xw = $x + $file->get_width();
            $yh = $y + $file->get_height();
            for ($i = $x; $i < $xw; $i++) {
                for ($j = $y; $j < $yh; $j++) {
                    if ($bin[$i][$j] !== 0) {
                        $flag = false;
                        break;
                    }
                }
                if (!$flag) break;
            }
            return $flag;
        }
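    For a feel of what a compiled version of the hot function might look like, here is a rough hand port to C++ (a sketch written for illustration, not the output of any automatic converter; the File struct and the flat-vector bin layout are my own assumptions). The point is that the inner test becomes a plain array read on a contiguous byte buffer instead of a PHP hash-table lookup, which is where most of the speedup of a statically typed language would come from.

        // Sketch: fits() over a bin stored as a flat vector of bytes,
        // indexed as bin[i * binHeight + j].
        #include <cstddef>
        #include <cstdint>
        #include <vector>

        struct File { int width; int height; };

        bool fits(const std::vector<std::uint8_t> &bin, int binHeight,
                  const File &file, int x, int y) {
            const int xw = x + file.width;
            const int yh = y + file.height;
            for (int i = x; i < xw; ++i)
                for (int j = y; j < yh; ++j)
                    if (bin[static_cast<std::size_t>(i) * binHeight + j] != 0)
                        return false;       // occupied cell, early exit
            return true;
        }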

    Read the article
