Search Results

Search found 2942 results on 118 pages for 'linked'.


  • Autoconf -- building with static library (newbie)

    - by EB
    I am trying to migrate my application from manual build to autoconf, which is working very nicely so far. But I have one static library that I can't figure out how to integrate. That library will NOT be located in the usual library locations - the location of the binary (.a file) and header (.h file) will be given as a configure argument. (Notably, even if I move the .a file to /usr/lib or anywhere else I can think of, it still won't work.) It is also not named traditionally (it does not start with "lib" or "l"). Manual compilation is working with these (directory is not predictable - this is just an example): gcc ... -I/home/john/mystuff /home/john/mystuff/helper.a (Uh, I actually don't understand why the .a file is referenced directly, not with -L or anything. Yes, I have a half-baked understanding of building C programs.) So, in my configure.ac, I can use the relevant configure argument to successfully find the header (.h file) using AC_CHECK_HEADER. Inside the AC_CHECK_HEADER I then add the location to CPFLAGS and the #include of the header file in the actual C code picks it up nicely. Given a configure argument that has been put into $location and the name of the needed files are helper.h and helper.a (which are both in the same directory), here is what works so far: AC_CHECK_HEADER([$location/helper.h], [AC_DEFINE([HAVE_HELPER_H], [1], [found helper.h]) CFLAGS="$CFLAGS -I$location"]) Where I run into difficulties is getting the binary (.a file) linked in. No matter what I try, I always get an error about undefined references to the function calls for that library. I'm pretty sure it's a linkage issue, because I can fuss with the C code and make an intentional error in the function calls to that library which produces earlier errors that indicate that the function prototypes have been loaded and used to compile. I tried adding the location that contains the .a file to LDFLAGS and then doing a AC_CHECK_LIB but it is not found. Maybe my syntax is wrong, or maybe I'm missing something more fundamental, which would not be surprising since I'm a newbie and don't really know what I'm doing. Here is what I have tried: AC_CHECK_HEADER([$location/helper.h], [AC_DEFINE([HAVE_HELPER_H], [1], [found helper.h]) CFLAGS="$CFLAGS -I$location"; LDFLAGS="$LDFLAGS -L$location"; AC_CHECK_LIB(helper)]) No dice. AC_CHECK_LIB is looking for -lhelper I guess (or libhelper?) so I'm not sure if that's a problem, so I tried this, too (omit AC_CHECK_LIB and include the .a directly in LDFLAGS), without luck: AC_CHECK_HEADER([$location/helper.h], [AC_DEFINE([HAVE_HELPER_H], [1], [found helper.h]) CFLAGS="$CFLAGS -I$location"; LDFLAGS="$LDFLAGS -L$location/helper.a"]) To emulate the manual compilation, I tried removing the -L but that doesn't help: AC_CHECK_HEADER([$location/helper.h], [AC_DEFINE([HAVE_HELPER_H], [1], [found helper.h]) CFLAGS="$CFLAGS -I$location"; LDFLAGS="$LDFLAGS $location/helper.a"]) I tried other combinations and permutations, but I think I might be missing something more fundamental....
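
    What usually resolves this exact symptom (a sketch, assuming $location is the directory handed in through the configure argument, as above) is to put the archive's full path into LIBS rather than LDFLAGS. An archive that is not named lib*.a can never be found through -l or AC_CHECK_LIB, and LDFLAGS is emitted before the object files on the link line while LIBS comes after them, which is precisely the difference between "undefined reference" and a clean link:

      AC_CHECK_HEADER([$location/helper.h],
        [AC_DEFINE([HAVE_HELPER_H], [1], [found helper.h])
         CFLAGS="$CFLAGS -I$location"
         dnl Full path to the archive, appended to LIBS so it follows the objects.
         LIBS="$location/helper.a $LIBS"],
        [AC_MSG_ERROR([helper.h not found in $location])])

    If configure should also verify the archive itself, AC_CHECK_FILE([$location/helper.a], [], [AC_MSG_ERROR([helper.a not found])]) before that block is a cheap sanity check (note that AC_CHECK_FILE cannot run when cross-compiling).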

    Read the article

  • How would you implement this "WorkerChain" functionality in .NET?

    - by Dan Tao
    Sorry for the vague question title -- not sure how to encapsulate what I'm asking below succinctly. (If someone with editing privileges can think of a more descriptive title, feel free to change it.) The behavior I need is this. I am envisioning a worker class that accepts a single delegate task in its constructor (for simplicity, I would make it immutable -- no more tasks can be added after instantiation). I'll call this task T. The class should have a simple method, something like GetToWork, that will exhibit this behavior: If the worker is not currently running T, then it will start doing so right now. If the worker is currently running T, then once it is finished, it will start T again immediately. GetToWork can be called any number of times while the worker is running T; the simple rule is that, during any execution of T, if GetToWork was called at least once, T will run again upon completion (and then if GetToWork is called while T is running that time, it will repeat itself again, etc.). Now, this is pretty straightforward with a boolean switch. But this class needs to be thread-safe, by which I mean, steps 1 and 2 above need to comprise atomic operations (at least I think they do). There is an added layer of complexity. I have need of a "worker chain" class that will consist of many of these workers linked together. As soon as the first worker completes, it essentially calls GetToWork on the worker after it; meanwhile, if its own GetToWork has been called, it restarts itself as well. Logically calling GetToWork on the chain is essentially the same as calling GetToWork on the first worker in the chain (I would fully intend that the chain's workers not be publicly accessible). One way to imagine how this hypothetical "worker chain" would behave is by comparing it to a team in a relay race. Suppose there are four runners, W1 through W4, and let the chain be called C. If I call C.StartWork(), what should happen is this: If W1 is at his starting point (i.e., doing nothing), he will start running towards W2. If W1 is already running towards W2 (i.e., executing his task), then once he reaches W2, he will signal to W2 to get started, immediately return to his starting point and, since StartWork has been called, start running towards W2 again. When W1 reaches W2's starting point, he'll immediately return to his own starting point. If W2 is just sitting around, he'll start running immediately towards W3. If W2 is already off running towards W3, then W2 will simply go again once he's reached W3 and returned to his starting point. The above is probably a little convoluted and written out poorly. But hopefully you get the basic idea. Obviously, these workers will be running on their own threads. Also, I guess it's possible this functionality already exists somewhere? If that's the case, definitely let me know!
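
    A minimal sketch of the single-worker piece in C# (the names Worker and GetToWork come from the question; the three-value state field and the onFinished hook used for chaining are assumptions):

      using System;
      using System.Threading;

      public sealed class Worker
      {
          private readonly Action _task;        // T
          private readonly Action _onFinished;  // e.g. the next worker's GetToWork
          private int _state;                   // 0 = idle, 1 = running, 2 = running with a re-run queued

          public Worker(Action task, Action onFinished)
          {
              _task = task;
              _onFinished = onFinished;
          }

          public void GetToWork()
          {
              while (true)
              {
                  int seen = Interlocked.CompareExchange(ref _state, 1, 0);
                  if (seen == 0) { ThreadPool.QueueUserWorkItem(_ => RunLoop()); return; } // was idle: start T
                  if (seen == 2) return;                                                   // re-run already queued
                  if (Interlocked.CompareExchange(ref _state, 2, 1) == 1) return;          // running: queue one re-run
                  // The worker went idle between the two reads; retry from the top.
              }
          }

          private void RunLoop()
          {
              while (true)
              {
                  _task();
                  if (_onFinished != null) _onFinished();
                  // Nobody called GetToWork while T ran: go idle and stop.
                  if (Interlocked.CompareExchange(ref _state, 0, 1) == 1) return;
                  // Otherwise the state was 2: collapse it back to 1 and run T again.
                  Interlocked.Exchange(ref _state, 1);
              }
          }
      }

    The chain is then just workers constructed back to front, each one's onFinished delegate being the next worker's GetToWork, which gives the relay-race behaviour described above without any shared locks.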

    Read the article

  • SEO Help with Pages Indexed by Google

    - by Joe Majewski
    I'm working on optimizing my site for Google's search engine, and lately I've noticed that when doing a "site:www.joemajewski.com" query, I get results for pages that shouldn't be indexed at all. Let's take a look at this page, for example: http://www.joemajewski.com/wow/profile.php?id=3 I created my own CMS, and this is simply a breakdown of user id #3's statistics, which I noticed is indexed by Google, although it shouldn't be. I understand that it takes some time before Google's results reflect accurately on my site's content, but this has been improperly indexed for nearly six months now. Here are the precautions that I have taken: My robots.txt file has a line like this: Disallow: /wow/profile.php* When running the url through Google Webmaster Tools, it indicates that I did, indeed, correctly create the disallow command. It did state, however, that a page that doesn't get crawled may still get displayed in the search results if it's being linked to. Thus, I took one more precaution. In the source code I included the following meta data: <meta name="robots" content="noindex,follow" /> I am assuming that follow means to use the page when calculating PageRank, etc, and the noindex tells Google to not display the page in the search results. This page, profile.php, is used to take the $_GET['id'] and find the corresponding registered user. It displays a bit of information about that user, but is in no way relevant enough to warrant a display in the search results, so that is why I am trying to stop Google from indexing it. This is not the only page Google is indexing that I would like removed. I also have a WordPress blog, and there are many category pages, tag pages, and archive pages that I would like removed, and am doing the same procedures to attempt to remove them. Can someone explain how to get pages removed from Google's search results, and possibly some criteria that should help determine what types of pages that I don't want indexed. In terms of my WordPress blog, the only pages that I truly want indexed are my articles. Everything else I have tried to block, with little luck from Google. Can someone also explain why it's bad to have pages indexed that don't provide any new or relevant content, such as pages for WordPress tags or categories, which are clearly never going to receive traffic from Google. Thanks!
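
    One crawler detail worth stating explicitly (as general Googlebot behaviour, not something verified against this particular site): a URL that robots.txt disallows is never fetched, so Googlebot never sees the noindex meta tag on it, and the bare URL can stay in the results because other pages link to it. The usual approach is to drop the Disallow rule for /wow/profile.php and let the page itself answer with noindex, either through the existing meta tag or through a response header, for example at the very top of profile.php (hypothetical placement):

      <?php
      // Equivalent to the meta tag, honoured by Google, and only effective
      // once crawling of the URL is allowed again in robots.txt.
      header('X-Robots-Tag: noindex, follow');

    The same reasoning applies to the WordPress tag, category and archive pages: let them be crawled, serve noindex on them (a plugin or a header works equally well), and they drop out of the index over the following weeks.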

    Read the article

  • How do I use data from the main window in a sub-window?

    - by eagle
    I've just started working on a photo viewer type desktop AIR app with Flex. From the main window I can launch sub-windows, but in these sub-windows I can't seem to access the data I collected in the main window. How can I access this data? Or, how can I send this data to the sub-window on creation? It doesn't need to be dynamically linked. myMain.mxml <?xml version="1.0" encoding="utf-8"?> <s:WindowedApplication xmlns:fx="http://ns.adobe.com/mxml/2009" xmlns:s="library://ns.adobe.com/flex/spark" xmlns:mx="library://ns.adobe.com/flex/mx" width="260" height="200" title="myMain"> <fx:Declarations> </fx:Declarations> <fx:Script> <![CDATA[ public function openWin():void { new myWindow().open(); } public var myData:Array = new Array('The Eiffel Tower','Paris','John Doe'); ]]> </fx:Script> <s:Button x="10" y="10" width="240" label="open a sub-window" click="openWin();"/> </s:WindowedApplication> myWindow.mxml <?xml version="1.0" encoding="utf-8"?> <mx:Window name="myWindow" title="myWindow" xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute" width="640" height="360"> <mx:Script> <![CDATA[ ]]> </mx:Script> <mx:Label id="comment" x="10" y="10" text=""/> <mx:Label id="location" x="10" y="30" text=""/> <mx:Label id="author" x="10" y="50" text=""/> </mx:Window> I realize this might be a very easy question but I have searched the web, read and watched tutorials on random AIR subjects for a few days and couldn't find it. The risk of looking like a fool is worth it now, I want to get on with my first app!
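
    A common way to hand the data over (a sketch, assuming the Flex/AIR versions implied by the snippets above) is to give the sub-window a public, bindable property, assign it before open(), and bind the labels to it:

      <!-- additions to myWindow.mxml -->
      <mx:Script>
          <![CDATA[
              [Bindable] public var myData:Array;   // assumed property name
          ]]>
      </mx:Script>
      <mx:Label id="comment"  x="10" y="10" text="{myData[0]}"/>
      <mx:Label id="location" x="10" y="30" text="{myData[1]}"/>
      <mx:Label id="author"   x="10" y="50" text="{myData[2]}"/>

      // myMain.mxml
      public function openWin():void {
          var win:myWindow = new myWindow();
          win.myData = myData;   // plain property assignment before open()
          win.open();
      }

    Because the assignment happens before the window's children are created, the bindings pick the values up as soon as the labels exist; nothing needs to be dynamically linked.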

    Read the article

  • Design patterns and interview question

    - by user160758
    When I was learning to code, I read up on the design patterns like a good boy. Long after this, I started to actually understand them. Design discussions such as those on this site constantly try to make the rules more and more general, which is good. But there is a line, over which it becomes over-analysis starts to feed off itself and as such I think begins to obfuscate the original point - for example the "What's Alternative to Singleton" post and the links contained therein. http://stackoverflow.com/questions/1300655/whats-alternative-to-singleton I say this having been asked in both interviews I’ve had over the last 2 weeks what a singleton is and what criticisms I have of it. I have used it a few times for items such as user data (simple key-value eg. last file opened by this user) and logging (very common i'm sure). I've never ever used it just to have what is essentially global application data, as this is clearly stupid. In the first interview, I reply that I have no criticisms of it. He seemed disappointed by this but as the job wasn’t really for me, I forgot about it. In the next one, I was asked again and, as I wanted this job, I thought about it on the spot and made some objections, similar to those contained in the post linked to above (I suggested use of a factory or dependency injection instead). He seemed happy with this. But my problem is that I have used the singleton without ever using it in this kind of stupid way, which I had to describe on the spot. Using it for global data and the like isn’t something I did then realised was stupid, or read was stupid so didn’t do, it was just something I knew was stupid from the start. Essentially I’m supposed to be able to think of ways of how to misuse a pattern in the interview? Which class of programmers can best answer this question? The best ones? The medium ones? I'm not sure.... And these were both bright guys. I read more than enough to get better at my job but had never actually bothered to seek out criticisms of the most simple of the design patterns like this one. Do people think such questions are valid and that I ought to know the objections off by heart? Or that it is reasonable to be able to work out what other people who are missing the point would do on the fly? Or do you think I’m at least partially right that the question is too unsubtle and that the questions ought to be better thought out in order to make sure only good candidates can answer. PS. Please don’t think I’m saying that I’m just so clever that I know everything automatically - I’ve learnt the hard way like everyone else. But avoiding global data is hardly revolutionary.

    Read the article

  • .live event doesn't work until the second click

    - by ChampionChris
    I have 2 list on a page that are linked. When I drag a li element from list 1 to list 2 the live events on list 1 don't work on the first click only second click. Below is the code that adds the li (obj) to list 2. function AddToDropBox(obj) { $(obj).children(".handle").animate({ width: "20px" }).children("strong").fadeOut(); $(obj).children("span:not(.track,.play,.handle,:has(.btn-edit))").fadeOut('fast'); $(obj).children(".play").css("margin-right", "8px"); $(obj).css({ "opacity": "0.0", "width": "284px" }).animate({ opacity: "1.0" }); if ($(".sidebar-drop-box ul").children(".admin-song").length > 0) { $(".dropTitle").fadeOut("fast"); $(".sidebar-drop-box ul.admin-song-list").css("min-height", "0"); } if (typeof SetLinks == 'function') { SetLinks(); } //CBG Changes adds media ID to hidden field //checks id there is a value in field then adds comma if(document.getElementById("ctl00_cphBody_hfRemoveMedia").value==""||document.getElementById("ctl00_cphBody_hfRemoveMedia").value==null) { document.getElementById("ctl00_cphBody_hfRemoveMedia").value=(obj).attr("mediaid"); } else { var localMediaIDs=document.getElementById("ctl00_cphBody_hfRemoveMedia").value; document.getElementById("ctl00_cphBody_hfRemoveMedia").value=localMediaIDs+", "+(obj).attr("mediaid"); } // alert("hfid: "+document.getElementById("ctl00_cphBody_hfRemoveMedia").value); //END CBG Modifications } this is one of the live() events that dont fire until the second click after the drag. This live() event is in a document.ready function(). // Live for deleting. $(".btn-del").live("click", function(e) { DeleteItem(this); $(this).removeClass("btn-del").addClass("btn-add").parents("li").removeClass("alt").addClass("removed"); var oldTxt = $(this).parents("li").find(".status").text(); $(this).parents("li").find(".status").text("Removed").attr("oldstat", oldTxt); $("#timeHolder input[type=hidden]").val(($("#timeHolder input[type=hidden]").val() * 1) - ($(this).parents("li").find(".time").attr("length") * 1)); CalculateAggregates(); isDirty = false; }); EDIT @dreaton.. Im new to jquery and javascript so thanks for the last tip... Im not sure what you mean about cache the query's. ... the delegete feature is giving me this Microsoft JScript runtime error: Object doesn't support this property or method this is the way I have the code $('#ulPlaylist').delegate('.btn-del', 'click', function (e) { DeleteItem(this); $(this).removeClass("btn-del").addClass("btn-add").parents("li").removeClass("alt").addClass("removed"); var oldTxt = $(this).parents("li").find(".status").text(); $(this).parents("li").find(".status").text("Removed").attr("oldstat", oldTxt); $("#timeHolder input[type=hidden]").val(($("#timeHolder input[type=hidden]").val() * 1) - ($(this).parents("li").find(".time").attr("length") * 1)); CalculateAggregates(); isDirty = false; });
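
    For what it is worth, "Object doesn't support this property or method" is exactly what an older jQuery (before 1.4.2, where .delegate() was added) reports, so checking which jQuery version the page loads is the first step. A sketch of delegated binding from a static ancestor, so elements dragged between the lists respond on the first click without rebinding (selectors are guesses based on the markup in the question):

      $(document).ready(function () {
          // jQuery 1.7+: one handler on a static ancestor (document is the safe
          // fallback; the lists' common container is cheaper) covers current
          // and future li elements.
          $(document).on('click', '.btn-del', function (e) {
              DeleteItem(this);
              // ... the rest of the existing .btn-del handler body ...
          });

          // On jQuery 1.4.2 - 1.6 the equivalent is .delegate():
          // $('#ulPlaylist').delegate('.btn-del', 'click', function (e) { ... });
      });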

    Read the article

  • Running out of memory... How?

    - by maxdj
    I'm attempting to write a solver for a particular puzzle. It tries to find a solution by trying every possible move one at a time until it finds a solution. The first version tried to solve it depth-first by continually trying moves until it failed, then backtracking, but this turned out to be too slow. I have rewritten it to be breadth-first using a queue structure, but I'm having problems with memory management. Here are the relevant parts: int main(int argc, char *argv[]) { ... int solved = 0; do { solved = solver(queue); } while (!solved && !pblListIsEmpty(queue)); ... } int solver(PblList *queue) { state_t *state = (state_t *) pblListPoll(queue); if (is_solution(state->pucks)) { print_solution(state); return 1; } state_t *state_cp; puck new_location; for (int p = 0; p < puck_count; p++) { for (dir i = NORTH; i <= WEST; i++) { if (!rules(state->pucks, p, i)) continue; new_location = in_dir(state->pucks, p, i); if (new_location.x != -1) { state_cp = (state_t *) malloc(sizeof(state_t)); state_cp->move.from = state->pucks[p]; state_cp->move.direction = i; state_cp->prev = state; state_cp->pucks = (puck *) malloc (puck_count * sizeof(puck)); memcpy(state_cp->pucks, state->pucks, puck_count * sizeof(puck)); /*CRASH*/ state_cp->pucks[p] = new_location; pblListPush(queue, state_cp); } } } return 0; } When I run it I get the error: ice(90175) malloc: *** mmap(size=2097152) failed (error code=12) *** error: can't allocate region *** set a breakpoint in malloc_error_break to debug Bus error The error happens around iteration 93,000. From what I can tell, the error message is from malloc failing, and the bus error is from the memcpy after it. I have a hard time believing that I'm running out of memory, since each game state is only ~400 bytes. Yet that does seem to be what's happening, seeing as the activity monitor reports that it is using 3.99GB before it crashes. I'm using http://www.mission-base.com/peter/source/ for the queue structure (it's a linked list). Clearly I'm doing something dumb. Any suggestions?

    Read the article

  • What's the C strategy to "imitate" a C++ template?

    - by Andrei Ciobanu
    After reading some examples on stackoverflow, and following some of the answers for my previous questions (1), I've eventually come with a "strategy" for this. I've come to this: 1) Have a declare section in the .h file. Here I will define the data-structure, and the accesing interface. Eg.: /** * LIST DECLARATION. (DOUBLE LINKED LIST) */ #define NM_TEMPLATE_DECLARE_LIST(type) \ typedef struct nm_list_elem_##type##_s { \ type data; \ struct nm_list_elem_##type##_s *next; \ struct nm_list_elem_##type##_s *prev; \ } nm_list_elem_##type ; \ typedef struct nm_list_##type##_s { \ unsigned int size; \ nm_list_elem_##type *head; \ nm_list_elem_##type *tail; \ int (*cmp)(const type e1, const type e2); \ } nm_list_##type ; \ \ nm_list_##type *nm_list_new_##type##_(int (*cmp)(const type e1, \ const type e2)); \ \ (...other functions ...) 2) Wrap the functions in the interface inside MACROS: /** * LIST INTERFACE */ #define nm_list(type) \ nm_list_##type #define nm_list_elem(type) \ nm_list_elem_##type #define nm_list_new(type,cmp) \ nm_list_new_##type##_(cmp) #define nm_list_delete(type, list, dst) \ nm_list_delete_##type##_(list, dst) #define nm_list_ins_next(type,list, elem, data) \ nm_list_ins_next_##type##_(list, elem, data) (...others...) 3) Implement the functions: /** * LIST FUNCTION DEFINITIONS */ #define NM_TEMPLATE_DEFINE_LIST(type) \ nm_list_##type *nm_list_new_##type##_(int (*cmp)(const type e1, \ const type e2)) \ {\ nm_list_##type *list = NULL; \ list = nm_alloc(sizeof(*list)); \ list->size = 0; \ list->head = NULL; \ list->tail = NULL; \ list->cmp = cmp; \ }\ void nm_list_delete_##type##_(nm_list_##type *list, \ void (*destructor)(nm_list_elem_##type elem)) \ { \ type data; \ while(nm_list_size(list)){ \ data = nm_list_rem_##type(list, tail); \ if(destructor){ \ destructor(data); \ } \ } \ nm_free(list); \ } \ (...others...) In order to use those constructs, I have to create two files (let's call them templates.c and templates.h) . In templates.h I will have to NM_TEMPLATE_DECLARE_LIST(int), NM_TEMPLATE_DECLARE_LIST(double) , while in templates.c I will need to NM_TEMPLATE_DEFINE_LIST(int) , NM_TEMPLATE_DEFINE_LIST(double) , in order to have the code behind a list of ints, doubles and so on, generated. By following this strategy I will have to keep all my "template" declarations in two files, and in the same time, I will need to include templates.h whenever I need the data structures. It's a very "centralized" solution. Do you know other strategy in order to "imitate" (at some point) templates in C++ ? Do you know a way to improve this strategy, in order to keep things in more decentralized manner, so that I won't need the two files: templates.c and templates.h ?
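
    One decentralised alternative (a sketch, not a drop-in replacement for the macros above) is the "parameterised include": keep the declarations in a header with no include guard and let every translation unit that needs a list of some type define the type macro and include the header itself, so no central templates.h/templates.c has to enumerate the instantiations:

      /* nm_list_template.h -- deliberately no include guard */
      #ifndef NM_LIST_TYPE
      #error "define NM_LIST_TYPE before including nm_list_template.h"
      #endif

      #define NM_PASTE2_(a, b)    a##b
      #define NM_PASTE2(a, b)     NM_PASTE2_(a, b)
      #define NM_PASTE3_(a, b, c) a##b##c
      #define NM_PASTE3(a, b, c)  NM_PASTE3_(a, b, c)

      #define NM_LIST_ELEM NM_PASTE2(nm_list_elem_, NM_LIST_TYPE)
      #define NM_LIST      NM_PASTE2(nm_list_, NM_LIST_TYPE)

      typedef struct NM_PASTE3(nm_list_elem_, NM_LIST_TYPE, _s) {
          NM_LIST_TYPE data;
          struct NM_PASTE3(nm_list_elem_, NM_LIST_TYPE, _s) *next, *prev;
      } NM_LIST_ELEM;

      typedef struct NM_PASTE3(nm_list_, NM_LIST_TYPE, _s) {
          unsigned int size;
          NM_LIST_ELEM *head, *tail;
          int (*cmp)(const NM_LIST_TYPE e1, const NM_LIST_TYPE e2);
      } NM_LIST;

      NM_LIST *NM_PASTE2(nm_list_new_, NM_LIST_TYPE)(
          int (*cmp)(const NM_LIST_TYPE e1, const NM_LIST_TYPE e2));

      #undef NM_LIST
      #undef NM_LIST_ELEM

      /* usage, wherever a list of int is needed:
       *     #define NM_LIST_TYPE int
       *     #include "nm_list_template.h"
       *     #undef NM_LIST_TYPE
       */

    The function bodies go in a parallel nm_list_template_impl.h written the same way and included from exactly one .c file per type, which keeps each instantiation next to the code that uses it instead of in a single templates.c.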

    Read the article

  • How do I solve "405 Method Not Allowed" for our subversion setup?

    - by macke
    We're serving our source code using VisualSVN running on Windows Server 2003. Recently, we split a portion of a project into a new project in it's own repository, and then linked it back to the original project using svn:externals. Since then, we've been having issues when we try to commit files with Subclipse. The error we're getting is: svn: Commit failed (details follow): svn: PROPFIND of '/svn': 405 Method Not Allowed (https://svn.ourserver.com) Googling for a while didn't really help, our config seems to be correct. It should also be noted that we've been running this server for a while no without these problems and apart from splitting the project into two repositories, no changes have been made to the server (ie, config files are the same). It should also be noted that these errors only appear when we try to check in multiple files at once. If we check in one file at a time there are no errors. Also, it only appears in Subclipse as far as we know right now, Versions.app (OS X) seems to work fine so that is our current workaround. So, the questions is how do I analyze the error to find the cause and subsequently fix it? I'm by no means a svn guru and right now I'm clueless. EDIT: It seems we can check in multiple files in the same package, but not files from multiple packages. Also, when I "split" the project into two repositories, I imported the original repository with a new name. I did not do a dump and then import that dump. Could that be the source of our issues, and if so, how would I solve that? EDIT: After some jerking around it seems as though it is indeed related to when checking in files in different repositories. If I try to do a single commit in both Repo A and Repo B (referenced by svn:externals) at the same time, I get the error. Versions.app handles this correctly, but I guess it might just be doing two commits, not a single one. Subclipse fails miserably. For now, we simply do multiple commits, one for Repo A and one for Repo B, that works just fine. If anyone smarter than me could fill in the details why this is happening, whether or not this kind of setup is stupid etc, please go right ahead.

    Read the article

  • Loading the last related record instantly for multiple parent records using Entity Framework

    - by Guillaume Schuermans
    Does anyone know a good approach using Entity Framework for the problem described below? I am trying for our next release to come up with a performant way to show the placed orders for the logged on customer. Of course paging is always a good technique to use when a lot of data is available I would like to see an answer without any paging techniques. Here's the story: a customer places an order which gets an orderstatus = PENDING. Depending on some strategy we move that order up the chain in order to get it APPROVED. Every change of status is logged so we can see a trace for statusses and maybe even an extra line of comment per status which can provide some extra valuable information to whoever sees this order in an interface. So an Order is linked to a Customer. One order can have multiple orderstatusses stored in OrderStatusHistory. In my testscenario I am using a customer which has 100+ Orders each with about 5 records in the OrderStatusHistory-table. I would for now like to see all orders in one page not using paging where for each Order I show the last relevant Status and the extra comment (if there is any for this last status; both fields coming from OrderStatusHistory; the record with the highest Id for the given OrderId). There are multiple scenarios I have tried, but I would like to see any potential other solutions or comments on the things I have already tried. Trying to do Include() when getting Orders but this still results in multiple queries launched on the database. Each order triggers an extra query to the database to get all orderstatusses in the history table. So all statusses are queried here instead of just returning the last relevant one, plus 100 extra queries are launched for 100 orders. You can imagine the problem when there are 100000+ orders in the database. Having 2 computed columns on the database: LastStatus, LastStatusInformation and a regular Linq-Query which gets those columns which are available through the Entity-model. The problem with this approach is the fact that those computed columns are determined using a scalar function which can not be changed without removing the formula from the computed column, etc... In the end I am very familiar with SQL and Stored procedures, but since the rest of the data-layer uses Entity Framework I would like to stick to it as long as possible, even though I have my doubts about performance. Using the SQL approach I would write something like this: WITH cte (RN, OrderId, [Status], Information) AS ( SELECT ROW_NUMBER() OVER (PARTITION BY OrderId ORDER BY Id DESC), OrderId, [Status], Information FROM OrderStatus ) SELECT o.Id, cte.[Status], cte.Information AS StatusInformation, o.* FROM [Order] o INNER JOIN cte ON o.Id = cte.OrderId AND cte.RN = 1 WHERE CustomerId = @CustomerId ORDER BY 1 DESC; which returns all orders for the customer with the statusinformation provided by the Common Table Expression. Does anyone know a good approach using Entity Framework?
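
    For reference, the ROW_NUMBER() query above has a fairly direct Entity Framework counterpart: project each order together with only its newest history row, so the whole page is one round trip instead of one query per order. A sketch, assuming navigation properties named Orders and OrderStatusHistories (adjust to the real model names):

      var orders = context.Orders
          .Where(o => o.CustomerId == customerId)
          .Select(o => new
          {
              Order = o,
              // Translated into the store query (a correlated subquery /
              // OUTER APPLY), not executed once per order in memory.
              LastStatus = o.OrderStatusHistories
                            .OrderByDescending(h => h.Id)
                            .FirstOrDefault()
          })
          .OrderByDescending(x => x.Order.Id)
          .ToList();

    FirstOrDefault() (unlike First()) is translatable inside a projection in EF, which is what makes this work without computed columns or stored procedures.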

    Read the article

  • Linking two models in a multi-model form

    - by Elliot
    Hey Guys, I have a nested multimodel form right now, using Users and Profiles. Users has_one profile, and Profile belongs_to Users. When the form is submitted, a new user is created, and a new profile is created, but they are not linked (this is the first obvious issue). The user's model has a profile_id row, and the profile's model has a user_id row. Here is the code for the form: <%= form_for(@user, :url => teams_path) do |f| %> <p><%= f.label :email %><br /> <%= f.text_field :email %></p> <p><%= f.label :password %><br /> <%= f.password_field :password %></p> <p><%= f.label :password_confirmation %><br /> <%= f.password_field :password_confirmation %></p> <%= f.hidden_field :role_id, :value => @role.id %></p> <%= f.hidden_field :company_id, :value => current_user.company_id %></p> <%= fields_for @user.profile do |profile_fields| %> <div class="field"> <%= profile_fields.label :first_name %><br /> <%= profile_fields.text_field :first_name %> </div> <div class="field"> <%= profile_fields.label :last_name %><br /> <%= profile_fields.text_field :last_name %> </div> <% end %> <p><%= f.submit "Sign up" %></p> <% end %> A second issue, is even though the username, and password are successfully created through the form for the user model, the hidden fields (role_id & company_id - which are also links to other models) are not created (even though they are part of the model) - the values are successfully shown in the HTML for those fields however. Any help would be great!
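
    A sketch of the usual Rails wiring for this (model names from the question; accepts_nested_attributes_for assumed available, i.e. Rails 2.3 or later): let the association itself carry the link through profiles.user_id, so the users.profile_id column is not needed, and render the profile fields through the form builder so they come back nested inside the user params:

      class User < ActiveRecord::Base
        has_one :profile
        accepts_nested_attributes_for :profile
      end

      # in the controller action that renders the form
      def new
        @user = User.new
        @user.build_profile   # gives fields_for an object to render
      end

    In the view, change fields_for @user.profile to f.fields_for :profile; the profile inputs then post as user[profile_attributes][...] and saving the new User builds and links both records in one go. If the hidden role_id and company_id values still do not persist, check whether attr_accessible (or attr_protected) on User is filtering them out of mass assignment.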

    Read the article

  • L-Soft LISTSERV TCPGUI Interface for PHP Creation

    - by poolnoodl
    I'm trying to use LISTSERV's "API" in PHP. L-Soft calls this TCPGUI, and essentially, you can request data like over Telnet. To do this, I'm using PHP's TCP socket functions. I've seen this done in other languages but can't quite convert it to PHP. I can connect, I can change set ASCII or BINARY mode. But I can never quite craft the header packet the way I need to authenticate, so I'm thinking I'm messing up my conversion. C: http://www.lsoft.com/manuals/16.0/htmlhelp/advanced%20topics/TCPGUI.html#2334328 $origin = '[email protected]'; $pwd = 'password'; $host = "example.com"; $port = 2306; $email = "[email protected]"; $list = "mailinglist"; $command = "Query $list FOR $email"; $fp = stream_socket_client("tcp://$host:$port", $errno, $errstr, 30); $cmd = $command . " PW=" . $pwd; $len = strlen($cmd); $orglen = strlen($origin); $n = $len + $orglen + 1; $headerPacket[0] = "1"; $headerPacket[1] = "B"; $headerPacket[2] = "\r"; $headerPacket[3] = "\n"; $headerPacket[4] = ord($n / 256); $headerPacket[5] = ord($n + 255); $headerPacket[6] = ord($orglen); for ($i = 0; $i < $orglen; $i++) { $headerPacket[$i + 7] = ord($origin[$i]); } for ($i = 0; $i < $len; $i++) { $cmdPacket[$i] = ord($cmd[$i]); } fwrite($fp, implode($headerPacket)); while (!feof($fp)) { echo fgets($fp, 1024); } Any thoughts on where I'm going wrong? I'd much appreciate it if anyone could point me toward some code to do this, days of googling and searching here on SO has only lead me to examples in other languages. Of course, if you know C (or Java or Perl as linked below in my comment to bypass the spam filter), PHP, and socket programming fairly well, you could probably rewrite the whole of the code in an hour, maybe a few minutes. You'd have my eternal thanks for that.

    Read the article

  • How to interpret kernel panics?

    - by Owen
    Hi all, I'm new to linux kernel and could barely understand how to debug kernel panics. I have this error below and I don't know where in the C code should I start checking. I was thinking maybe I could echo what functions are being called so I could check where in the code is this null pointer dereferenced. What print function should I use ? How do you interpret the error message below? Unable to handle kernel NULL pointer dereference at virtual address 0000000d pgd = c7bdc000 [0000000d] *pgd=4785f031, *pte=00000000, *ppte=00000000 Internal error: Oops: 17 [#1] PREEMPT Modules linked in: bcm5892_secdom_fw(P) bcm5892_lcd snd_bcm5892 msr bcm5892_sci bcm589x_ohci_p12 bcm5892_skeypad hx_decoder(P) pinnacle hx_memalloc(P) bcm_udc_dwc scsi_mod g_serial sd_mod usb_storage CPU: 0 Tainted: P (2.6.27.39-WR3.0.2ax_standard #1) PC is at __kmalloc+0x70/0xdc LR is at __kmalloc+0x48/0xdc pc : [c0098cc8] lr : [c0098ca0] psr: 20000093 sp : c7a9fd50 ip : c03a4378 fp : c7a9fd7c r10: bf0708b4 r9 : c7a9e000 r8 : 00000040 r7 : bf06d03c r6 : 00000020 r5 : a0000093 r4 : 0000000d r3 : 00000000 r2 : 00000094 r1 : 00000020 r0 : c03a4378 Flags: nzCv IRQs off FIQs on Mode SVC_32 ISA ARM Segment user Control: 00c5387d Table: 47bdc008 DAC: 00000015 Process sh (pid: 1088, stack limit = 0xc7a9e260) Stack: (0xc7a9fd50 to 0xc7aa0000) fd40: c7a6a1d0 00000020 c7a9fd7c c7ba8fc0 fd60: 00000040 c7a6a1d0 00000020 c71598c0 c7a9fd9c c7a9fd80 bf06d03c c0098c64 fd80: c71598c0 00000003 c7a6a1d0 bf06c83c c7a9fdbc c7a9fda0 bf06d098 bf06d008 fda0: c7159880 00000000 c7a6a2d8 c7159898 c7a9fde4 c7a9fdc0 bf06d130 bf06d078 fdc0: c79ca000 c7159880 00000000 00000000 c7afbc00 c7a9e000 c7a9fe0c c7a9fde8 fde0: bf06d4b4 bf06d0f0 00000000 c79fd280 00000000 0f700000 c7a9e000 00000241 fe00: c7a9fe3c c7a9fe10 c01c37b4 bf06d300 00000000 c7afbc00 00000000 00000000 fe20: c79cba84 c7463c78 c79fd280 c7473b00 c7a9fe6c c7a9fe40 c00a184c c01c35e4 fe40: 00000000 c7bb0005 c7a9fe64 c79fd280 c7463c78 00000000 c00a1640 c785e380 fe60: c7a9fe94 c7a9fe70 c009c438 c00a164c c79fd280 c7a9fed8 c7a9fed8 00000003 fe80: 00000242 00000000 c7a9feb4 c7a9fe98 c009c614 c009c2a4 00000000 c7a9fed8 fea0: c7a9fed8 00000000 c7a9ff64 c7a9feb8 c00aa6bc c009c5e8 00000242 000001b6 fec0: 000001b6 00000241 00000022 00000000 00000000 c7a9fee0 c785e380 c7473b00 fee0: d8666b0d 00000006 c7bb0005 00000300 00000000 00000000 00000001 40002000 ff00: c7a9ff70 c79b10a0 c79b10a0 00005402 00000003 c78d69c0 ffffff9c 00000242 ff20: 000001b6 c79fd280 c7a9ff64 c7a9ff38 c785e380 c7473b00 00000000 00000241 ff40: 000001b6 ffffff9c 00000003 c7bb0000 c7a9e000 00000000 c7a9ff94 c7a9ff68 ff60: c009c128 c00aa380 4d18b5f0 08000000 00000000 00071214 0007128c 00071214 ff80: 00000005 c0027ee4 c7a9ffa4 c7a9ff98 c009c274 c009c0d8 00000000 c7a9ffa8 ffa0: c0027d40 c009c25c 00071214 0007128c 0007128c 00000241 000001b6 00000000 ffc0: 00071214 0007128c 00071214 00000005 00073580 00000003 000713e0 400010d0 ffe0: 00000001 bef0c7b8 000269cc 4d214fec 60000010 0007128c 00000000 00000000 Backtrace: [] (__kmalloc+0x0/0xdc) from [] (gs_alloc_req+0x40/0x70 [g_serial]) r8:c71598c0 r7:00000020 r6:c7a6a1d0 r5:00000040 r4:c7ba8fc0 [] (gs_alloc_req+0x0/0x70 [g_serial]) from [] (gs_alloc_requests+0x2c/0x78 [g_serial]) r7:bf06c83c r6:c7a6a1d0 r5:00000003 r4:c71598c0 [] (gs_alloc_requests+0x0/0x78 [g_serial]) from [] (gs_start_io+0x4c/0xac [g_serial]) r7:c7159898 r6:c7a6a2d8 r5:00000000 r4:c7159880 [] (gs_start_io+0x0/0xac [g_serial]) from [] (gs_open+0x1c0/0x224 [g_serial]) r9:c7a9e000 r8:c7afbc00 r7:00000000 r6:00000000 r5:c7159880 
r4:c79ca000 [] (gs_open+0x0/0x224 [g_serial]) from [] (tty_open+0x1dc/0x314) [] (tty_open+0x0/0x314) from [] (chrdev_open+0x20c/0x22c) [] (chrdev_open+0x0/0x22c) from [] (__dentry_open+0x1a0/0x2b8) r8:c785e380 r7:c00a1640 r6:00000000 r5:c7463c78 r4:c79fd280 [] (__dentry_open+0x0/0x2b8) from [] (nameidata_to_filp+0x38/0x50) [] (nameidata_to_filp+0x0/0x50) from [] (do_filp_open+0x348/0x6f4) r4:00000000 [] (do_filp_open+0x0/0x6f4) from [] (do_sys_open+0x5c/0x170) [] (do_sys_open+0x0/0x170) from [] (sys_open+0x24/0x28) r8:c0027ee4 r7:00000005 r6:00071214 r5:0007128c r4:00071214 [] (sys_open+0x0/0x28) from [] (ret_fast_syscall+0x0/0x2c) Code: e59c4080 e59c8090 e3540000 159c308c (17943103) ---[ end trace be196e7cee3cb1c9 ]--- note: sh[1088] exited with preempt_count 2 process '-/bin/sh' (pid 1088) exited. Scheduling for restart. Welcome to Wind River Linux
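
    As a general technique (not specific to this trace): the line "PC is at __kmalloc+0x70/0xdc", plus each symbol+offset in the backtrace, can be turned back into a source line as long as you still have the matching vmlinux built with debug info. A sketch, assuming an ARM cross toolchain prefixed arm-linux-gnueabi-:

      # map the faulting PC (or any backtrace address) to file:line
      arm-linux-gnueabi-addr2line -f -e vmlinux c0098cc8

      # or interactively, which also shows the surrounding source
      arm-linux-gnueabi-gdb vmlinux
      (gdb) list *(__kmalloc+0x70)
      # symbols from the g_serial module need add-symbol-file with the
      # module's load address before they resolve the same way

    For tracing which functions run at run time, printk() (plus dump_stack() when you want a backtrace without an oops) is the kernel-side equivalent of the echo you describe; the messages end up in dmesg.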

    Read the article

  • select value of td and download content of selected tds

    - by user1272145
    I have this table <table class="results" id="summary_results"> <tr> <td>select all</td> <td>name</td> <td>id</td> <td>address</td> <td>url</td> </tr> <tr> <td> <input type="checkbox"> </td> <td>john doe</td> <td>1</td> <td>33.85 some address</td> <td>http://www.domain.com</td> </tr> <tr> <td> <input type="checkbox"> </td> <td>jane doe</td> <td>2</td> <td>34.85 some address</td> <td>http://www.domain2.com</td> </tr> <tr> <td> <input type="checkbox"> </td> <td>sam</td> <td>3</td> <td>33.86 some address</td> <td>http://www.domain3.com</td> </tr> </table> I would like to select all the rows then download the content of the urls knowing that each url is linked to the id. for example the first url will be www.domain.com?id=1&report=report
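
    A sketch of the client-side half (jQuery assumed, since the question shows no script): walk the checked rows, read the id and url cells, and build each report URL. Fetching the content of third-party domains has to happen server-side because of the same-origin policy, so the collected URLs would normally be posted to one of your own scripts:

      var reportUrls = [];
      $('#summary_results tr').has('input:checkbox:checked').each(function () {
          var cells = $(this).children('td');
          var id   = $.trim(cells.eq(2).text());   // "id" column
          var base = $.trim(cells.eq(4).text());   // "url" column
          reportUrls.push(base + '?id=' + encodeURIComponent(id) + '&report=report');
      });
      // hand the list to a server-side downloader, e.g. (hypothetical endpoint):
      // $.post('download.php', { urls: reportUrls });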

    Read the article

  • UIViewController not released when view is dismissed

    - by Nelson Ko
    I have a main view, mainWindow, which presents a couple of buttons. Both buttons create a new UIViewController (mapViewController), but one will start a game and the other will resume it. Both buttons are linked via StoryBoard to the same View. They are segued to modal views as I'm not using the NavigationController. So in a typical game, if a person starts a game, but then goes back to the main menu, he triggers: [self dismissViewControllerAnimated:YES completion:nil ]; to return to the main menu. I would assume the view controller is released at this point. The user resumes the game with the second button by opening another instance of mapViewController. What is happening, tho, is some touch events will trigger methods on the original instance (and write status updates to them - therefore invisible to the current view). When I put a breakpoint in the mapViewController code, I can see the instance will be one or the other (one of which should be released). I have tried putting a delegate to the mainWindow clearing the view: [self.delegate clearMapView]; where in the mainWindow - (void) clearMapView{ gameWindow = nil; } I have also tried self.view=nil; in the mapViewController. The mapViewController code contains MVC code, where the model is static. I wonder if this may prevent ARC from releasing the view. The model.m contains: static CanShieldModel *sharedInstance; + (CanShieldModel *) sharedModel { @synchronized(self) { if (!sharedInstance) sharedInstance = [[CanShieldModel alloc] init]; return sharedInstance; } return sharedInstance; } Another post which may have a lead, but so far not successful, is UIViewController not being released when popped I have in ViewDidLoad: // checks to see if app goes inactive - saves. [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(resignActive) name:UIApplicationWillResignActiveNotification object:nil]; with the corresponding in ViewDidUnload: [[NSNotificationCenter defaultCenter] removeObserver:self name:UIApplicationWillResignActiveNotification object:nil]; Does anyone have any suggestions? EDIT: - (void) prepareForSegue:(UIStoryboardSegue *)segue sender:(id)sender{ NSString *identifier = segue.identifier; if ([identifier isEqualToString: @"Start Game"]){ gameWindow = (ViewController *)[segue destinationViewController]; gameWindow.newgame=-1; gameWindow.delegate = self; } else if ([identifier isEqualToString: @"Resume Game"]){ gameWindow = (ViewController *)[segue destinationViewController]; gameWindow.newgame=0; gameWindow.delegate = self;

    Read the article

  • MATLAB arbitrary code execution

    - by aristotle2600
    I am writing an automatic grader program under linux. There are several graders written in MATLAB, so I want to tie them all together and let students run a program to do an assignment, and have them choose the assignment. I am using a C++ main program, which then has mcc-compiled MATLAB libraries linked to it. Specifically, my program reads a config file for the names of the various matlab programs, and other information. It then uses that information to present choices to the student. So, If an assignment changes, is added or removed, then all you have to do is change the config file. The idea is that next, the program invokes the correct matlab library that has been compiled with mcc. But, that means that the libraries have to be recompiled if a grader gets changed. Worse, the whole program must be recompiled if a grader is added or removed. So, I would like one, simple, unchanging matlab library function to call the grader m-files directly. I currently have such a library, that uses eval on a string passed to it from the main program. The problem is that when I do this, apparently, mcc absorbs the grader m-code into itself; changing the grader m code after compilation has no effect. I would like for this not to happen. It was brought to my attention that Mathworks may not want me to be able to do this, since it could bypass matlab entirely. That is not my intention, and I would be happy with a solution that requires a full matlab install. My possible solutions are to use a mex file for the main program, or have the main program call a mcc library, which then calls a mex file, which then calls the proper grader. The reason I am hesitant about the first solution is that I'm not sure how many changes I would have to make to my code to make it work; my code is C++, not C, which I think makes things more complicated. The 2nd solution, though, may just be more complicated and ultimately have the same problem. So, any thoughts on this situation? How should I do this?

    Read the article

  • Combining FileStream and MemoryStream to avoid disk accesses/paging while receiving gigabytes of data?

    - by w128
    I'm receiving a file as a stream of byte[] data packets (total size isn't known in advance) that I need to store somewhere before processing it immediately after it's been received (I can't do the processing on the fly). Total received file size can vary from as small as 10 KB to over 4 GB. One option for storing the received data is to use a MemoryStream, i.e. a sequence of MemoryStream.Write(bufferReceived, 0, count) calls to store the received packets. This is very simple, but obviously will result in out of memory exception for large files. An alternative option is to use a FileStream, i.e. FileStream.Write(bufferReceived, 0, count). This way, no out of memory exceptions will occur, but what I'm unsure about is bad performance due to disk writes (which I don't want to occur as long as plenty of memory is still available) - I'd like to avoid disk access as much as possible, but I don't know of a way to control this. I did some testing and most of the time, there seems to be little performance difference between say 10 000 consecutive calls of MemoryStream.Write() vs FileStream.Write(), but a lot seems to depend on buffer size and the total amount of data in question (i.e the number of writes). Obviously, MemoryStream size reallocation is also a factor. Does it make sense to use a combination of MemoryStream and FileStream, i.e. write to memory stream by default, but once the total amount of data received is over e.g. 500 MB, write it to FileStream; then, read in chunks from both streams for processing the received data (first process 500 MB from the MemoryStream, dispose it, then read from FileStream)? Another solution is to use a custom memory stream implementation that doesn't require continuous address space for internal array allocation (i.e. a linked list of memory streams); this way, at least on 64-bit environments, out of memory exceptions should no longer be an issue. Con: extra work, more room for mistakes. So how do FileStream vs MemoryStream read/writes behave in terms of disk access and memory caching, i.e. data size/performance balance. I would expect that as long as enough RAM is available, FileStream would internally read/write from memory (cache) anyway, and virtual memory would take care of the rest. But I don't know how often FileStream will explicitly access a disk when being written to. Any help would be appreciated.
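
    A sketch of the spill-over idea from the middle paragraph (the 500 MB threshold, the temp-file location and .NET 4's Stream.CopyTo are assumptions): buffer in a MemoryStream until the threshold is crossed, flush the buffered bytes into a FileStream once, and append there from then on, so small transfers never touch the disk and large ones never exhaust the address space:

      using System;
      using System.IO;

      public sealed class SpillStreamWriter : IDisposable
      {
          private const long Threshold = 500L * 1024 * 1024;   // assumed cut-over point
          private MemoryStream _memory = new MemoryStream();
          private FileStream _file;

          public void Write(byte[] buffer, int offset, int count)
          {
              if (_file == null && _memory.Length + count > Threshold)
              {
                  _file = new FileStream(Path.GetTempFileName(), FileMode.Create,
                                         FileAccess.ReadWrite, FileShare.None, 1 << 16);
                  _memory.Position = 0;
                  _memory.CopyTo(_file);    // one sequential flush of everything so far
                  _memory.Dispose();
                  _memory = null;
              }

              if (_file != null) _file.Write(buffer, offset, count);
              else _memory.Write(buffer, offset, count);
          }

          // Hands back a readable stream positioned at the start for processing.
          public Stream OpenForReading()
          {
              Stream s = _file != null ? (Stream)_file : _memory;
              s.Position = 0;
              return s;
          }

          public void Dispose()
          {
              if (_memory != null) _memory.Dispose();
              if (_file != null) _file.Dispose();
          }
      }

    On the caching question: FileStream writes land in the OS file cache rather than going straight to the platters, so a temp file that is written and deleted quickly may never be physically written unless memory pressure forces a write-back; only Flush(true) or FileOptions.WriteThrough forces it explicitly.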

    Read the article

  • mysql - combining columns and tables

    - by Phil Jackson
    Hi, I'm not much of a SQL man so I'm seeking help for this one. I have a site where I have a database for all accounts and whatnot, and another for storing actions that the user has done on the site. Each user has their own table but I want to combine the data of each user group ( all users that are "linked together" ) and order that data in the time the actions took place. Heres what I have; <?php $query = "SELECT `TALKING_TO` FROM `nnn_instant_messaging` WHERE `AUTHOR` = '" . DISPLAY_NAME . "' AND `TALKING_TO` != ''"; $query = mysql_query( $query, $CON ) or die( "_error_ " . mysql_error()); if( mysql_num_rows( $query ) != 0 ) { $table_str = ""; $select_ref_clause = "( "; $select_time_stamp_clause = "( "; while( $row = mysql_fetch_array( $query ) ) { $table_str .= "`actvbiz_networks`.`" . $row['TALKING_TO'] . "`, "; $select_ref_clause .= "`actvbiz_networks`.`" . $row['TALKING_TO'] . ".REF`, "; $select_time_stamp_clause .= "`actvbiz_networks`.`" . $row['TALKING_TO'] . ".TIME_STAMP`, "; } $table_str = $table_str . "`actvbiz_networks`.`" . DISPLAY_NAME . "`"; $select_ref_clause = substr($select_ref_clause, 0, -2) . ") AS `REF`, "; $select_time_stamp_clause = substr($select_time_stamp_clause, 0, -2) . " ) AS `TIME_STAMP`"; }else{ $table_str = "`actvbiz_networks`.`" . DISPLAY_NAME . "`"; $select_ref_clause = "`REF`, "; $select_time_stamp_clause = "`TIME_STAMP`"; } $where_clause = $select_ref_clause . $select_time_stamp_clause; $query = "SELECT " . $where_clause . " FROM " . $table_str . " ORDER BY TIME_STAMP"; die($query); $query = mysql_query( $query, $CON ) or die( "_error_ " . mysql_error()); if( mysql_num_rows( $query ) != 0 ) { }else{ ?> <p>Currently no actions have taken place in your network.</p> <?php } ?> The code above returns the sql statement: SELECT ( `actvbiz_networks`.`john_doe.REF`, `actvbiz_networks`.`Emmalene_Jackson.REF`) AS `REF`, ( `actvbiz_networks`.`john_doe.TIME_STAMP`, `actvbiz_networks`.`Emmalene_Jackson.TIME_STAMP` ) AS `TIME_STAMP` FROM `actvbiz_networks`.`john_doe`, `actvbiz_networks`.`Emmalene_Jackson`, `actvbiz_networks`.`act_web_designs` ORDER BY TIME_STAMP I really am learning on my feet with SQL. Its not the PHP I have a problem with ( I can quite happly code away with PHP ) I'ts just help with the SQL statement. Any help much appreciated, REgards, Phil
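
    The generated statement fails because a parenthesised list of columns cannot be aliased as if it were one column; what the per-user tables need is to be stacked with UNION ALL, one SELECT per table, and sorted once at the end. A sketch built from the same table names the code above produces (each branch is what the PHP loop would emit for one linked user):

      SELECT 'john_doe' AS who, REF, TIME_STAMP
        FROM actvbiz_networks.john_doe
      UNION ALL
      SELECT 'Emmalene_Jackson', REF, TIME_STAMP
        FROM actvbiz_networks.Emmalene_Jackson
      UNION ALL
      SELECT 'act_web_designs', REF, TIME_STAMP
        FROM actvbiz_networks.act_web_designs
      ORDER BY TIME_STAMP DESC;

    So in the PHP loop, build one "SELECT '<name>' AS who, REF, TIME_STAMP FROM actvbiz_networks.<name>" string per user and implode them with " UNION ALL ", then append the ORDER BY. (Longer term, a single actions table with a user column would remove the need to generate SQL per user at all.)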

    Read the article

  • What can cause System.Move to occasionally give wrong results?

    - by Fredrik Loftheim
    The last few days we have had some strange problems with our database components developed by a third party. There has been no changes to these components for months. The code that HAS changed the last few days is our own code and we have also updated our gui-components developed by another third party. After debugging I have found that a call to System.Move in one of the database component procedures occationaly gives wrong results! Please take a look at the code below from the database components and read my comments. How can this inconsistent behaviour happen? Can anyone give me an idea of how to procede to find the cause of this inconsistent behaviour? NB! I dont think there is anything wrong with THIS code, it is only shown to explain the problem "symptoms". My guess is that there is some sort of memory corruption or something, caused by our code or the updated gui-component-code. Edit: Take a look at the blogpost linked below. It seems that it could be related to my problem. At least as I read it it confirms that System.Move can give wrong results: http://blog.excastle.com/2007/08/28/delphi-bug-of-the-day-fpu-stack-leak/ Procedure InternalDescribe; var cbufl: sb4; //sb4=LongInt cbuf: array[0..30] of char; cbufp: PChar; //.... begin //..Some code repeat //...Some code to initialize cbufp and cbufl //On the 15. iteration the values immediately Before Move are always these: //cbufp = 'STDPRODUCTSTOREDELEMENTSCOUNT' //cbuf = ('S', 'T', 'A', 'T', 'U', 'S', #0, 'E', 'V', 'A', 'R', 'R', 'E', 'C', 'I', 'D', #0, 'D', 'U', 'C', 'T', 'I', 'D', #0, #0, #0, #0, #0, #0, #0, #0) //cbufl = 29 Move(cbufp^, cbuf, cbufl); //Values immediately After Move should then be: //cbuf = ('S', 'T', 'D', 'P', 'R', 'O', 'D', 'U', 'C', 'T', 'S', 'T', 'O', 'R', 'E', 'D', 'E', 'L', 'E', 'M', 'E', 'N', 'T', 'S', 'C', 'O', 'U', 'N', 'T', #0, #0) //But sometimes this Move results in this value( 1 in 5..15 times): //cbuf = ('S', 'T', 'D', 'P', 'R', 'O', 'D', 'U', 'C', 'T', 'S', 'T', 'O', 'R', 'E', 'D', #0, #0, #0, #0, #0, 'N', 'T', 'S', 'C', 'O', 'U', 'N', 'T', #0, #0) } until SomeCondition; //...Some more code end;

    Read the article

  • CultureManager issue

    - by Serge
    I have a bug I don't understand. While the following works fine: Resources.Classes.AFieldFormula.DirectFieldFormula this one throws an exception: new ResourceManager(typeof(Resources.Classes.AFieldFormula)).GetString("DirectFieldFormula"); Could not find any resources appropriate for the specified culture or the neutral culture. Make sure \"Resources.Classes.AFieldFormula.resources\" was correctly embedded or linked into assembly \"MygLogWeb\" at compile time, or that all the satellite assemblies required are loadable and fully signed. How comes? Resource designer.cs file: //------------------------------------------------------------------------------ // <auto-generated> // This code was generated by a tool. // Runtime Version:4.0.30319.18408 // // Changes to this file may cause incorrect behavior and will be lost if // the code is regenerated. // </auto-generated> //------------------------------------------------------------------------------ namespace Resources.Classes { using System; /// <summary> /// A strongly-typed resource class, for looking up localized strings, etc. /// </summary> // This class was auto-generated by the StronglyTypedResourceBuilder // class via a tool like ResGen or Visual Studio. // To add or remove a member, edit your .ResX file then rerun ResGen // with the /str option, or rebuild your VS project. [global::System.CodeDom.Compiler.GeneratedCodeAttribute("System.Resources.Tools.StronglyTypedResourceBuilder", "4.0.0.0")] [global::System.Diagnostics.DebuggerNonUserCodeAttribute()] [global::System.Runtime.CompilerServices.CompilerGeneratedAttribute()] public class AFieldFormula { private static global::System.Resources.ResourceManager resourceMan; private static global::System.Globalization.CultureInfo resourceCulture; [global::System.Diagnostics.CodeAnalysis.SuppressMessageAttribute("Microsoft.Performance", "CA1811:AvoidUncalledPrivateCode")] internal AFieldFormula() { } /// <summary> /// Returns the cached ResourceManager instance used by this class. /// </summary> [global::System.ComponentModel.EditorBrowsableAttribute(global::System.ComponentModel.EditorBrowsableState.Advanced)] public static global::System.Resources.ResourceManager ResourceManager { get { if (object.ReferenceEquals(resourceMan, null)) { global::System.Resources.ResourceManager temp = new global::System.Resources.ResourceManager("MygLogWeb.Classes.AFieldFormula", typeof(AFieldFormula).Assembly); resourceMan = temp; } return resourceMan; } } /// <summary> /// Overrides the current thread's CurrentUICulture property for all /// resource lookups using this strongly typed resource class. /// </summary> [global::System.ComponentModel.EditorBrowsableAttribute(global::System.ComponentModel.EditorBrowsableState.Advanced)] public static global::System.Globalization.CultureInfo Culture { get { return resourceCulture; } set { resourceCulture = value; } } /// <summary> /// Looks up a localized string similar to Direct field. /// </summary> public static string DirectFieldFormula { get { return ResourceManager.GetString("DirectFieldFormula", resourceCulture); } } } }
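
    This looks like a base-name mismatch rather than a culture problem (said with some hedging, since only this one designer file is visible): the generated class builds its manager with the literal base name "MygLogWeb.Classes.AFieldFormula", but new ResourceManager(typeof(Resources.Classes.AFieldFormula)) derives the base name from the type's namespace, i.e. "Resources.Classes.AFieldFormula", and no resource with that name is embedded in the assembly, hence the exception. A sketch of two ways that stay consistent with the designer file:

      // 1) Construct the manager with the same base name the designer file uses.
      var rm = new System.Resources.ResourceManager(
          "MygLogWeb.Classes.AFieldFormula",
          typeof(Resources.Classes.AFieldFormula).Assembly);
      string direct = rm.GetString("DirectFieldFormula");

      // 2) Or simply reuse the manager the generated class already exposes.
      string viaDesigner = Resources.Classes.AFieldFormula
          .ResourceManager.GetString("DirectFieldFormula");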

    Read the article

  • Trying to run multiple HTTP requests in parallel, but being limited by Windows (registry)

    - by Nailuj
    I'm developing an application (winforms C# .NET 4.0) where I access a lookup functionality from a 3rd party through a simple HTTP request. I call an url with a parameter, and in return I get a small string with the result of the lookup. Simple enough. The challenge is however, that I have to do lots of these lookups (a couple of thousands), and I would like to limit the time needed. Therefore I would like to run requests in parallel (say 10-20). I use a ThreadPool to do this, and the short version of my code looks like this: public void startAsyncLookup(Action<LookupResult> returnLookupResult) { this.returnLookupResult = returnLookupResult; foreach (string number in numbersToLookup) { ThreadPool.QueueUserWorkItem(lookupNumber, number); } } public void lookupNumber(Object threadContext) { string numberToLookup = (string)threadContext; string url = @"http://some.url.com/?number=" + numberToLookup; WebClient webClient = new WebClient(); Stream responseData = webClient.OpenRead(url); LookupResult lookupResult = parseLookupResult(responseData); returnLookupResult(lookupResult); } I fill up numbersToLookup (a List<String>) from another place, call startAsyncLookup and provide it with a call-back function returnLookupResult to return each result. This works, but I found that I'm not getting the throughput I want. Initially I thought it might be the 3rd party having a poor system on their end, but I excluded this by trying to run the same code from two different machines at the same time. Each of the two took as long as one did alone, so I could rule out that one. A colleague then tipped me that this might be a limitation in Windows. I googled a bit, and found amongst others this post saying that by default Windows limits the number of simultaneous request to the same web server to 4 for HTTP 1.0 and to 2 for HTTP 1.1 (for HTTP 1.1 this is actually according to the specification (RFC2068)). The same post referred to above also provided a way to increase these limits. By adding two registry values to [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings] (MaxConnectionsPerServer and MaxConnectionsPer1_0Server), I could control this myself. So, I tried this (sat both to 20), restarted my computer, and tried to run my program again. Sadly though, it didn't seem to help any. I also kept an eye on the Resource Monitor (see screen shot) while running my batch lookup, and I noticed that my application (the one with the title blacked out) still only was using two TCP connections. So, the question is, why isn't this working? Is the post I linked to using the wrong registry values? Is this perhaps not possible to "hack" in Windows any longer (I'm on Windows 7)? Any ideas would be highly appreciated :) And just in case anyone should wonder, I have also tried with different settings for MaxThreads on ThreadPool (everyting from 10 to 100), and this didn't seem to affect my throughput at all, so the problem shouldn't be there either.
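
    For a managed client the cap normally is not the IE registry keys at all but System.Net's per-host connection limit, which defaults to 2 and matches the two TCP connections visible in Resource Monitor. A sketch of raising it in code (20 chosen only to match the thread counts above), which must run before the first request is issued:

      using System.Net;

      static void Main()
      {
          // Governs HttpWebRequest/WebClient connections per host for this process.
          ServicePointManager.DefaultConnectionLimit = 20;

          // ... start the ThreadPool lookups exactly as before ...
      }

    The same setting is available declaratively in app.config under <system.net><connectionManagement><add address="*" maxconnection="20"/></connectionManagement>, which avoids touching the code.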

    Read the article

  • MySQL search for users and their roles

    - by Jenkz
    I am re-writing the SQL which lets a user search for any other user on our site and also shows their roles. An an example, roles can be "Writer", "Editor", "Publisher". Each role links a User to a Publication. Users can take multiple roles within multiple publications. Example table setup: "users" : user_id, firstname, lastname "publications" : publication_id, name "link_writers" : user_id, publication_id "link_editors" : user_id, publication_id Current psuedo SQL: SELECT * FROM ( (SELECT user_id FROM users WHERE firstname LIKE '%Jenkz%') UNION (SELECT user_id FROM users WHERE lastname LIKE '%Jenkz%') ) AS dt JOIN (ROLES STATEMENT) AS roles ON roles.user_id = dt.user_id At the moment my roles statement is: SELECT dt2.user_id, dt2.publication_id, dt.role FROM ( (SELECT 'writer' AS role, link_writers.user_id, link_writers.publication_id FROM link_writers) UNION (SELECT 'editor' AS role, link_editors.user_id, link_editors.publication_id FROM link_editors) ) AS dt2 The reason for wrapping the roles statement in UNION clauses is that some roles are more complex and require a table join to find the publication_id and user_id. As an example "publishers" might be linked accross two tables "link_publishers": user_id, publisher_group_id "link_publisher_groups": publisher_group_id, publication_id So in that instance, the query forming part of my UNION would be: SELECT 'publisher' AS role, link_publishers.user_id, link_publisher_groups.publication_id FROM link_publishers JOIN link_publisher_groups ON lpg.group_id = lp.group_id I'm pretty confident that my table setup is good (I was warned off the one-table-for-all system when researching the layout). My problem is that there are now 100,000 rows in the users table and upto 70,000 rows in each of the link tables. Initial lookup in the users table is fast, but the joining really slows things down. How can I only join on the relevant roles? -------------------------- EDIT ---------------------------------- Explain above (open in a new window to see full resolution). The bottom bit in red, is the "WHERE firstname LIKE '%Jenkz%'" the third row searches WHERE CONCAT(firstname, ' ', lastname) LIKE '%Jenkz%'. Hence the large row count, but I think this is unavoidable, unless there is a way to put an index accross concatenated fields? The green bit at the top just shows the total rows scanned from the ROLES STATEMENT. You can then see each individual UNION clause (#6 - #12) which all show a large number of rows. Some of the indexes are normal, some are unique. It seems that MySQL isn't optimizing to use the dt.user_id as a comparison for the internal of the UNION statements. Is there any way to force this behaviour? Please note that my real setup is not publications and writers but "webmasters", "players", "teams" etc.
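
    One way to keep the link tables from being scanned in full (a sketch against the example schema above; the real webmasters/players/teams tables would substitute in) is to filter users first and join each role table only to that small result, rather than UNION-ing whole link tables and joining afterwards; with indexes on the link tables' user_id columns each branch then touches only the matching users' rows:

      SELECT u.user_id, 'writer' AS role, lw.publication_id
        FROM users u
        JOIN link_writers lw ON lw.user_id = u.user_id
       WHERE u.firstname LIKE '%Jenkz%' OR u.lastname LIKE '%Jenkz%'
      UNION ALL
      SELECT u.user_id, 'editor', le.publication_id
        FROM users u
        JOIN link_editors le ON le.user_id = u.user_id
       WHERE u.firstname LIKE '%Jenkz%' OR u.lastname LIKE '%Jenkz%'
      UNION ALL
      SELECT u.user_id, 'publisher', lpg.publication_id
        FROM users u
        JOIN link_publishers lp        ON lp.user_id = u.user_id
        JOIN link_publisher_groups lpg ON lpg.publisher_group_id = lp.publisher_group_id
       WHERE u.firstname LIKE '%Jenkz%' OR u.lastname LIKE '%Jenkz%';

    The other large cost is the leading wildcard: LIKE '%Jenkz%' can never use an index on the name columns, so if prefix matching ('Jenkz%') or a FULLTEXT index is acceptable, that change alone usually dwarfs everything else.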

    Read the article

  • How to invoke an activity of a library project from an Android app

    - by Austin
    I have an open source android code that I need to use in my android apps. It has all the source code as well as resource files, manifest files and class path. It can be compiled as a separate android apps. I have constraints for using the open source. 1. I can't change a single line of code. 2. I can't use it as a separate apps. These constraints are non negotiable. What I have done is I have compiled the open source as class library(in Eclipse: Project Properties-Android- Tick check box Is Library). This has resulted in generation of .class files(in bin) for the java files and resource files. This open source has an android activity that i want to open from my application. So I have linked the directory of these sets of class files in the source section of my java build path( in .classpath). I have declared the activity in my manifest file with proper action intent filters. Now when I am trying to call activity from my code, its not working. Cleaning and rebuilding doesn't help. However, if I build the open source project and my apps in the same workspace of eclipse and link the open source in my apps in exact same manner it works fine. I am not able to identify the difference. All settings seems to be same(all files are identical in both the cases). But only in the second case it works. I have tried it as jar file also. I have build the open source as project library and exported it into a jar file(excluding manifest file). But in that case I am getting the following error UNEXPECTED TOP-LEVEL EXCEPTION: java.lang.IllegalArgumentException: already added: .... Conversion to Dalvik format failed with error 1 This I guess is coming because the android library(2.2) has been included twice in my apps( one for building my apps & another for building the open source). I dont know how to avoid this. Cleaning the project doesn't help. What i require is to use the open source and invoking it's activities in my apps without violating the constraints. If i can use the open source as bunch of .class files then great, or else any other way will do fine. Please look into it and let me know. Thanks

    Read the article

  • Replacing div html() by echoing PHP - how to?

    - by Jared
    Hello, I have a multiple product elements that get their class and ID from PHP: $product1["codename"] = "product-1"; $product1["short"] = "Great Product 1"; $product2["codename"] = "product-2"; $product2["short"] = "Great Product 2"; <div class="leftMenuProductButton" id="'. $product1["codename"].'" >'. $product1["short"].'</div> <div class="leftMenuProductButton" id="'. $product2["codename"].'" >'. $product2["short"].'</div> These display as: <div class="leftMenuProductButton" id="product-1" > Great Product 1</div> <div class="leftMenuProductButton" id="product-2" > Great Product 2</div> In the page, I have an element that I want to replace the HTML: <div id="productPopupTop"> //Replace this content </div> Using jquery, I have tried the following: $( '.leftMenuProductButton' ).hover ( function () { var swapNAME = $(this).attr("id"); //gets the ID, #product-1, #product-2 etc. This works. $("#productPopupTop").html(' <? echo $' + swapNAME + '["short"] ?>'); //This is supposed to get something like <? echo $product-1["short"] ?> This doesn't appear to work. }, function () { //this is just here for later }); If I try to do an alert('<? echo $' + swapNAME + '["short"] ?>'); it will literally display something like <? echo $product-1["short"] ?> Please note that both the Javascript and the PHP are externally linked in a PHP file (index.php <<< (js.js, products.php) QUESTION: How do I replace the HTML() of #productPopupTop with the ["short"] of a product? If I should use Ajax, how would I code this?
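
    The snippet cannot work as written because the PHP runs once on the server, before swapNAME exists in the browser, so whatever it echoes is fixed at render time. A sketch of the simplest fix (variable names from the question; json_encode assumes PHP 5.2+): print all the shorts into a JavaScript map while the page renders, then look them up on hover:

      <?php
      $shorts = array(
          $product1["codename"] => $product1["short"],
          $product2["codename"] => $product2["short"],
      );
      ?>
      <script>
      var productShorts = <?php echo json_encode($shorts); ?>;

      $('.leftMenuProductButton').hover(function () {
          $('#productPopupTop').html(productShorts[this.id] || '');
      }, function () {
          // here for later
      });
      </script>

    AJAX only becomes necessary if the shorts are too numerous or too sensitive to embed in the page; in that case the hover handler would $.get() a small server endpoint (hypothetical, e.g. product-short.php?id=product-1) and write the response into #productPopupTop instead.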

    Read the article

  • Sorting a list of numbers with modified cost

    - by David
    First, this was one of the four problems we had to solve in a project last year and I couldn’t find a suitable algorithm so we handle in a brute force solution. Problem: The numbers are in a list that is not sorted and supports only one type of operation. The operation is defined as follows: Given a position i and a position j the operation moves the number at position i to position j without altering the relative order of the other numbers. If i j, the positions of the numbers between positions j and i - 1 increment by 1, otherwise if i < j the positions of the numbers between positions i+1 and j decreases by 1. This operation requires i steps to find a number to move and j steps to locate the position to which you want to move it. Then the number of steps required to move a number of position i to position j is i+j. We need to design an algorithm that given a list of numbers, determine the optimal (in terms of cost) sequence of moves to rearrange the sequence. Attempts: Part of our investigation was around NP-Completeness, we make it a decision problem and try to find a suitable transformation to any of the problems listed in Garey and Johnson’s book: Computers and Intractability with no results. There is also no direct reference (from our point of view) to this kind of variation in Donald E. Knuth’s book: The art of Computer Programing Vol. 3 Sorting and Searching. We also analyzed algorithms to sort linked lists but none of them gives a good idea to find de optimal sequence of movements. Note that the idea is not to find an algorithm that orders the sequence, but one to tell me the optimal sequence of movements in terms of cost that organizes the sequence, you can make a copy and sort it to analyze the final position of the elements if you want, in fact we may assume that the list contains the numbers from 1 to n, so we know where we want to put each number, we are just concerned with minimizing the total cost of the steps. We tested several greedy approaches but all of them failed, divide and conquer sorting algorithms can’t be used because they swap with no cost portions of the list and our dynamic programing approaches had to consider many cases. The brute force recursive algorithm takes all the possible combinations of movements from i to j and then again all the possible moments of the rest of the element’s, at the end it returns the sequence with less total cost that sorted the list, as you can imagine the cost of this algorithm is brutal and makes it impracticable for more than 8 elements. Our observations: n movements is not necessarily cheaper than n+1 movements (unlike swaps in arrays that are O(1)). There are basically two ways of moving one element from position i to j: one is to move it directly and the other is to move other elements around i in a way that it reaches the position j. At most you make n-1 movements (the untouched element reaches its position alone). If it is the optimal sequence of movements then you didn’t move the same element twice.

    Read the article
