Search Results

Search found 17686 results on 708 pages for 'high level'.


  • Writing to a log4net FileAppender with multiple threads performance problems

    - by Wayne
    TickZoom is a very high performance app which uses its own parallelization library and multiple O/S threads for smooth utilization of multi-core computers. The app hits a bottleneck where users need to write information to a LogAppender from separate O/S threads. The FileAppender uses the MinimalLock feature so that each thread can lock and write to the file and then release it for the next thread to write. If MinimalLock gets disabled, log4net reports errors about the file already being locked by another process (thread).

    A better way for log4net to do this would be to have a single thread that takes care of writing to the FileAppender, while any other threads simply add their messages to a queue. That way, MinimalLock could be disabled to greatly improve the performance of logging. Additionally, the application does a lot of CPU-intensive work, so using a separate thread for writing to the file would also improve performance, since the CPU never waits on the I/O to complete.

    So the question is: does log4net already offer this feature? If so, how do you enable threaded writing to a file? Is there another, more advanced appender, perhaps? If not, then since log4net is already wrapped in the platform, it would be possible to implement a separate thread and queue for this purpose in the TickZoom code. Sincerely, Wayne
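    A minimal sketch of the single-writer-thread idea described above, assuming a hand-rolled wrapper rather than any built-in log4net facility (the QueuedLogger name and the use of BlockingCollection are illustrative, not part of log4net's API):

        using System;
        using System.Collections.Concurrent;
        using System.Threading;
        using log4net;

        public sealed class QueuedLogger : IDisposable
        {
            private static readonly ILog Log = LogManager.GetLogger(typeof(QueuedLogger));
            private readonly BlockingCollection<string> queue = new BlockingCollection<string>();
            private readonly Thread writer;

            public QueuedLogger()
            {
                // Only this thread ever touches the appender, so MinimalLock
                // can be disabled without cross-thread locking errors.
                writer = new Thread(delegate()
                {
                    foreach (string message in queue.GetConsumingEnumerable())
                        Log.Info(message);
                });
                writer.IsBackground = true;
                writer.Start();
            }

            // Called from any O/S thread; enqueues and returns without
            // waiting on file I/O.
            public void Enqueue(string message)
            {
                queue.Add(message);
            }

            public void Dispose()
            {
                queue.CompleteAdding();
                writer.Join();
            }
        }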

  • Compiler optimization causing performance to slow down

    - by aJ
    I have a strange problem with the following piece of code:

        template<class index, class policy>
        inline int CBase<index, policy>::func(const A& test_in, int* srcPtr, int* dstPtr)
        {
            int width = test_in.width();
            int height = test_in.height();
            double d = 0.0; // here is the problem
            for (int y = 0; y < height; y++)
            {
                // Pointer initializations
                // multiplication involving y
                // ex: int z = someBigNumber*y + someOtherBigNumber;
                for (int x = 0; x < width; x++)
                {
                    // multiplication involving x
                    // ex: int z = someBigNumber*x + someOtherBigNumber;
                    if (someCondition)
                    {
                        // floating point calculations
                    }
                    *dstPtr++ = array[*srcPtr++];
                }
            }
        }

    The inner loop gets executed nearly 200,000 times and the entire function takes 100 ms to complete (profiled using AQTimer). I found an unused variable, double d = 0.0;, outside the outer loop and removed it. After this change, the method suddenly takes 500 ms for the same number of executions (5 times slower). This behavior is reproducible on different machines with different processor types (Core2, dual-core processors).

    I am using the VC6 compiler with optimization level O2. The other compiler options used are: -MD -O2 -Z7 -GR -GX -G5 -X -GF -EHa. I suspected compiler optimizations and removed the /O2 optimization flag; after that the function became normal again and takes 100 ms as with the old code.

    Could anyone throw some light on this strange behavior? Why should compiler optimization slow down performance when I remove an unused variable? Note: The assembly code (before and after the change) looked the same.

  • Passing string with (accidental) escape character loses character even though it's a raw string

    - by Steen
    I have a function with a Python doctest that fails because one of the test input strings has a backslash that's treated like an escape character, even though I've encoded the string as a raw string. My doctest looks like this:

        >>> infile = [ "Todo: fix me", "/** todo: fix", "* me", "*/", r"""//\todo stuff to fix""", "TODO fix me too", "toDo bug 4663" ]
        >>> find_todos( infile )
        ['fix me', 'fix', 'stuff to fix', 'fix me too', 'bug 4663']

    And the function, which is intended to extract the todo texts from a single line following some variation over a todo specification, looks like this:

        todos = list()
        for line in infile:
            print line
            if todo_match_obj.search( line ):
                todos.append( todo_match_obj.search( line ).group( 'todo' ) )

    And the regular expression called todo_match_obj is:

        r"""(?:/{0,2}\**\s?todo):?\s*(?P<todo>.+)"""

    A quick conversation with my ipython shell gives me:

        In [35]: print "//\todo"
        //      odo

        In [36]: print r"""//\todo"""
        //\todo

    And, just in case the doctest implementation uses stdout (I haven't checked, sorry):

        In [37]: sys.stdout.write( r"""//\todo""" )
        //\todo

    My regex-foo is not high by any standards, and I realize that I could be missing something here.

    EDIT: Following Alex Martelli's answer, I would like suggestions on what regular expression would actually match the blasted r"""//\todo fix me""". I know that I did not originally ask for someone to do my homework, and I will accept Alex's answer as it really did answer my question (or confirm my fears). But I promise to upvote any good solutions to my problem here :) I'm using Python 2.6.4 (r264:75706, Dec 7 2009, 18:45:15). Thank you for reading this far (if you skipped directly down here, I understand).

  • Eclipse PDE - Plug-in, Feature, and Product Versioning

    - by Michael
    I am having much confusion over the process of upgrading version numbers in dependent plug-ins, features, and products in a fairly large Eclipse workspace. I have made API changes to Java code residing in an existing plug-in, which thus requires an increase of the major part of the version identifier. This plug-in serves as a dependency to a given feature, and the feature is in turn included in a product.

    From the documentation at http://wiki.eclipse.org/Version_Numbering, I understand (for the most part) when the proper number should be increased on the containing plug-in itself. However, how would this major version number change on the plug-in affect dependent, "down-the-line" items (e.g., features, products)? For example, assume we have the typical "Hello World" setup as follows:

        Plug-in: com.example.helloworld, version 1.0.0
        Feature: com.example.helloworld.feature, version 1.0.0
        Product: com.example.helloworld.product, version 1.0.0

    If I were to make an API change in the plug-in, this would require a version update to 2.0.0. What would then be the version of the feature - 1.1.0? The same question applies at the product level as well (e.g., if the feature is 1.1.0 OR 2.0.0, what is the product version number)?

    I'm sure this is quite the newbie question, so I apologize for wasting anyone's time and effort. I have searched for this type of content, but all I am finding are examples showing how to develop a plug-in, feature, product, and update site for the first time. The only other content related to my search has been developing feature patches, which does not touch on the versioning aspect as much as I would prefer. I am coming into an Eclipse RCP / PDE environment for the first time and need to learn the proper way and / or best practices for making such versioning updates and how to best reflect them throughout other dependent projects in the workspace.

  • Map tiling - What kind of projection?

    - by ikky
    Hi. I've taken a large image and divided it into square tiles (256x256). It is made for Google Maps also, so the whole image is divided into z_x_y.png files (depending on zoom level):

        z=0 : 1x1 tile
        z=1 : 2x2 tiles
        z=2 : 4x4 tiles

    My imageMap is "flat" and is not based on a sphere like the worldmap. I'm gonna use this map in a Windows Mobile app (which has no Google API), and all the "points of interest" are inserted into a database by longitude and latitude. And since I have to make this for Windows Mobile, I just have an XY coordinate system. Is it enough to just use this:

        MAP_WIDTH = 256*TILES_W;
        MAP_HEIGHT = 256*TILES_H;

        function convert(int lat, int lon)
        {
            int y = (int)((-1 * lat) + 90) * (MAP_HEIGHT / 180);
            int x = (int)(lon + 180) * (MAP_WIDTH / 360);
            ImagePoint p = new ImagePoint(x,y); // An object which holds the coordinates
            return p;
        }

    Or do I need a projection technique? Thanks in advance. Please ask if something is unclear.
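    If the latitude/longitude values are spread linearly across the flat image (no spherical distortion to undo), the equirectangular mapping above should be enough. A hedged C# sketch that folds the zoom level into the same formula (ImagePoint as in the question; the rest is illustrative):

        // Plate carrée mapping for a flat image pyramid: zoom z has
        // 2^z x 2^z tiles of 256 px, so the world is (256 << z) px wide.
        static ImagePoint Convert(double lat, double lon, int zoom)
        {
            int mapSize = 256 << zoom;                 // pixels per side at this zoom
            int x = (int)((lon + 180.0) * mapSize / 360.0);
            int y = (int)((90.0 - lat) * mapSize / 180.0);
            return new ImagePoint(x, y);
        }

    A real projection (e.g. Mercator) only becomes necessary if the source image itself was rendered from a projected world map.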

  • What are the primitive Forth operators?

    - by Barry Brown
    I'm interested in implementing a Forth system, just so I can get some experience building a simple VM and runtime. When starting in Forth, one typically learns about the stack and its operators (DROP, DUP, SWAP, etc.) first, so it's natural to think of these as being among the primitive operators. But they're not. Each of them can be broken down into operators that directly manipulate memory and the stack pointers. Later one learns about store (!) and fetch (@), which can be used to implement DUP, SWAP, and so forth (ha!).

    So what are the primitive operators? Which ones must be implemented directly in the runtime environment, from which all others can be built? I'm not interested in high performance; I want something that I (and others) can learn from. Operator optimization can come later. (Yes, I'm aware that I can start with a Turing machine and go from there. That's a bit extreme.)

    Edit: What I'm aiming for is akin to bootstrapping an operating system or a new compiler. What do I need to implement, at minimum, so that I can construct the rest of the system out of those primitive building blocks? I won't implement this on bare hardware; as an educational exercise, I'd write my own minimal VM.
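    To make the question concrete, here is a hedged C# sketch of a tiny inner interpreter. The opcode numbering and the particular primitive set (LIT, @, !, +, NAND, SP@, EXIT) are illustrative assumptions, one possible minimal choice rather than a reference Forth:

        // Toy inner interpreter over a unified memory array. With the data
        // stack living in mem, words like DUP, DROP and SWAP can be defined
        // as composites of SP@, @, ! and arithmetic instead of primitives.
        static void Run(int[] code, int[] mem)
        {
            int sp = 1000;   // data stack grows upward from mem[1000]
            int ip = 0;      // instruction pointer into code[]
            while (true)
            {
                switch (code[ip++])
                {
                    case 0: mem[sp++] = code[ip++]; break;                    // LIT n ( -- n )
                    case 1: mem[sp - 1] = mem[mem[sp - 1]]; break;            // @     ( addr -- x )
                    case 2: { int a = mem[--sp]; mem[a] = mem[--sp]; break; } // !     ( x addr -- )
                    case 3: { int b = mem[--sp]; mem[sp - 1] += b; break; }   // +     ( a b -- a+b )
                    case 4: { int b = mem[--sp]; mem[sp - 1] = ~(mem[sp - 1] & b); break; } // NAND
                    case 5: mem[sp] = sp; sp++; break;                        // SP@   ( -- sp )
                    case 6: return;                                           // EXIT
                }
            }
        }

    With SP@ exposed, DUP becomes definable as a composite word (fetch the cell just below the current stack pointer) rather than a primitive.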

  • Submit information to url, but also open PDF

    - by Mad Ducky Digital Branding
    I have a client whose desire is to have her WordPress blog show a MailChimp form on her home page as a gateway to a .pdf. I need the following behavior to occur when the user clicks "Submit":

        1. Execute the included MailChimp javascript file; this ensures the form was properly filled in, and then performs the sign-up to the newsletter list (don't need help with this part).
        2. Then show the user an informational PDF for download or viewing.

    EDIT: The logical order was flipped from when I originally posted this. The script should execute, and only if the script executes properly should the PDF be shown to the user.

    Note: My experience level with HTML and PHP is 3/4, and with JS I am 2/4. EDIT: (seems more like 1/4 at this point lol). If my research is correct, PHP (a server-side language) would be used to do what the client wants. Additional validation beyond what MailChimp's script provides (it ensures that the user has submitted a completed form) is not necessary in this case (the client says it's ok if the e-mail isn't valid at all). EDIT: Reworded this sentence from the original post to be more clear. The .pdf URL and content is static, and simply needs to be shown, not generated.

    ----RESEARCH----

    I know that the MailChimp form uses the following line to actually submit the information, but I want to do the actions mentioned above, as well as open the aforementioned .pdf:

        <form action="http://*BLAH*.us2.list-manage.com/subscribe/post?u=*BLAHBLAH*&amp;id=*BLAHBLAHBLAH*"
              method="post" id="mc-embedded-subscribe-form" name="mc-embedded-subscribe-form"
              class="validate" target="_blank">

    I am reading on other sites that I can conceivably point "action" to a .php file, but if there is a way to do this with javascript - since it's using the .js file that I created for that already anyway - then I would be most happy. Barring that, I'll take what I can get..

    ----SOLUTION?---- ...

  • ai: Determining what tests to run to get most useful data

    - by Sai Emrys
    This is for http://cssfingerprint.com

    I have a system (see about page on site for details) where:

        - I need to output a ranked list, with confidences, of categories that match a particular feature vector
        - the binary feature vectors are a list of site IDs & whether this session detected a hit
        - feature vectors are, for a given categorization, somewhat noisy (sites will decay out of history, and people will visit sites they don't normally visit)
        - categories are a large, non-closed set (user IDs)
        - my total feature space is approximately 50 million items (URLs)
        - for any given test, I can only query approx. 0.2% of that space
        - I can only make the decision of what to query, based on results so far, ~10-30 times, and must do so in <~100ms (though I can take much longer to do post-processing, relevant aggregation, etc.)
        - getting the AI's probability ranking of categories based on results so far is mildly expensive; ideally the decision will depend mostly on a few cheap SQL queries
        - I have training data that can say authoritatively that any two feature vectors are the same category, but not that they are different (people sometimes forget their codes and use new ones, thereby making a new user id)

    I need an algorithm to determine what features (sites) are most likely to have a high ROI to query (i.e. to better discriminate between plausible-so-far categories [users], and to increase certainty that it's any given one). This needs to balance exploitation (test based on prior test data) and exploration (test stuff that's not been tested enough to find out how it performs).

    There's another question that deals with a priori ranking; this one is specifically about a posteriori ranking based on results gathered so far. Right now, I have little enough data that I can just always test everything that anyone else has ever gotten a hit for, but eventually that won't be the case, at which point this problem will need to be solved.

    I imagine that this is a fairly standard problem in AI - having a cheap heuristic for what expensive queries to make - but it wasn't covered in my AI class, so I don't actually know whether there's a standard answer. So, relevant reading that's not too math-heavy would be helpful, as well as suggestions for particular algorithms. What's a good way to approach this problem?
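    There is a standard framing for exactly this exploit-vs-explore trade-off: the multi-armed bandit literature. A hedged C# sketch of a UCB1-style score for ranking candidate sites to query (the per-site counters are assumptions for illustration, not anything the system already stores):

        // UCB1: mean payoff so far (exploitation) plus a bonus that
        // shrinks as a site accumulates trials (exploration).
        static double Score(int hits, int trials, int totalTrials)
        {
            if (trials == 0)
                return double.MaxValue;   // query untested sites at least once
            double mean = (double)hits / trials;
            return mean + Math.Sqrt(2.0 * Math.Log(totalTrials) / trials);
        }

    Each round, rank sites by Score and query the affordable top slice; the counters are cheap to keep and update in SQL, which matches the constraint above that the decision should rest on a few cheap queries.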

  • How Random is System.Guid.NewGuid()? (Take two)

    - by Vilx-
    Before you start marking this as a duplicate, hear me out. The other question has a (most likely) incorrect accepted answer. I do not know how .NET generates its GUIDs; probably only Microsoft does. But there's a high chance it simply calls CoCreateGuid(), and that function is documented to call UuidCreate(). The algorithms for creating a UUID are pretty well documented.

    Long story short, be that as it may, it seems that System.Guid.NewGuid() indeed uses the version 4 UUID generation algorithm, because all the GUIDs it generates match the criteria (see for yourself - I tried a couple million GUIDs, and they all matched). In other words, these GUIDs are almost random, except for a few known bits.

    This then again raises the question - how random IS this random? As every good little programmer knows, a pseudo-random number algorithm is only as random as its seed (aka entropy). So what is the seed for UuidCreate()? How often is the PRNG re-seeded? Is it cryptographically strong, or can I expect the same GUIDs to start pouring out if two computers accidentally call System.Guid.NewGuid() at the same time? And can the state of the PRNG be guessed if sufficiently many sequentially generated GUIDs are gathered?
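    For completeness, a hedged sketch of the "see for yourself" check described above, using the canonical string form to avoid the byte-order pitfalls of Guid.ToByteArray():

        // Empirical check of the fixed bits: in the canonical "D" format
        // the version nibble is character 14 and the variant nibble is
        // character 19; everything else should look uniformly random.
        bool allV4 = true;
        for (int i = 0; i < 1000000 && allV4; i++)
        {
            string s = Guid.NewGuid().ToString("D");
            allV4 = s[14] == '4' && "89ab".IndexOf(s[19]) >= 0;
        }
        Console.WriteLine(allV4 ? "all version 4" : "found a non-v4 GUID");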

  • Sending Email through Gmail

    - by persistence
    I am writing a program that sends an email through Gmail, but I get a serious operation timeout error. What is the likely cause?

        class Mailer
        {
            MailMessage ms;
            SmtpClient Sc;

            public Mailer()
            {
                Sc = new SmtpClient("smtp.gmail.com");
                //Sc.Credentials = CredentialCache.DefaultNetworkCredentials;
                Sc.EnableSsl = true;
                Sc.Port = 465;
                Sc.Timeout = 900000000;
                Sc.DeliveryMethod = SmtpDeliveryMethod.Network;
                Sc.UseDefaultCredentials = false;
                Sc.Credentials = new NetworkCredential("uid", "mypss");
            }

            public void MailTodaysBirthdays(List<Celebrant> TodaysCelebrant)
            {
                int i = TodaysCelebrant.Count();
                foreach (Celebrant cs in TodaysCelebrant)
                {
                    //if (IsEmail(cs.EmailAddress.ToString().Trim()))
                    //{
                    ms = new MailMessage();
                    ms.To.Add(cs.EmailAddress);
                    ms.From = new MailAddress("uid", "Developers", System.Text.Encoding.UTF8);
                    ms.Subject = "Happy Birthday ";
                    String EmailBody = "Happy Birthday " + cs.FirstName;
                    ms.Body = EmailBody;
                    ms.Priority = MailPriority.High;
                    try
                    {
                        Sc.Send(ms);
                    }
                    catch (Exception ex)
                    {
                        Sc.Send(ms);
                        BirthdayServices.LogEvent(ex.Message.ToString(), EventLogEntryType.Error);
                    }
                    //}
                }
            }
        }
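    A hedged guess at the likely cause, based only on the constructor above: System.Net.Mail.SmtpClient supports explicit SSL (STARTTLS) but not the implicit SSL that Gmail speaks on port 465, so the connection stalls until the (very long) Timeout expires. Gmail's STARTTLS port is 587; a sketch of that configuration:

        // STARTTLS on 587 instead of implicit SSL on 465, which
        // SmtpClient cannot negotiate.
        Sc = new SmtpClient("smtp.gmail.com", 587);
        Sc.EnableSsl = true;
        Sc.DeliveryMethod = SmtpDeliveryMethod.Network;
        Sc.UseDefaultCredentials = false;
        Sc.Credentials = new NetworkCredential("uid", "mypss");

    Note also that the catch block above retries Sc.Send(ms) unguarded, so a failing send will throw out of the handler before the event is ever logged.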

  • External API function calls AS3 control timeline

    - by giles
    I have a function problem using this code (below): the embedded Flash movieclip disappears or completely prevents the scrollto.js query from functioning in DW CS3. Communication between Flash and JavaScript works without problems; it is the callback I can't get to work, and most frustratingly it should be simple, as no values are required. So far this has been hours of scouring the net without a workable end in sight...ahrr. What is a function for this to work?

    JavaScript - to call the Flash event from an HTML button link, placed between the head tags:

        function callExternalInterface() {
            var flashMovie = window.document.menu;
            flashMovie.menu_up(value);
        }

    menu_up is the string. Does anyone know of a workable function for the callback?

    HTML - pane navigation div that uses the scrollto.js query; it's this link that needs to call back to the embedded "menubtns.swf" (nested in "AS3Menu_javascript.swf") to play 5 frames of that movieclip, via a JS function:

        <div id="btn_up"><a href="#top" name="charDev" id="charDev" onclick="">top</a></div>

    Embedded .swf code, using swfobject.js with allowScriptAccess=always:

        <object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" name="menu"
                width="251" height="251" id="menu">
          <param name="movie" value="../~Assets/Flash/AS3Menu_javascript.swf" />
          <param name="allowScriptAccess" value="always" />
          <param name="movie" value="ExternalInterfaceScript.swf" />
          <param name="quality" value="high" />
          <object type="application/x-shockwave-flash" data="../~Assets/Flash/AS3Menu_javascript.swf"
                  width="250" height="250">
            <p>Alternative content</p>
          </object>
        </object>

    AS3 / Flash:

        import flash.external.ExternalInterface;
        flash.system.Security.allowDomain(/sourceDomain/);
        ExternalInterface.addCallBack("menu_up", this, resetmenu);
        function resetmenu() {
            gotoAndPlay:("frame label" / "number")
        }

  • Is there a way in .NET to access the bytecode/IL/CLR that is currently running?

    - by Alix
    Hi. I'd like to have access to the bytecode that is currently running, or about to run, in order to detect certain instructions and take specific actions (depending on the instructions). In short, I'd like to monitor the bytecode in order to add safety controls. Is this possible?

    I know there are some AOP frameworks that notify you of specific events, like an access to a field or the invocation of a method, but I'd like to skip that extra layer and just look at all the bytecode myself, throughout the entire execution of the application. I've already looked at the following questions (...among many many others ;) ):

        - Preprocessing C# - Detecting Methods
        - What CLR/.NET bytecode tools exist?

    as well as several AOP frameworks (although not in great detail, since they don't seem to do quite what I need), and I'm familiar with Mono.Cecil.

    I appreciate alternative suggestions, but I don't want to introduce the overhead of an AOP framework when what I actually need is access to the bytecode, without all the stuff they add on top to make it more user-friendly (...admittedly very useful stuff when you don't want to go low-level). Thanks :)
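    Since Mono.Cecil is already on the table, a hedged sketch of inspecting IL with it. Note this reads an assembly from disk rather than watching code as it executes (the ReadAssembly-style API shown is an assumption and may vary by Cecil version); true runtime monitoring would instead mean the CLR profiling API (ICorProfilerCallback) or rewriting the IL before it runs:

        using System;
        using Mono.Cecil;
        using Mono.Cecil.Cil;

        class IlDump
        {
            static void Main()
            {
                // Enumerate every method body and flag field writes (stfld).
                AssemblyDefinition asm = AssemblyDefinition.ReadAssembly("App.exe");
                foreach (TypeDefinition type in asm.MainModule.Types)
                    foreach (MethodDefinition method in type.Methods)
                        if (method.HasBody)
                            foreach (Instruction ins in method.Body.Instructions)
                                if (ins.OpCode == OpCodes.Stfld)
                                    Console.WriteLine(method.FullName + ": " + ins);
            }
        }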

  • Source Control Checkin Comments at Top Of Source Files

    - by James Wiseman
    I've noticed a discrepancy with some source files in our system whereby some contain source-control checkin comments, and some do not. These comments are added automatically to the top of the file when it is checked in:

        * $Log: //vm1/Projects/Morpheus/Sleep.bdy-arc $
        --
        -- Rev 1.14   Apr 14 2009 15:32:52   John Smith
        -- Fixed bugs 2292 and 2230.

    This seems to have been quite prevalent in all the companies with which I have worked, but I must confess that I struggle to see the point. Generally the comments aren't that good, are often left by people who have long since departed, and even when they are of a high standard it is difficult to tie them to physical code changes.

    It also strikes me that you are physically changing the file that you are checking in. Now, this may not be such a problem with files that will be compiled, but could be a disaster with others, e.g. JavaScript files.

    So really, my query is: what was the motivating concept behind providing this functionality in the first instance? Does anyone actually find these comments useful?

    Also, I would be curious to know if this is a feature that is commonly supported within source control systems. I am aware of it with PVCS, VSS and Subversion (Subversion Keyword Substitution); however, I wonder if it is also available in some of the more popular DVCSs. Your help, as always, is much appreciated.

  • What programming languages do the top-tier universities teach?

    - by Simucal
    I'm constantly being inundated with articles and people talking about how most of today's universities are nothing more than Java vocational schools churning out mediocre programmer after mediocre programmer. Our very own Joel Spolsky has his famous article, "The Perils of Java Schools." Similarly, Alan Kay, a famous computer scientist (and SO member), has said this in the past:

        "I fear — as far as I can tell — that most undergraduate degrees in computer science these days are basically Java vocational training." - Alan Kay (link)

    If the languages being taught by the schools are considered such a contributing factor to the quality of the school's program, then I'm curious: what languages do the "top-tier" computer science schools (MIT, Carnegie Mellon, Stanford, etc.) teach? If the average school is performing so poorly due in large part to the languages (or lack thereof) that they teach, then what languages do the supposed "good" CS programs teach that differentiate them?

    If you can, provide the name of the school you attended, followed by a list of the languages they use throughout their coursework.

    Edit: Shog-9 asks why I don't get this information directly from the schools' websites themselves. I would, but many schools' websites don't discuss the languages they use in their class descriptions. Quite a few will say, "using high-level languages we will...", without elaborating on which languages they use. So, we should be able to get a pretty accurate list of languages taught at various well-known institutions from the various SO members who have attended them.

  • .NET Membership with Repository Pattern

    - by Zac
    My team is in the process of designing a domain model which will hide various different data sources behind a unified repository abstraction. One of the main drivers for this approach is the very high probability that these data sources will undergo significant change in the near future, and we don't want to be re-writing business logic when that happens. One data source will be our membership database, which was originally implemented using the default ASP.NET Membership Provider.

    The membership provider is tied to the System.Web.Security namespace, but we have a design guideline requiring that our domain model layer is not dependent upon System.Web (or any other implementation/environment dependency), as it will be consumed in different environments - nor do we want our websites directly communicating with databases.

    I am considering what would be a good approach to reconciling the MembershipProvider approach with our abstracted n-tier architecture. My initial feeling is that we could create a "DomainMembershipProvider" which interacts with the domain model, and then implement objects in the model which deal with the repository and handle validation/business logic. The repository would then implement data access using our (as-yet undecided) ORM/data access tool.

    Are there any glaring holes in this approach? I haven't worked closely with the MembershipProvider class, so I may well be missing something. Alternatively, is there an approach that you think will better serve the requirements I described above?

    Thanks in advance for your thoughts and advice. Regards, Zac
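    A minimal sketch of the abstraction being described, with hypothetical names (DomainUser, IMembershipRepository); the concrete adapter over the MembershipProvider would live outside the domain layer, so System.Web.Security never leaks in:

        // Domain layer: no reference to System.Web anywhere.
        public class DomainUser
        {
            public Guid Id { get; set; }
            public string UserName { get; set; }
            public string Email { get; set; }
        }

        public interface IMembershipRepository
        {
            DomainUser GetByUserName(string userName);
            bool ValidateCredentials(string userName, string password);
            void Save(DomainUser user);
        }

    The web layer (or any other host environment) then supplies an implementation that wraps the existing provider, and the domain model stays portable when the data source changes.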

  • Per-pixel per-component alpha blending in Windows

    - by Crend King
    I have a 24-bit bitmap with R, G, B color channels and a 24-bit bitmap with R, G, B alpha channels. I want to alpha blend the first bitmap to an HDC in GDI, or a RenderTarget in Direct2D, with the alpha channels applied respectively. For example, suppose for one pixel the bitmap color is (192, 192, 192), the HDC color is (0, 255, 255) and the alpha channels are (30, 40, 50). The final HDC color should be (22, 245, 242).

    I know I can BitBlt the HDC to a memory HDC first, do the alpha blending by manually calculating the color of each pixel, and finally BitBlt back. I just want to avoid the additional blitting and let the APIs do their job (faster, since they are in kernel space).

    The first idea that comes to my mind is to split the source bitmap into 3 red-only, green-only and blue-only 8-bit bitmaps, do normal alpha blending, then composite the 3 output bitmaps into the HDC. But I don't find a way to do the splitting and composition natively in Windows (would a Direct2D layer help?). Also, the splitting and compositing may require many additional copies; the performance overhead would be too high. Or maybe do the alpha blending in 3 passes, each pass applying the blending for one channel while leaving the other 2 unchanged.

    Thanks for any comment.

    EDIT: I found this question, and the answer should be a good reference for this problem. However, besides AC_SRC_OVER, there is no other blending operation supported. Why doesn't Microsoft improve their API?
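    For reference, the arithmetic in the worked example above is the straight per-component blend; a minimal C# sketch (the function name is illustrative):

        // out = (src * a + dst * (255 - a)) / 255, applied per channel.
        // Reproduces (192,192,192) over (0,255,255) with alphas (30,40,50)
        // -> (22,245,242) as in the example above.
        static byte BlendComponent(byte src, byte dst, byte alpha)
        {
            return (byte)((src * alpha + dst * (255 - alpha)) / 255);
        }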

  • Compiling Visual C++ programs from the command line and msvcr90.dll

    - by Stanley kelly
    Hi, when I compile my Visual C++ 2008 Express program from inside the IDE and redistribute it on another computer, it starts up fine without any dll dependencies that I haven't accounted for. When I compile the same program from the Visual C++ 2008 command line under the Start menu and redistribute it to the other computer, it looks for msvcr90.dll at start-up.

    Here is how it is compiled from the command line:

        cl /Fomain.obj /c main.cpp /nologo -O2 -DNDEBUG /MD /ID:(list of include directories)
        link /nologo /SUBSYSTEM:WINDOWS /ENTRY:mainCRTStartup /OUT:Build\myprogram.exe /LIBPATH:D:\libs (list of libraries)

    and here is how the IDE builds it, based on the relevant parts of the build log:

        /O2 /Oi /GL /I clude" /I (list of includes) /D "WIN32" /D "NDEBUG" /D "_CONSOLE" /D "_UNICODE" /D "UNICODE" /FD /EHsc /MD /Gy /Yu"stdafx.h" /Fp"Release\myprogram" /Fo"Release\\" /Fd"Release\vc90.pdb" /W3 /c /Zi /TP /wd4250 /vd2
        Creating command line "cl.exe @d:\myprogram\Release\RSP00000118003188.rsp /nologo /errorReport:prompt"
        /OUT:"D:\myprgram\Release\myprgram.exe" /INCREMENTAL:NO /LIBPATH:"d:\gtkmm\lib" /MANIFEST /MANIFESTFILE:"Release\myprogam.exe.intermediate.manifest" /MANIFESTUAC:"level='asInvoker' uiAccess='false'" /DEBUG /PDB:"d:\myprogram\Release\myprogram.pdb" /SUBSYSTEM:WINDOWS /OPT:REF /OPT:ICF /LTCG /ENTRY:"mainCRTStartup" /DYNAMICBASE /NXCOMPAT /MACHINE:X86 (list of libraries)
        Creating command line "link.exe @d:\myprogram\Release\RSP00000218003188.rsp /NOLOGO /ERRORREPORT:PROMPT"
        /outputresource:"..\Release\myprogram.exe;#1" /manifest .\Release\myprogram.exe.intermediate.manifest
        Creating command line "mt.exe @d:\myprogram\Release\RSP00000318003188.rsp /nologo"

    I would like to be able to compile it from the command line without it looking for such a late version of the runtime dll, as the version compiled from the IDE seems not to do. Both versions pass /MD to the compiler, so I am not sure what to do.

  • Asymptotic complexity of a compiler

    - by Meinersbur
    What is the maximal acceptable asymptotic runtime of a general-purpose compiler? For clarification: the complexity of the compilation process itself, not of the compiled program, as a function of program size - for instance, the number of source code characters, statements, variables, procedures, basic blocks, intermediate language instructions, assembler instructions, or whatever.

    This is highly dependent on your point of view, so this is a community wiki. See it from the view of someone who writes a compiler: will the optimisation level -O4 ever be used for larger programs when one of its optimisations takes O(n^6)?

    Related questions:

        - When is superoptimisation (exponential complexity, or even incomputable) acceptable?
        - What is acceptable for JITs? Does it have to be linear?
        - What is the complexity of established compilers? GCC? VC? Intel? Java? C#? Turbo Pascal? LCC? LLVM? (References?)

    If you do not know what asymptotic complexity is: how long are you willing to wait until the compiler has compiled your project? (Scripting languages excluded.)

  • Build error with variables and url_for in Flask

    - by Rob
    Have found one or two people on the interwebs with similar problems, but haven't seen a solution posted anywhere. I'm getting a build error from the code/template below, but can't figure out where the issue is or why it's occurring. It appears that the template isn't recognizing the function, but I don't know why this would be occurring. Any help would be greatly appreciated - have been pounding my head against the keyboard for two nights now.

    Function:

        @app.route('/viewproj/<proj>', methods=['GET','POST'])
        def viewproj(proj):
            ...

    Template excerpt:

        {% for project in projects %}
        <li>
          <a href="{{ url_for('viewproj', proj=project.project_name) }}">
          {{project.project_name}}</a></li>
        {% else %}
        No projects
        {% endfor %}

    Error log: https://gist.github.com/1684250

    EDIT: Also wanted to include that it's not recognizing the variable "proj" when building the URL, so it's just appending the value as a parameter. Here's an example:

        //myproject/viewproj?projname=what+up

    Last few lines of the log:

        [Wed Jan 25 09:47:34 2012] [error] [client 199.58.143.128] File "/srv/www/myproject.com/myproject/templates/layout.html", line 103, in top-level template code, referer: xx://myproject.com/
        [Wed Jan 25 09:47:34 2012] [error] [client 199.58.143.128] {% block body %}{% endblock %}, referer: xx://myproject.com/
        [Wed Jan 25 09:47:34 2012] [error] [client 199.58.143.128] File "/srv/www/myproject.com/myproject/templates/main.html", line 34, in block "body", referer: xx://myproject.com/
        [Wed Jan 25 09:47:34 2012] [error] [client 199.58.143.128] , referer: xx://myproject.com/
        [Wed Jan 25 09:47:34 2012] [error] [client 199.58.143.128] File "/usr/lib/python2.7/dist-packages/flask/helpers.py", line 195, in url_for, referer: xx://myproject.com/
        [Wed Jan 25 09:47:34 2012] [error] [client 199.58.143.128] return ctx.url_adapter.build(endpoint, values, force_external=external), referer: xx://myproject.com/
        [Wed Jan 25 09:47:34 2012] [error] [client 199.58.143.128] File "/usr/lib/pymodules/python2.7/werkzeug/routing.py", line 1409, in build, referer: xx://myproject.com/
        [Wed Jan 25 09:47:34 2012] [error] [client 199.58.143.128] raise BuildError(endpoint, values, method), referer: xx://myproject.com/
        [Wed Jan 25 09:47:34 2012] [error] [client 199.58.143.128] BuildError: ('viewproj', {'proj': '12th'}, None), referer: xx://myproject.com/

  • Practical size limitations for RDBMS

    - by grenade
    I am working on a project that must store very large datasets and associated reference data. I have never come across a project that required tables quite this large. I have proved that at least one development environment cannot cope at the database tier with the processing required by the complex queries against views that the application layer generates (views with multiple inner and outer joins, grouping, summing and averaging against tables with 90 million rows).

    The RDBMS that I have tested against is DB2 on AIX. The dev environment that failed was loaded with 1/20th of the volume that will be processed in production. I am assured that the production hardware is superior to the dev and staging hardware, but I just don't believe that it will cope with the sheer volume of data and complexity of queries. Before the dev environment failed, it was taking in excess of 5 minutes to return a small dataset (several hundred rows) that was produced by a complex query (many joins, lots of grouping, summing and averaging) against the large tables.

    My gut feeling is that the db architecture must change so that the aggregations currently provided by the views are performed as part of an off-peak batch process.

    Now for my question. I am assured by people who claim to have experience of this sort of thing (which I do not) that my fears are unfounded. Are they? Can a modern RDBMS (SQL Server 2008, Oracle, DB2) cope with the volume and complexity I have described (given an appropriate amount of hardware), or are we in the realm of technologies like Google's BigTable? I'm hoping for answers from folks who have actually had to work with this sort of volume at a non-theoretical level.

  • How do I develop browser plugins with cross-platform and cross-browser compatibility in mind?

    - by Schnapple
    My company currently has a product which relies on a custom, in-house ActiveX control. The technology it employs (TWAIN) is itself cross-platform by design, but our solution is obviously limited to Internet Explorer on Windows. Long term we would like to become cross-browser and cross-platform (i.e., support other browsers on Windows, support the Macintosh or Linux). Obviously if we wanted to support Firefox on Windows I would need to write a plugin for it. But if we wanted to support the Macintosh, how do I attack that? Is it possible to compile a version of the Firefox plugin that runs on the Mac? Would I be remiss to not also support Safari on the Mac? Are there any plugins which are cross-browser on a platform? (i.e., can any browsers run plugins for other browsers) Since TWAIN is so low-level to the operating system, I do not think Java would be a solution in any capacity, but I could be wrong. What do people generally do when they want to support multiple platforms with a process that will need to be cross-platform and cross-browser compatible?

  • MVC View Model Intellisense / Compile error

    - by Marty Trenouth
    I have one library with my ORM and am working with an MVC application. I have a problem where the pages won't compile because the views can't see the model's properties (which are inherited from lower-level base classes). The system throws a compile error saying that 'object' does not contain a definition for 'ID' and no extension method 'ID' accepting a first argument of type 'object' could be found (are you missing a using directive or an assembly reference?), implying that the view is not seeing the model. In the controller I have full access to the model, and I have checked the Inherits portion of the view to validate that the correct type is being passed.

    Controller:

        return View(new TeraViral_Blog());

    View:

        <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master"
            Inherits="System.Web.Mvc.ViewPage<com.models.TeraViral_Blog>" %>

        <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
            Index2
        </asp:Content>

        <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
            <h2>Index2</h2>
            <fieldset>
                <legend>Fields</legend>
                <p>
                    ID: <%= Html.Encode(Model.ID) %>
                </p>
            </fieldset>
        </asp:Content>

  • Apache rewrite rules redux

    - by AlexanderJohannesen
    I've got a REST framework that when plopped into any directory should Just Work(TM), and it seems to work fine when I've got projects in subdirectories, but not if it's in root. So, given a few example directories:

        /
        /project1
        /bingo/project2
        /hoopla/doopla/minor/project3

    All of these work fine, except I'm getting "funnies"* when the project runs in the root directory (bit hard to explain, I suppose, but the second-level rewrites are not working properly). Here's my attempt at a generic .htaccess file:

        RewriteEngine On
        RewriteRule ^static/ - [L]
        RewriteCond %{REQUEST_URI} ^$
        RewriteRule .* ./ [R,L]
        RewriteRule ^index.php - [L]
        RewriteRule (.*) index.php?q=$1 [QSA,PT]

    (And yes, all projects have a subdirectory ./static which is ignored by rewrites.)

    What I'm trying to achieve is a set of rewrite rules that work for most cases (which is, again, plonking the project in a directory the webserver serves). I'm not a rewrite-rules wiz by a long shot, and any good advice and gotchas would be appreciated (and yes, I've gone through too many introductory articles. I need some serious juice.)

    * More info on the funnies: my webserver has docroot in one spot (under /usr/share/apache2/default-site/), but a set of rules says that /projects is pulled in from somewhere else that's not a subdirectory of docroot (/home/user/Projects/). When I go there, I get a list of /projects subdirectories, and if one of those subdirectories (restmapp) gets called with the proposed rewrite rules, I get:

        The requested URL /home/user/Projects/restmapp/index.php was not found on this server.

  • .NET: How do I know if I have an unmanaged resource?

    - by Shiftbit
    I've read that unmanaged resources are considered to be file handles, streams, anything low-level. Does MSDN or any other source explain how to recognize an unmanaged resource? I can't seem to find any examples on the net that show unmanaged code; all the examples just have comments that say "unmanaged code here". Can someone perhaps provide a real-world example where I would handle an unmanaged resource in an IDisposable implementation? I have provided the IDisposable template below for your convenience. How do I identify an unmanaged resource? Thanks in advance!

    IDisposable Reference

        Public Class Sample : Implements IDisposable
            Private disposedValue As Boolean = False 'To detect redundant calls

            ' IDisposable
            Protected Overridable Sub Dispose(ByVal disposing As Boolean)
                If Not Me.disposedValue Then
                    If disposing Then
                        ' TODO: free other state (managed objects).
                    End If
                    ' TODO: free your own state (unmanaged objects).
                    ' TODO: set large fields to null.
                End If
                Me.disposedValue = True
            End Sub

            ' This code added by Visual Basic to correctly implement the disposable pattern.
            Public Sub Dispose() Implements IDisposable.Dispose
                ' Do not change this code. Put cleanup code in Dispose(ByVal disposing As Boolean) above.
                Dispose(True)
                GC.SuppressFinalize(Me)
            End Sub
        End Class
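    For a concrete real-world case, a hedged sketch (in C# rather than VB only for brevity, error handling elided): a raw Win32 file handle obtained via P/Invoke is exactly the kind of thing the "free your own state (unmanaged objects)" TODO refers to, because the garbage collector knows nothing about it, so Dispose must release it explicitly:

        using System;
        using System.Runtime.InteropServices;

        public sealed class RawFile : IDisposable
        {
            [DllImport("kernel32.dll", CharSet = CharSet.Auto, SetLastError = true)]
            static extern IntPtr CreateFile(string name, uint access, uint share,
                IntPtr security, uint creation, uint flags, IntPtr template);

            [DllImport("kernel32.dll", SetLastError = true)]
            static extern bool CloseHandle(IntPtr handle);

            private IntPtr handle;   // the unmanaged resource: a raw OS handle

            public RawFile(string path)
            {
                // GENERIC_READ = 0x80000000, OPEN_EXISTING = 3
                handle = CreateFile(path, 0x80000000, 0, IntPtr.Zero, 3, 0, IntPtr.Zero);
            }

            public void Dispose()
            {
                if (handle != IntPtr.Zero)
                {
                    CloseHandle(handle);   // the "free unmanaged objects" step
                    handle = IntPtr.Zero;
                }
            }
        }

    In contrast, a FileStream or SqlConnection field is a managed object wrapping such a handle; those belong in the "if disposing" branch of the template above.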

  • Is delete p, where p is a pointer to an array, a memory leak?

    - by Eli
    Following a discussion in a software meeting, I set out to find out whether deleting a dynamically allocated primitive array with plain delete will cause a memory leak. I have written this tiny program and compiled it with Visual Studio 2008, running on Windows XP:

        #include "stdafx.h"
        #include "Windows.h"

        const unsigned long BLOCK_SIZE = 1024*100000;

        int _tmain()
        {
            for (unsigned int i = 0; i < 1024*1000; i++)
            {
                int* p = new int[1024*100000];
                for (int j = 0; j < BLOCK_SIZE; j++)
                    p[j] = j % 2;
                Sleep(1000);
                delete p;
            }
        }

    I then monitored the memory consumption of my application using Task Manager. Surprisingly, the memory was allocated and freed correctly; the allocated memory did not steadily increase as expected. I modified my test program to allocate a non-primitive type array:

        #include "stdafx.h"
        #include "Windows.h"

        struct aStruct
        {
            aStruct() : i(1), j(0) {}
            int i;
            char j;
        } NonePrimitive;

        const unsigned long BLOCK_SIZE = 1024*100000;

        int _tmain()
        {
            for (unsigned int i = 0; i < 1024*100000; i++)
            {
                aStruct* p = new aStruct[1024*100000];
                Sleep(1000);
                delete p;
            }
        }

    After running for 10 minutes there was no meaningful increase in memory. I compiled the project with warning level 4 and got no warnings. Is it possible that the Visual Studio runtime keeps track of the allocated object types, so there is no difference between delete and delete[] in that environment?
