Search Results

Search found 7821 results on 313 pages for 'high dpi'.

Page 252/313 | < Previous Page | 248 249 250 251 252 253 254 255 256 257 258 259  | Next Page >

  • How to Manage CSS Explosion

    - by Jason
    I have been relying heavily on CSS for a website I am working on (currently, everything is done as property values within each tag on the website, and I'm trying to get away from that to make updates significantly easier). The problem I am running into is that I'm starting to get a bit of "CSS explosion" going on. It is becoming difficult for me to decide how best to organize and abstract data within the CSS file.

    For example, I am using a large number of div tags within the website (previously it was completely table-based), so I'm starting to get a lot of CSS that looks like this:

        div.title { background-color: Blue; color: White; text-align: center; }
        div.footer { /* Stuff Here */ }
        div.body { /* Stuff Here */ }

    It's not too bad yet, but since I am learning here, I was wondering if recommendations could be made on how best to organize the various parts of a CSS file. What I don't want to end up with is a separate CSS attribute for every single thing on my website (which I have seen happen), and I always want the CSS file to be fairly intuitive.

    (P.S. I do realize this is a very generic, high-level question. My ultimate goal is to make it easy to use the CSS files and demonstrate their power to increase the speed of web development, so other individuals who may work on this site in the future will also get into the practice of using them rather than hard-coding values everywhere.)
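    For what it's worth, one widely used way to keep rules from multiplying is to style reusable classes instead of writing one rule per element, grouping declarations that several selectors share. A small sketch, with made-up class names:

        /* Group selectors that share declarations */
        .title, .footer { text-align: center; }

        /* One rule per role, reused across pages */
        .title { background-color: Blue; color: White; }

        /* Small utility classes for one-off tweaks */
        .centered { text-align: center; }
        .highlight { background-color: Yellow; }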

    Read the article

  • Help needed in grokking password hashes and salts

    - by javafueled
    I've read a number of SO questions on this topic, but grokking the applied practice of storing a salted hash of a password eludes me. Let's start with some ground rules: a password, "foobar12" (we are not discussing the strength of the password); a language, Java 1.6 for this discussion; and a database: PostgreSQL, MySQL, SQL Server, or Oracle.

    Several options are available for storing the password, but I want to think about one (1): store the password hashed with a random salt in the DB, in one column.

    Found on SO and elsewhere is the automatic fail of plaintext, MD5/SHA1, and dual columns. The latter have pros and cons. MD5/SHA1 is simple: MessageDigest in Java provides MD5 and SHA1 (through SHA-512 in modern implementations, certainly 1.6). Additionally, most of the RDBMSs listed provide methods for MD5 encryption functions on inserts, updates, etc. The problems become evident once one groks "rainbow tables" and MD5 collisions (and I've grokked these concepts). Dual-column solutions rest on the idea that the salt does not need to be secret (grok it). However, a second column introduces a complexity that might not be a luxury if you have a legacy system with one (1) column for the password, where the cost of updating the table and the code could be too high.

    But it is storing the password hashed with a random salt in a single DB column that I need to understand better, with practical application. I like this solution for a couple of reasons: a salt is expected, and it respects legacy boundaries. Here's where I get lost: if the salt is random and hashed with the password, how can the system ever match the password?

    I have a theory on this, and as I type I might be grokking the concept: given a random salt of 128 bytes and a password of 8 bytes ('foobar12'), it could be programmatically possible to remove the part of the hash that was the salt, by hashing a random 128-byte salt and getting the substring of the original hash that is the hashed password, then re-hashing to match using the hash algorithm...??? So... any takers on helping? :) Am I close?
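    For reference, the usual single-column scheme does not recover the salt from the hash (a hash cannot be "un-mixed"); it stores the salt in the clear next to the hash, inside the same column, and at login re-hashes the candidate password with the stored salt and compares. A minimal Java sketch, assuming SHA-256 and an illustrative salt:hash hex encoding in one column:

        import java.security.MessageDigest;
        import java.security.SecureRandom;

        public class SaltedHash {
            // Store "salt:hash" in one DB column; the salt is not secret.
            static String create(String password) throws Exception {
                byte[] salt = new byte[16];
                new SecureRandom().nextBytes(salt);
                return hex(salt) + ":" + hex(digest(salt, password));
            }

            // Verify by re-hashing the candidate with the stored salt.
            static boolean verify(String password, String stored) throws Exception {
                String[] parts = stored.split(":");
                byte[] salt = unhex(parts[0]);
                return parts[1].equals(hex(digest(salt, password)));
            }

            static byte[] digest(byte[] salt, String password) throws Exception {
                MessageDigest md = MessageDigest.getInstance("SHA-256");
                md.update(salt);
                return md.digest(password.getBytes("UTF-8"));
            }

            static String hex(byte[] b) {
                StringBuilder sb = new StringBuilder();
                for (byte x : b) sb.append(String.format("%02x", x));
                return sb.toString();
            }

            static byte[] unhex(String s) {
                byte[] b = new byte[s.length() / 2];
                for (int i = 0; i < b.length; i++)
                    b[i] = (byte) Integer.parseInt(s.substring(2 * i, 2 * i + 2), 16);
                return b;
            }
        }

    No substring trickery is needed: the salt defeats rainbow tables because it is random per user, not because it is secret.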

    Read the article

  • Matplotlib PDF export uses wrong font

    - by Konrad Rudolph
    I want to generate high-quality diagrams for a presentation. I'm using Python's matplotlib to generate the graphics. Unfortunately, the PDF export seems to ignore my font settings. I tried setting the font both by passing a FontProperties object to the text drawing functions and by setting the option globally. For the record, here is an MWE to reproduce the problem:

        import scipy
        import matplotlib
        matplotlib.use('cairo')
        import matplotlib.pylab as pylab
        import matplotlib.font_manager as fm

        data = scipy.arange(5)

        for font in ['Helvetica', 'Gill Sans']:
            fig = pylab.figure()
            ax = fig.add_subplot(111)
            ax.bar(data, data)
            ax.set_xticks(data)
            ax.set_xticklabels(data, fontproperties = fm.FontProperties(family = font))
            pylab.savefig('foo-%s.pdf' % font)

    In both cases, the produced output is identical and uses Helvetica (and yes, I do have both fonts installed). Just to be sure, the following doesn't help either:

        matplotlib.rc('font', family = 'Gill Sans')

    Finally, if I replace the backend, instead using the native viewer:

        matplotlib.use('MacOSX')

    I do get the correct font displayed - but only in the viewer GUI. The PDF output is once again wrong. To be sure: I can set other fonts, but only other classes of font families. I can set serif fonts, or fantasy, or monospace. But all sans-serif fonts seem to default to Helvetica.
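    One thing that may be worth ruling out (a sketch, not a confirmed fix for the cairo backend): matplotlib resolves a family like 'sans-serif' through the font.sans-serif list in rcParams, and an unmatched family name can silently fall back to the first entry in that list, often Helvetica or Arial. Promoting the desired face to the front of the list before any figure is created sometimes changes the PDF output:

        import matplotlib
        matplotlib.use('cairo')

        # Assumed setting: put 'Gill Sans' first in the sans-serif fallback
        # chain so family resolution can find it.
        matplotlib.rcParams['font.family'] = 'sans-serif'
        matplotlib.rcParams['font.sans-serif'] = ['Gill Sans', 'Helvetica']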

    Read the article

  • ASP.NET MVC View ReRenders Part of Itself

    - by Jason
    In all my years of .NET programming I have not run across a bug as weird as this one. I discovered the problem because some elements on the page were getting double-bound by jQuery. After some (ridiculous) debugging, I finally discovered that once the view is completely done rendering itself and all its children partial views, it goes back to an arbitrary yet consistent location and re-renders itself. I have been pulling my hair out about this for two days now and I simply cannot get it to render itself only once!

    For lack of any better debugging idea, I've painstakingly added logging tracers throughout the HTML just so I can pin down what may be causing this. For instance, this code ($log just logs to the console):

        ...
        <script type="text/javascript">var x = 0; $log('1');</script>
        <div id="new-ad-form">
        <script type="text/javascript">x++; $log('1.5', x);</script>
        ...

    will yield:

        ...   <--- this happens before this snippet
        1
        1.5 1
        ...
        10    <--- bottom of my form, after snippet
        1.5 2 <--- beginning of part that runs again!
        ...
        9     <--- this happens after this snippet

    I've searched my codebase high and low, but there is NOTHING that says it should re-render part of a page. I'm wondering if jQueryUI has anything to do with it, as #new-ad-form is the container for a jQueryUI dialog box. If this is potentially the case, here's my init code for that:

        $('#new-ad-form').dialog({
            autoOpen: false,
            modal: true,
            width: 470,
            title: 'Create A New Ad',
            position: ['center', 35],
            close: AdEditor.reset
        });

    Read the article

  • Help, my CentOS servers keep going down, "No route to host" after a random uptime [closed]

    - by user249071
    Hello, I have a couple of CentOS Linux servers with a very simple task: they run nginx + FastCGI for PHP, with some read-only NFS mounts between them. They receive RPC commands from a main server to start some download processes with wget - nothing fancy. But their behavior is very unstable: they simply go down.

    We have tried monitoring RAM, processor usage, even network connections, and they don't load up much: 250 network connections max, 15% processor usage, and memory doesn't even fill up (2.5 GB out of 8 GB max). I have no idea why a Linux server would go down like that. They aren't even public servers - no domain names installed, no public serving of sites.

    The only thing I have discovered is that if I didn't restart the network service every couple of hours or so, the servers became very slow, starting apps very slowly, yet without reporting high resource usage. Maybe CentOS doesn't free timed-out connections, or something like that? It's based on Red Hat, right?

    I'm not a Linux expert, but I'm sure there are a few guys out there who can easily answer this, or at least have some leads on what I can do. I haven't installed snort or other tools to check whether we are under DoS attack; still, the scheduled script that restarts the network each hour should put the system back online, and it doesn't... Thank you in advance.

    Read the article

  • How do I process the configure file when cross-compiling with mingw?

    - by vy32
    I have a small open source program that builds with an autoconf configure script. I ran configure, then tried to compile with:

        make CC="/opt/local/bin/i386-mingw32-g++"

    That didn't work, because the configure script had found include files that are not available to the mingw system. So then I tried:

        ./configure CC="/opt/local/bin/i386-mingw32-g++"

    But that didn't work either; the configure script gives me this error:

        ./configure: line 5209: syntax error near unexpected token `newline'
        ./configure: line 5209: ` *_cv_*'

    because of this code, which is generated when AC_OUTPUT is called:

        # The following way of writing the cache mishandles newlines in values,
        # but we know of no workaround that is simple, portable, and efficient.
        # So, we kill variables containing newlines.
        # Ultrix sh set writes to stderr and can't be redirected directly,
        # and sets the high bit in the cache file unless we assign to the vars.
        (
          for ac_var in `(set) 2>&1 | sed -n 's/^\([a-zA-Z_][a-zA-Z0-9_]*\)=.*/\1/p'`; do
            eval ac_val=\$$ac_var
            case $ac_val in #(
            *${as_nl}*)
              case $ac_var in #(
              *_cv_*

    Any thoughts? Is there a correct way to do this?
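    As an aside, the conventional autoconf route for cross-compiling is to tell configure about the target with --host rather than overriding CC, so that all the compile tests run against the mingw toolchain. A sketch (the triplet is taken from the compiler name above; adjust it to your install):

        # configure will look for i386-mingw32-gcc, i386-mingw32-g++, etc. on PATH.
        PATH=/opt/local/bin:$PATH ./configure --host=i386-mingw32
        make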

    Read the article

  • Poor execution plans when using a filter and CONTAINSTABLE in a query

    - by Paul McLoughlin
    We have an interesting problem that I was hoping someone could help shed some light on. At a high level, the problem is as below. The following query executes quickly (1 second):

        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1
            ON T1.[Key] = SA.UNIQUE_ID

    but if we add a filter to the query, it takes approximately 2 minutes to return:

        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1
            ON T1.[Key] = SA.UNIQUE_ID
        WHERE SA.CHG_DATE > '19 Feb 2010'

    Looking at the execution plans for the two queries, I can see that in the second case there are two places with huge differences between the actual and estimated number of rows:

    1) the FulltextMatch table-valued function, where the estimate is approx 22,000 rows and the actual is 29 million rows (which are then filtered down to 1670 rows before the join), and

    2) the index seek on the full-text index, where the estimate is 1 row and the actual is 13,000 rows.

    As a result of the estimates, the optimiser chooses a nested loops join (since it assumes a small number of rows), hence the plan is inefficient. We can work around the problem either by (a) parameterising the query and adding OPTION (OPTIMIZE FOR UNKNOWN), or (b) forcing a HASH JOIN. In both cases the query returns in under 1 second and the estimates appear reasonable.

    My question really is: why are the estimates in the poorly performing case so wildly inaccurate, and what can be done to improve them? Statistics are up to date on the indexes of the indexed view being used here. Any help greatly appreciated.
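    For concreteness, the two workarounds mentioned look roughly like this in T-SQL (sketches; @chgDate is an illustrative parameter name):

        -- (a) parameterise the filter and stop the optimiser sniffing the value
        DECLARE @chgDate DATETIME;
        SET @chgDate = '19 Feb 2010';
        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1
            ON T1.[Key] = SA.UNIQUE_ID
        WHERE SA.CHG_DATE > @chgDate
        OPTION (OPTIMIZE FOR UNKNOWN);

        -- (b) force the hash join the estimates talk the optimiser out of
        SELECT SA.*
        FROM cg.SEARCHSERVER_ACTYS AS SA
        INNER HASH JOIN CONTAINSTABLE(CG.SEARCHSERVER_ACTYS, NOTE, 'reports') AS T1
            ON T1.[Key] = SA.UNIQUE_ID
        WHERE SA.CHG_DATE > '19 Feb 2010';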

    Read the article

  • Is the Windows dev environment worth the cost?

    - by MCS
    I recently made the move from Linux development to Windows development. And as much of a Linux enthusiast as I am, I have to say: C# is a beautiful language, Visual Studio is terrific, and now that I've bought myself a trackball my wrist has stopped hurting from using the mouse so much.

    But there's one thing I can't get past: the cost. Windows 7, Visual Studio, SQL Server, Expression Blend, ViEmu, Telerik, MSDN - we're talking thousands for each developer on the project! You're definitely getting something for your money - my question is, is it worth it?

    [Not every developer needs all the aforementioned tools - but have you ever heard of anyone writing C# code without Visual Studio? I've worked on pretty large software projects in Linux without having to pay for any development tool whatsoever.]

    Now obviously, if you're already a Windows shop, it doesn't pay to retrain all your developers. And if you're looking to develop a Windows desktop app, you just can't do that in Linux. But if you were starting a new web application project and could hire developers who are experts in whatever languages you want, would you still choose Windows as your development platform despite the high cost? And if yes, why?

    Read the article

  • Reasonable expectation to support new Operating Systems?

    - by Neil N
    My company has a desktop app originally developed for Windows XP. The original programmer has since been fired (fired with extreme prejudice, I might add). I have fixed the app various times but overall try to avoid it; it is a mess, and the only real way to fix it is to completely rewrite it, which could take a year.

    We have been trying to "forget" about this app and instead steer clients towards our web version, which is more up to date, easier to maintain, easier to extend, and WAY easier to support. Most clients agree: the web version is just better all around. However, we have one client that insists on using the desktop app.

    The app required a little duct tape to get working on Vista, but now completely breaks on Windows 7. I'm not even sure WHAT all the fixes are to get it working on Win7 (the current time estimate stands at "miracle"), but after both installing the RELEASE build and running the DEBUG build from Visual Studio, the app has errors on nearly every user action, and from what I can see from a high-level test run, none of them are related.

    Since Windows 7 did not exist when this app was developed, is my company really expected to make all the required changes to make it function as "smoothly" as it did on XP?

    Read the article

  • Fetching real-time data from Excel

    - by Umesh Sharma
    I am seriously looking for your valuable help, first time here. If possible, please help me.

    I am developing a VB.NET app in which I read "real-time data" from an Excel sheet using "Microsoft.Office.Interop.Excel", i.e. Excel automation. All cells in the Excel sheet fetch stock data from some local DDE server, like "=XYZ|Bid!GOLD", "=XYZ|Bid!SILVER", "=XYZ|Ask!SILVER" and so on. Some cells also have fixed values like "Symbol", "Bid Rate", "32.90" etc. The values of the DDE-mapped cells (i.e. =XYZ|xxxx!yyy) are continuously changing.

    THE PROBLEM is here: "FIXED values" from Excel cells come through to my app quite OK, but all DDE-mapped cell values come through as "-2146826246" (when the data source's local DDE server is ON) or "-2146826265" (OFF). Although, if I use C#.NET it's all OK - just not with VB.NET.

    I want to display a range of the Excel sheet (A1 to J50) in a VB.NET ListView, refreshing every 200 ms (5 times every second).

    ================ Important ================

    Is it possible to BIND ListView items/column values to Excel cells or to some local memory variables? Currently I am reading the Excel sheet cell by cell and trying to put the values into the .NET ListView, but CPU usage is very high and it is a very slow process. If yes, then how, please? I am a VFP developer but new to .NET. It's very easy in VFP, so why not in .NET? Please guide me if someone has the solution...

    Read the article

  • Caching page by parts; how to pass variables calculated in cached parts into never-cached parts?

    - by Kirzilla
    Hello. Let's imagine that I have code like this:

        if (!$data = $cache->load("part1_cache_id")) {
            $item_id = $model->getItemId();
            ob_start();
            echo 'Here is the cached item id: '.$item_id;
            $data = ob_get_contents();
            ob_end_clean();
            $cache->save($data, "part1_cache_id");
        }
        echo $data;

        echo never_cache_function($item_id);

        if (!$data_2 = $cache->load("part2_cache_id")) {
            ob_start();
            echo 'Here is the another cached part of the page...';
            $data_2 = ob_get_contents();
            ob_end_clean();
            $cache->save($data_2, "part2_cache_id");
        }
        echo $data_2;

    As you can see, I need to pass the $item_id variable into never_cache_function, but if the first part is cached ("part1_cache_id") then I have no way to get the $item_id value.

    The only solution I see is to serialize all the data from the first part (including the $item_id value), cache the serialized string, and unserialize it every time the script is executed. Something like this:

        if (!$data = $cache->load("part1_cache_id")) {
            $item_id = $model->getItemId();
            $data['item_id'] = $item_id;
            ob_start();
            echo 'Here is the cached item id: '.$item_id;
            $data['html'] = ob_get_contents();
            ob_end_clean();
            $cache->save(serialize($data), "part1_cache_id");
        } else {
            $data = unserialize($data);
        }
        echo $data['html'];
        echo never_cache_function($data['item_id']);

    Is there any other way to do this trick? I'm looking for the most high-performance solution. Thank you.

    UPDATE: Another question: how do I implement such caching in a controller without separating the page into two templates? Is it possible?

    PS: Please do not suggest Smarty; I'm really interested in implementing custom caching.

    Read the article

  • verilog / systemverilog -- What is the behavior of blocking statements across two always blocks?

    - by miles.sherman
    I am wondering about the behavior of the code below. There are two always blocks: one is combinational, to calculate the next_state signal; the other is sequential, which performs some logic and determines whether or not to shut down the system. It does this by setting the shutdown_now signal high and then calling state <= next_state.

    My question is: if the conditions become true such that shutdown_now is set (during clock cycle n) in a blocking manner before the state <= next_state line, will the state during clock cycle n+1 be SHUTDOWN or RUNNING? In other words, does the shutdown_now = 1'b1 line block across both state machines, since the state signal depends on it through the next_state determination?

        enum {IDLE, RUNNING, SHUTDOWN} state, next_state;
        logic shutdown_now;

        // State machine (combinational)
        always_comb begin
            case (state)
                IDLE:     next_state <= RUNNING;
                RUNNING:  next_state <= shutdown_now ? SHUTDOWN : RUNNING;
                SHUTDOWN: next_state <= SHUTDOWN;
                default:  next_state <= SHUTDOWN;
            endcase
        end

        // Sequential behavior
        always_ff @ (posedge clk) begin
            // Some code here
            if (/* some condition */) begin
                shutdown_now = 1'b0;
            end else begin
                shutdown_now = 1'b1;
            end
            state <= next_state;
        end

    Read the article

  • Classification: Dealing with Abstain/Rejected Class

    - by abner.ayala
    I am asking for your input and/or help on a classification problem; if anyone has references I can read to help me solve my problem, even better.

    I have a classification problem with four discrete and very well separated classes. However, my input is continuous and has a high frequency (50 Hz), since it is a real-time problem. The circles represent the clusters of the classes, the blue line the decision boundary, and class 5 is the neutral/resting "do nothing" class. This class is the rejected class.

    The problem is that when I move from one class to another, I trigger a lot of false positives during the transition movements, since the movement is clearly non-linear. For example, every time I move from class 5 (the neutral class) to class 1, I first see a lot of 3's before getting to the 1 class. Ideally, I would want my decision boundary to look like the one in the picture below, where the rejected class is class 5: it has a higher decision boundary than the other classes, to avoid misclassification during transitions.

    I am currently implementing my algorithm in Matlab, using optimized naive Bayes, kNN, and SVM algorithms.

    Question: What is the best/most common way to handle an abstain/rejected class? Should I use fuzzy logic or a loss function, or should I include the resting cluster in the training?

    Read the article

  • Automatically grow document view of NSScrollView using auto layout?

    - by Monolo
    Is there a simple way to get an NSScrollView to adapt to its document view changing size when using auto layout (the Lion feature)?

    I have tried calling both setNeedsUpdateConstraints: and setNeedsLayout: on the document view, the clip view, and the scroll view, without any results. fittingSize of the document view reports the correct size. An NSPopover in conjunction with an NSViewController handles this nicely, with the popover growing and shrinking as needed, and I was hoping to get a similarly simple and robust behaviour with the scroll view. I have checked the documentation for scroll views, but it doesn't seem to be updated for auto layout.

    Edited to clarify: The problem I experience is that the document view, which holds subviews, is not resized when the subviews change their size, even if they call invalidateIntrinsicContentSize. The contents of the document view are hence clipped to the original size of the document view as they grow. The document view is created in a nib and set as the scroll view's document view in an awakeFromNib method.

    What I hoped for was that the document view's frame would automatically be adjusted when its fittingSize changes, and the scroll bars updated accordingly. NSPopover does something similar, provided that the subviews of the content controller's view have their constraints set right and the various content hugging values are high enough (higher than the hidden popover window's height constraint priority, for one).

    Read the article

  • How to get height for NSAttributedString at a fixed width

    - by bonaldi
    I want to do some drawing of NSAttributedStrings in fixed-width boxes, but am having trouble calculating the right height they'll take up when drawn. So far, I've tried:

    Calling -(NSSize)size, but the results are useless (for this purpose), as they'll give whatever width the string desires.

    Calling -(void)drawWithRect:(NSRect)rect options:(NSStringDrawingOptions)options with a rect shaped to the width I want and NSStringDrawingUsesLineFragmentOrigin in the options, exactly as I'm using in my drawing. The results are ... difficult to understand; certainly not what I'm looking for. (As is pointed out in a number of places, including this Cocoa-Dev thread.)

    Creating a temporary NSTextView and doing:

        [[tmpView textStorage] setAttributedString:aString];
        [tmpView setHorizontallyResizable:NO];
        [tmpView sizeToFit];

    When I query the frame of tmpView, the width is still as desired, and the height is often correct ... until I get to longer strings, when it's often half the size that's required. (There doesn't seem to be a max size being hit: one frame will be 273.0 high (about 300 too short), the other will be 478.0 (only 60-ish too short).)

    I'd appreciate any pointers, if anyone else has managed this.

    Read the article

  • What is the correct JNA mapping for UniChar on Mac OS X?

    - by Trejkaz
    I have a C struct like this:

        struct HFSUniStr255 {
            UInt16 length;
            UniChar unicode[255];
        };

    I have mapped this in the expected way:

        public class HFSUniStr255 extends Structure {
            public UInt16 length; // UInt16 is just an IntegerType with length 2 for convenience.
            public /*UniChar*/ char[] unicode = new char[255];
            //public /*UniChar*/ byte[] unicode = new byte[255*2];
            //public /*UniChar*/ UInt16[] unicode = new UInt16[255];

            public HFSUniStr255() {
            }

            public HFSUniStr255(Pointer pointer) {
                super(pointer);
            }
        }

    If I use this version, I get every second character of the string in my char[] ("aits D" for "Macintosh HD"). I am assuming that this is something to do with being on a 64-bit platform, and JNA mapping the value to a 32-bit wchar_t but then chopping off the high 16 bits of each wchar_t on copying them back.

    If I use the byte[] version, I get data which decodes correctly using the UTF-16LE charset. If I use the UInt16[] version, I get the right code point for each character, but it is then inconvenient to convert them back into a string.

    Is there some way I can define my type as char[] and yet have it convert correctly?
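    Since the byte[] variant already round-trips correctly, one stopgap (a sketch; getUnicodeString is a hypothetical helper, and it assumes the little-endian byte order observed above) is to keep the byte[255*2] mapping and decode on access:

        // Added to HFSUniStr255 when 'unicode' is mapped as byte[255*2]:
        // decode the first 'length' UTF-16 code units (2 bytes each).
        public String getUnicodeString() {
            try {
                return new String(unicode, 0, length.intValue() * 2, "UTF-16LE");
            } catch (java.io.UnsupportedEncodingException e) {
                throw new RuntimeException(e); // UTF-16LE is a guaranteed charset
            }
        }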

    Read the article

  • proper use of volatile keyword

    - by luke
    I think I have a pretty good idea about the volatile keyword in Java, but I'm thinking about refactoring some code and I thought it would be a good idea to use it.

    I have a class that basically works as a DB cache. It holds a bunch of objects that it has read from a database, serves requests for those objects, and occasionally refreshes from the database (based on a timeout). Here's the skeleton:

        public class Cache {
            private HashMap mappings = ....;
            private long last_update_time;

            private void loadMappingsFromDB() {
                //....
            }

            private void checkLoad() {
                if (System.currentTimeMillis() - last_update_time > TIMEOUT)
                    loadMappingsFromDB();
            }

            public Data get(ID id) {
                checkLoad();
                //.. look it up
            }
        }

    The concern is that loadMappingsFromDB could be a high-latency operation, and that's not acceptable. So initially I thought I could spin up a thread on cache startup and have it sleep and then update the cache in the background. But then I would need to synchronize my class (or the map), and I would just be trading an occasional big pause for making every cache access slower.

    Then I thought: why not use volatile? I could define the map reference as volatile:

        private volatile HashMap mappings = ....;

    and then in get (or anywhere else that uses the mappings variable) I would just make a local copy of the reference:

        public Data get(ID id) {
            HashMap local = mappings;
            //.. look it up using local
        }

    The background thread would then load into a temporary map and swap the references in the class:

        HashMap tmp;
        //load tmp from DB
        mappings = tmp; //swap variables, forcing a write barrier

    Does this approach make sense? And is it actually thread-safe?
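    As a point of comparison, here is a compilable sketch of the pattern described (String keys stand in for the ID/Data types, and the refresh interval is illustrative):

        import java.util.HashMap;
        import java.util.Map;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;

        public class DbCache {
            // Readers always see a fully built map: the reference swap is the
            // only mutation, and volatile guarantees its visibility.
            private volatile Map<String, String> mappings = new HashMap<String, String>();

            public DbCache(long refreshSeconds) {
                Executors.newSingleThreadScheduledExecutor().scheduleAtFixedRate(
                    new Runnable() {
                        public void run() {
                            Map<String, String> tmp = loadMappingsFromDB();
                            mappings = tmp; // publish the new snapshot
                        }
                    }, 0, refreshSeconds, TimeUnit.SECONDS);
            }

            public String get(String id) {
                return mappings.get(id); // one volatile read; the map itself is never mutated
            }

            private Map<String, String> loadMappingsFromDB() {
                Map<String, String> tmp = new HashMap<String, String>();
                // ... fill tmp from the database ...
                return tmp;
            }
        }

    The pattern is only safe if a map is never mutated after its reference is published; each refresh must build a brand-new map.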

    Read the article

  • What is optimal hardware configuration for heavy load LAMP application

    - by Piotr Kochanski
    I need to run a Linux-Apache-PHP-MySQL application (the Moodle e-learning platform) for a large number of concurrent users - I am aiming at 5000. By concurrent I mean that 5000 people should be able to work with the application at the same time, and "work" means not only database reads but writes as well.

    The application is not very typical: it does a lot of inserts/updates on the database, so caching techniques don't help much. We are using the InnoDB storage engine. In addition, the application is not written with performance in mind; for instance, one Apache thread usually occupies about 30-50 MB of RAM.

    I would be grateful for information on what hardware is needed to build a scalable configuration that can handle this kind of load. We are currently using two HP DL380s, each with two 4-core processors, which handle a much lower load (typically 300-500 concurrent users). Is it reasonable to invest in this kind of box and build a cluster out of them, or is it better to go with more high-end hardware? I am particularly curious about:

    - how many servers are needed, and how powerful (number of processors/cores, size of RAM)
    - what network equipment should be used (what kind of switches, network cards)
    - any other hardware, like particular disc storage solutions, etc., that is needed

    Another question is how to put everything together, that is, what the most optimal architecture is. Clustering with MySQL is rather hard (people are complaining about MySQL Cluster, even here on Stack Overflow).

    Read the article

  • Copy Small Bitmaps on to Large Bitmap with Transparency Blend: What is faster than graphics.DrawImage?

    - by Glenn
    I have identified this call as a bottleneck in a high-pressure function:

        graphics.DrawImage(smallBitmap, x, y);

    Is there a faster way to blend small semi-transparent bitmaps into a larger semi-transparent one? Example usage:

        XY[] locations = GetLocs();
        Bitmap[] bitmaps = GetBmps(); // small images, sizes vary, approx 30px x 30px

        using (Bitmap large = new Bitmap(500, 500, PixelFormat.Format32bppPArgb))
        {
            using (Graphics largeGraphics = Graphics.FromImage(large))
            {
                for (var i = 0; i < largeNumber; i++)
                {
                    // this is the bottleneck
                    largeGraphics.DrawImage(bitmaps[i], locations[i].x, locations[i].y);
                }
            }

            var done = new MemoryStream();
            large.Save(done, ImageFormat.Png);
            done.Position = 0;
            return (done);
        }

    The DrawImage calls take small 32bppPArgb bitmaps and copy them into a larger bitmap at locations that vary, and the small bitmaps might only partially overlap the larger bitmap's visible area. Both images have semi-transparent contents that get blended by DrawImage in a way that is important to the output.

    I've done some testing with BitBlt but haven't seen a significant speed improvement, and the alpha blending didn't come out the same in my tests. I'm open to just about any method, including a better call to BitBlt or unsafe C# code.
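    Before leaving GDI+ entirely, it may be worth checking what quality settings the Graphics object is paying for. These System.Drawing.Drawing2D knobs are real, but whether they help in this particular blend is an assumption to test:

        // Hypothetical tuning, applied once before the DrawImage loop.
        largeGraphics.CompositingMode = CompositingMode.SourceOver;      // keep alpha blending
        largeGraphics.CompositingQuality = CompositingQuality.HighSpeed; // cheaper blend path
        largeGraphics.InterpolationMode = InterpolationMode.NearestNeighbor;
        largeGraphics.SmoothingMode = SmoothingMode.None;
        largeGraphics.PixelOffsetMode = PixelOffsetMode.None;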

    Read the article

  • Returning large collections from WCF Service

    - by Nate Bross
    I'm trying to determine the best approach for building a WCF service, and the area I'm struggling with most is returning lists of objects. The built-in maxMessageSize of 64K seems pretty high, and I really don't want to bump it up (quick googling finds hundreds of places bumping maxMessageSize up to the multi-gigabyte range, which seems foolish). But when I'm returning a collection of objects (~150 items), I am exceeding the default 64K.

    I'm almost to the point of returning my own class, which implements IEnumerable and has properties for hasNext, hasPrevious and PageSize, so that I can implement paging on the client side - but this seems like a lot of code. The other option is to jack up the maxMessageSize and hope for the best, but that feels wrong. All other aspects of my service are working great; it's just returning large collections where I'm having issues.

    For background, there are two types of consumers of this service: UI applications, which will be primarily web and/or WPF applications, and data processing applications - .NET console apps and maybe some other non-UI apps. For the UI applications, I would like to stay responsive and keep the message size low; for the console apps it doesn't matter as much, as they are just pulling data down to process and push back up to the service.
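    For scale, the paging wrapper described above need not be much code. Here is one possible shape (a sketch - PagedResult and its members are illustrative names, not an established WCF type):

        using System.Collections.Generic;
        using System.Runtime.Serialization;

        [DataContract]
        public class PagedResult<T>
        {
            [DataMember] public List<T> Items { get; set; }
            [DataMember] public int PageSize { get; set; }
            [DataMember] public int PageNumber { get; set; }
            [DataMember] public bool HasNext { get; set; }
            [DataMember] public bool HasPrevious { get; set; }
        }

        // Service operation shape: clients pull one page at a time, keeping
        // each message comfortably under the 64K default.
        // PagedResult<Widget> GetWidgets(int pageNumber, int pageSize);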

    Read the article

  • Is there an STL- and UTF-8-friendly C++ wrapper for ICU, or another powerful Unicode library?

    - by artyom
    Hello, I need a good Unicode library for C++. I need:

    Transformations in a Unicode-sensitive way. For example, sort all strings case-insensitively and get their first characters for an index; convert various Unicode strings to upper and to lower case.

    Splitting text at reasonable positions - into words, in a way that works for Chinese and Japanese as well.

    Formatting numbers and dates in a locale-sensitive way (should be thread-safe).

    Transparent support of UTF-8 (the primary internal representation).

    As far as I know, the best library is ICU. However, I can't find normal, developer-friendly API documentation with examples. Also, as far as I can see, it is not too friendly with modern C++ design, working with the STL, and so on. I would like something like this:

        std::string msg;
        unistring umsg.from_utf8(msg);
        unistring::word_iterator wi;
        for(wi=umsg.words().begin(),n=0; wi!=umsg.words().wi_end(),n<10; ++wi,++n)
            ;
        msg=umsg.substr(umsg.words().begin(),wi).to_utf8();
        cout<<_("First 10 words are ")<<msg;

    Does anybody know a good STL-friendly ICU wrapper released under an open source license, preferably a permissive one like MIT or Boost, though others that are LGPLv2-compatible are OK as well? Is there another high-quality library similar to ICU? Platform: UNIX/POSIX; Windows support is not required.

    Thanks, Artyom

    Edit: Unfortunately I wasn't logged in, so I can't mark the answer accepted... I attached the answer myself.

    Read the article

  • use proxy in python to fetch a webpage

    - by carmao
    I am trying to write a function in Python to fetch a webpage through a public anonymous proxy, but I get a rather strange error. The code (I have Python 2.4):

        import urllib2

        def get_source_html_proxy(url, pip, timeout):
            # timeout in seconds (the maximum number of seconds the code is willing
            # to wait in case there is a proxy that is not working; then it gives up)
            proxy_handler = urllib2.ProxyHandler({'http': pip})
            opener = urllib2.build_opener(proxy_handler)
            opener.addheaders = [('User-agent', 'Mozilla/5.0')]
            urllib2.install_opener(opener)
            req = urllib2.Request(url)
            sock = urllib2.urlopen(req)
            # a counter that is going to measure the time until the result
            # (webpage) is returned
            timp = 0
            while 1:
                data = sock.read(1024)
                timp = timp + 1
                if len(data) < 1024:
                    break
                timpLimita = 50000000 * timeout
                if timp == timpLimita:  # 5 million is about 1 second
                    break
            if timp == timpLimita:
                print IPul + ": Connection is working, but the webpage is fetched in more than 50 seconds. This proxy returns the following IP: " + str(data)
                return str(data)
            else:
                print "This proxy " + IPul + " = good proxy. It returns the following IP: " + str(data)
                return str(data)

        # Now, I call the function to test it for one single proxy (IP:port) that
        # does not support user and password (a public high-anonymity proxy).
        # (I put a proxy that I know is working - slow, but working.)
        rez = get_source_html_proxy("http://www.whatismyip.com/automation/n09230945.asp", "93.84.221.248:3128", 50)
        print rez

    The error:

        Traceback (most recent call last):
          File "./public_html/cgi-bin/teste5.py", line 43, in ?
            rez=get_source_html_proxy("http://www.whatismyip.com/automation/n09230945.asp", "93.84.221.248:3128", 50)
          File "./public_html/cgi-bin/teste5.py", line 18, in get_source_html_proxy
            sock=urllib2.urlopen(req)
          File "/usr/lib64/python2.4/urllib2.py", line 130, in urlopen
            return _opener.open(url, data)
          File "/usr/lib64/python2.4/urllib2.py", line 358, in open
            response = self._open(req, data)
          File "/usr/lib64/python2.4/urllib2.py", line 376, in _open
            '_open', req)
          File "/usr/lib64/python2.4/urllib2.py", line 337, in _call_chain
            result = func(*args)
          File "/usr/lib64/python2.4/urllib2.py", line 573, in <lambda>
            lambda r, proxy=url, type=type, meth=self.proxy_open: \
          File "/usr/lib64/python2.4/urllib2.py", line 580, in proxy_open
            if '@' in host:
        TypeError: iterable argument required

    I do not know why the character "@" is an issue (there is no such character in my code - should there be?). Thanks in advance for your valuable help.
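    One guess worth testing, inferred from the traceback rather than verified against Python 2.4: urllib2 parses the proxy string with splittype/splithost, and when no scheme is present the parsed host can come back as None, which is exactly what makes the later `'@' in host` test raise "TypeError: iterable argument required". Passing the scheme explicitly sidesteps that path:

        # Assumed fix: give the proxy an explicit scheme so urllib2's
        # proxy parsing yields a real host string instead of None.
        proxy_handler = urllib2.ProxyHandler({'http': 'http://' + pip})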

    Read the article

  • Optimizing processing and management of large Java data arrays

    - by mikera
    I'm writing some pretty CPU-intensive, concurrent numerical code that will process large amounts of data stored in Java arrays (e.g. lots of double[100000]s). Some of the algorithms might run millions of times over several days, so getting maximum steady-state performance is a high priority. In essence, each algorithm is a Java object with a method API something like:

        public double[] runMyAlgorithm(double[] inputData);

    or alternatively a reference could be passed to the array that stores the output data:

        public void runMyAlgorithm(double[] inputData, double[] outputData);

    Given this requirement, I'm trying to determine the optimal strategy for allocating / managing array space. Frequently the algorithms will need large amounts of temporary storage space. They will also take large arrays as input and create large arrays as output. Among the options I am considering are:

    1. Always allocate new arrays as local variables whenever they are needed (e.g. new double[100000]). Probably the simplest approach, but it will produce a lot of garbage.

    2. Pre-allocate temporary arrays and store them as final fields in the algorithm object - the big downside being that only one thread could run the algorithm at any one time.

    3. Keep pre-allocated temporary arrays in ThreadLocal storage, so that a thread can use a fixed amount of temporary array space whenever it needs it. ThreadLocal would be required since multiple threads will run the same algorithm simultaneously. (A sketch of this option follows below.)

    4. Pass around lots of arrays as parameters (including the temporary arrays for the algorithm to use). Not good, since it makes the algorithm API extremely ugly if the caller has to be responsible for providing temporary array space...

    5. Allocate extremely large arrays (e.g. double[10000000]) and provide each algorithm with offsets into the array, so that different threads use different areas of the array independently. This obviously requires some code to manage the offsets and the allocation of array ranges.

    Any thoughts on which approach would be best (and why)?
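    For option 3, the ThreadLocal plumbing is small. A sketch (the buffer size and names are illustrative):

        // One scratch buffer per thread, created lazily on first use.
        private static final ThreadLocal<double[]> SCRATCH =
            new ThreadLocal<double[]>() {
                @Override protected double[] initialValue() {
                    return new double[100000];
                }
            };

        public double[] runMyAlgorithm(double[] inputData) {
            double[] tmp = SCRATCH.get(); // no allocation after the first call
            // ... use tmp for intermediate results; never let it escape ...
            double[] output = new double[inputData.length];
            // ... fill output from inputData and tmp ...
            return output;
        }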

    Read the article

  • Simplifying Testing through design considerations while utilizing dependency injection

    - by Adam Driscoll
    We are a few months into a green-field project to rework the logic and business layers of our product. By utilizing MEF (dependency injection) we have achieved high levels of code coverage, and I believe that we have a pretty solid product. As we have worked through some of the more complex logic, however, I have found it increasingly difficult to unit test. We utilize the CompositionContainer to query for types required by these complex algorithms.

    My unit tests are sometimes difficult to follow due to the lengthy mock-object setup process that must take place, just right, for certain circumstances to be verified. My unit tests often take me longer to write than the code I'm trying to test. I realize this is not only an issue with dependency injection but with design as a whole. Is poor method design, or a lack of composition, to blame for my overly complex tests?

    I've tried base-classing tests, creating commonly used mock objects, and ensuring that I utilize the container as much as possible to ease this issue, but my tests always end up quite complex and hard to debug. What are some tips you've seen to keep such tests concise, readable, and effective?

    Read the article

  • undefined reference to function in C, despite having defined it

    - by Jamie Edwards
    I'm following a tutorial, but when it comes to compiling and linking the code I get the following error:

        /tmp/cc8gRrVZ.o: In function `main':
        main.c:(.text+0xa): undefined reference to `monitor_clear'
        main.c:(.text+0x16): undefined reference to `monitor_write'
        collect2: ld returned 1 exit status
        make: *** [obj/main.o] Error 1

    What that is telling me is that I haven't defined both 'monitor_clear' and 'monitor_write'. But I have, in both the header and source files. They are as follows:

    monitor.c:

        // monitor.c -- Defines functions for writing to the monitor.
        // heavily based on Bran's kernel development tutorials,
        // but rewritten for JamesM's kernel tutorials.

        #include "monitor.h"

        // The VGA framebuffer starts at 0xB8000.
        u16int *video_memory = (u16int *)0xB8000;
        // Stores the cursor position.
        u8int cursor_x = 0;
        u8int cursor_y = 0;

        // Updates the hardware cursor.
        static void move_cursor()
        {
            // The screen is 80 characters wide...
            u16int cursorLocation = cursor_y * 80 + cursor_x;
            outb(0x3D4, 14);                  // Tell the VGA board we are setting the high cursor byte.
            outb(0x3D5, cursorLocation >> 8); // Send the high cursor byte.
            outb(0x3D4, 15);                  // Tell the VGA board we are setting the low cursor byte.
            outb(0x3D5, cursorLocation);      // Send the low cursor byte.
        }

        // Scrolls the text on the screen up by one line.
        static void scroll()
        {
            // Get a space character with the default colour attributes.
            u8int attributeByte = (0 /*black*/ << 4) | (15 /*white*/ & 0x0F);
            u16int blank = 0x20 /* space */ | (attributeByte << 8);

            // Row 25 is the end, this means we need to scroll up
            if (cursor_y >= 25)
            {
                // Move the current text chunk that makes up the screen
                // back in the buffer by a line
                int i;
                for (i = 0*80; i < 24*80; i++)
                {
                    video_memory[i] = video_memory[i+80];
                }

                // The last line should now be blank. Do this by writing
                // 80 spaces to it.
                for (i = 24*80; i < 25*80; i++)
                {
                    video_memory[i] = blank;
                }
                // The cursor should now be on the last line.
                cursor_y = 24;
            }
        }

        // Writes a single character out to the screen.
        void monitor_put(char c)
        {
            // The background colour is black (0), the foreground is white (15).
            u8int backColour = 0;
            u8int foreColour = 15;

            // The attribute byte is made up of two nibbles - the lower being the
            // foreground colour, and the upper the background colour.
            u8int attributeByte = (backColour << 4) | (foreColour & 0x0F);
            // The attribute byte is the top 8 bits of the word we have to send to the
            // VGA board.
            u16int attribute = attributeByte << 8;
            u16int *location;

            // Handle a backspace, by moving the cursor back one space
            if (c == 0x08 && cursor_x)
            {
                cursor_x--;
            }
            // Handle a tab by increasing the cursor's X, but only to a point
            // where it is divisible by 8.
            else if (c == 0x09)
            {
                cursor_x = (cursor_x+8) & ~(8-1);
            }
            // Handle carriage return
            else if (c == '\r')
            {
                cursor_x = 0;
            }
            // Handle newline by moving cursor back to left and increasing the row
            else if (c == '\n')
            {
                cursor_x = 0;
                cursor_y++;
            }
            // Handle any other printable character.
            else if (c >= ' ')
            {
                location = video_memory + (cursor_y*80 + cursor_x);
                *location = c | attribute;
                cursor_x++;
            }

            // Check if we need to insert a new line because we have reached the end
            // of the screen.
            if (cursor_x >= 80)
            {
                cursor_x = 0;
                cursor_y++;
            }

            // Scroll the screen if needed.
            scroll();
            // Move the hardware cursor.
            move_cursor();
        }

        // Clears the screen, by copying lots of spaces to the framebuffer.
        void monitor_clear()
        {
            // Make an attribute byte for the default colours
            u8int attributeByte = (0 /*black*/ << 4) | (15 /*white*/ & 0x0F);
            u16int blank = 0x20 /* space */ | (attributeByte << 8);

            int i;
            for (i = 0; i < 80*25; i++)
            {
                video_memory[i] = blank;
            }

            // Move the hardware cursor back to the start.
            cursor_x = 0;
            cursor_y = 0;
            move_cursor();
        }

        // Outputs a null-terminated ASCII string to the monitor.
        void monitor_write(char *c)
        {
            int i = 0;
            while (c[i])
            {
                monitor_put(c[i++]);
            }
        }

        void monitor_write_hex(u32int n)
        {
            s32int tmp;
            monitor_write("0x");
            char noZeroes = 1;

            int i;
            for (i = 28; i > 0; i -= 4)
            {
                tmp = (n >> i) & 0xF;
                if (tmp == 0 && noZeroes != 0)
                {
                    continue;
                }
                if (tmp >= 0xA)
                {
                    noZeroes = 0;
                    monitor_put(tmp-0xA+'a');
                }
                else
                {
                    noZeroes = 0;
                    monitor_put(tmp+'0');
                }
            }

            tmp = n & 0xF;
            if (tmp >= 0xA)
            {
                monitor_put(tmp-0xA+'a');
            }
            else
            {
                monitor_put(tmp+'0');
            }
        }

        void monitor_write_dec(u32int n)
        {
            if (n == 0)
            {
                monitor_put('0');
                return;
            }

            s32int acc = n;
            char c[32];
            int i = 0;
            while (acc > 0)
            {
                c[i] = '0' + acc%10;
                acc /= 10;
                i++;
            }
            c[i] = 0;

            char c2[32];
            c2[i--] = 0;
            int j = 0;
            while (i >= 0)
            {
                c2[i--] = c[j++];
            }
            monitor_write(c2);
        }

    monitor.h:

        // monitor.h -- Defines the interface for monitor.h
        // From JamesM's kernel development tutorials.

        #ifndef MONITOR_H
        #define MONITOR_H

        #include "common.h"

        // Write a single character out to the screen.
        void monitor_put(char c);

        // Clear the screen to all black.
        void monitor_clear();

        // Output a null-terminated ASCII string to the monitor.
        void monitor_write(char *c);

        #endif // MONITOR_H

    common.c:

        // common.c -- Defines some global functions.
        // From JamesM's kernel development tutorials.

        #include "common.h"

        // Write a byte out to the specified port.
        void outb(u16int port, u8int value)
        {
            asm volatile ("outb %1, %0" : : "dN" (port), "a" (value));
        }

        u8int inb(u16int port)
        {
            u8int ret;
            asm volatile ("inb %1, %0" : "=a" (ret) : "dN" (port));
            return ret;
        }

        u16int inw(u16int port)
        {
            u16int ret;
            asm volatile ("inw %1, %0" : "=a" (ret) : "dN" (port));
            return ret;
        }

        // Copy len bytes from src to dest.
        void memcpy(u8int *dest, const u8int *src, u32int len)
        {
            const u8int *sp = (const u8int *)src;
            u8int *dp = (u8int *)dest;
            for (; len != 0; len--) *dp++ = *sp++;
        }

        // Write len copies of val into dest.
        void memset(u8int *dest, u8int val, u32int len)
        {
            u8int *temp = (u8int *)dest;
            for (; len != 0; len--) *temp++ = val;
        }

        // Compare two strings. Should return -1 if
        // str1 < str2, 0 if they are equal or 1 otherwise.
        int strcmp(char *str1, char *str2)
        {
            int i = 0;
            int failed = 0;
            while (str1[i] != '\0' && str2[i] != '\0')
            {
                if (str1[i] != str2[i])
                {
                    failed = 1;
                    break;
                }
                i++;
            }
            // Why did the loop exit?
            if ((str1[i] == '\0' && str2[i] != '\0') || (str1[i] != '\0' && str2[i] == '\0'))
                failed = 1;

            return failed;
        }

        // Copy the NULL-terminated string src into dest, and
        // return dest.
        char *strcpy(char *dest, const char *src)
        {
            do
            {
                *dest++ = *src++;
            }
            while (*src != 0);
        }

        // Concatenate the NULL-terminated string src onto
        // the end of dest, and return dest.
        char *strcat(char *dest, const char *src)
        {
            while (*dest != 0)
            {
                *dest = *dest++;
            }

            do
            {
                *dest++ = *src++;
            }
            while (*src != 0);
            return dest;
        }

    common.h:

        // common.h -- Defines typedefs and some global functions.
        // From JamesM's kernel development tutorials.

        #ifndef COMMON_H
        #define COMMON_H

        // Some nice typedefs, to standardise sizes across platforms.
        // These typedefs are written for 32-bit x86.
        typedef unsigned int   u32int;
        typedef          int   s32int;
        typedef unsigned short u16int;
        typedef          short s16int;
        typedef unsigned char  u8int;
        typedef          char  s8int;

        void outb(u16int port, u8int value);
        u8int inb(u16int port);
        u16int inw(u16int port);

        #endif // COMMON_H

    main.c:

        // main.c -- Defines the C-code kernel entry point, calls initialisation routines.
        // Made for JamesM's tutorials <www.jamesmolloy.co.uk>

        #include "monitor.h"

        int main(struct multiboot *mboot_ptr)
        {
            monitor_clear();
            monitor_write("hello, world!");
            return 0;
        }

    Here is my makefile:

        C_SOURCES= main.c monitor.c common.c
        S_SOURCES= boot.s

        C_OBJECTS=$(patsubst %.c, obj/%.o, $(C_SOURCES))
        S_OBJECTS=$(patsubst %.s, obj/%.o, $(S_SOURCES))

        CFLAGS=-nostdlib -nostdinc -fno-builtin -fno-stack-protector -m32 -Iheaders
        LDFLAGS=-Tlink.ld -melf_i386 --oformat=elf32-i386
        ASFLAGS=-felf

        all: kern/kernel

        .PHONY: clean
        clean:
        	-rm -f kern/kernel

        kern/kernel: $(S_OBJECTS) $(C_OBJECTS)
        	ld $(LDFLAGS) -o $@ $^

        $(C_OBJECTS): obj/%.o : %.c
        	gcc $(CFLAGS) $< -o $@
        vpath %.c source

        $(S_OBJECTS): obj/%.o : %.s
        	nasm $(ASFLAGS) $< -o $@
        vpath %.s asem

    Hopefully this gives you enough to understand what is going wrong and how to fix it. Thanks in advance. Jamie.
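    One likely culprit, offered as a guess from the output above: the C rule invokes gcc without -c, so each "object file" is really an attempted link of a single translation unit - and linking main.c alone is exactly when monitor_clear and monitor_write would be unresolved (note the failing make target is obj/main.o). The fix would be one flag:

        # Compile only (-c); the kern/kernel rule does the one real link.
        $(C_OBJECTS): obj/%.o : %.c
        	gcc $(CFLAGS) -c $< -o $@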

    Read the article
