Search Results

Search found 4605 results on 185 pages for 'crazy doc'.


  • What (tf) are the secrets behind PDF memory allocation (CGPDFDocumentRef)

    - by Kai
    For a PDF reader I want to prepare a document by taking 'screenshots' of each page and saving them to disk. The first approach is:

        CGPDFDocumentRef document = CGPDFDocumentCreateWithURL((CFURLRef)someURL);
        for (int i = 1; i <= pageCount; i++) {
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            CGPDFPageRef page = CGPDFDocumentGetPage(document, i);
            // ... getting + manipulating graphics context etc. ...
            CGContextDrawPDFPage(context, page);
            // ...
            UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
            // ... saving the image to disc
            [pool drain];
        }
        CGPDFDocumentRelease(document);

    This allocates a lot of memory which seems not to be released after the first run of the loop (preparing the 1st document), but no more memory goes unreleased in additional runs:

        MEMORY BEFORE:         6 MB
        MEMORY DURING 1ST DOC: 40 MB
        MEMORY AFTER 1ST DOC:  25 MB
        MEMORY DURING 2ND DOC: 40 MB
        MEMORY AFTER 2ND DOC:  25 MB
        ...

    Changing the code to:

        for (int i = 1; i <= pageCount; i++) {
            CGPDFDocumentRef document = CGPDFDocumentCreateWithURL((CFURLRef)someURL);
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            CGPDFPageRef page = CGPDFDocumentGetPage(document, i);
            // ... getting + manipulating graphics context etc. ...
            CGContextDrawPDFPage(context, page);
            // ...
            UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
            // ... saving the image to disc
            CGPDFDocumentRelease(document);
            [pool drain];
        }

    changes the memory usage to:

        MEMORY BEFORE:         6 MB
        MEMORY DURING 1ST DOC: 9 MB
        MEMORY AFTER 1ST DOC:  7 MB
        MEMORY DURING 2ND DOC: 9 MB
        MEMORY AFTER 2ND DOC:  7 MB
        ...

    but is obviously a step backwards in performance. When I start reading a PDF (later in time, on a different thread), in the first case no more memory is allocated (staying at 25 MB), while in the second case memory goes up to 20 MB (from 7). In both cases, when I remove the CGContextDrawPDFPage(context, page); line, memory is (nearly) constant at 6 MB during and after all preparations of documents. Can anybody explain what's going on there?
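
    A middle ground worth sketching (an assumption, not a confirmed explanation: the retained memory behaves like a per-document page cache that CGPDFDocumentRelease frees) is to recreate the document every N pages, bounding the cache while re-parsing the file far less often than once per page:

        // Sketch only: batch pages per document. kBatchSize is a made-up
        // tuning knob; context setup and saving are elided as in the question.
        static const int kBatchSize = 10;
        CGPDFDocumentRef document = NULL;
        for (int i = 1; i <= pageCount; i++) {
            if (document == NULL) {
                document = CGPDFDocumentCreateWithURL((CFURLRef)someURL);
            }
            NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
            CGPDFPageRef page = CGPDFDocumentGetPage(document, i);
            CGContextDrawPDFPage(context, page);
            UIImage *resultingImage = UIGraphicsGetImageFromCurrentImageContext();
            // ... saving the image to disc
            [pool drain];
            if (i % kBatchSize == 0) {           // drop the cached pages
                CGPDFDocumentRelease(document);
                document = NULL;
            }
        }
        if (document != NULL) CGPDFDocumentRelease(document);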

    Read the article

  • Lotus view column compare to string/integer

    - by Kris.Mitchell
    I have a Lotus Notes view that stores a number. I need to perform some math against the value, but I am having a lot of trouble getting the types to match up. All of the following raise a type mismatch:

        doc.numOfGold = numGold
        CInt(doc.numOfGold) = numGold
        CInt(doc.numOfGold) = CInt(numGold)
        doc.numOfGold = CInt(numGold)

    I've tried changing the column properties to treat the value as a decimal, with no better luck. Any thoughts? Thanks!
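
    One thing worth checking, stated as an assumption since the question doesn't show the surrounding LotusScript: if doc is a NotesDocument, the extended syntax doc.numOfGold returns a variant array of item values rather than a scalar, so the comparison needs a subscript and an explicit conversion. A sketch:

        ' Sketch, assuming doc is a NotesDocument: take the first value
        ' of the item and convert both sides before comparing.
        If IsNumeric(doc.numOfGold(0)) Then
            If CDbl(doc.numOfGold(0)) = CDbl(numGold) Then
                ' values match
            End If
        End If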

    Read the article

  • document.evaluate function giving exception

    - by R_Dhorawat
    I have code like this:

        res = doc.evaluate(xpathExpr, doc,
                           function(prefix) { return namespaces[prefix] || null; },
                           XPathResult.ANY_TYPE, null);

    Here doc is a DOM document node. When I run a loop like for (i in doc) alert(i); it lists the evaluate method, but when I try to actually call this method on the DOM node it gives me an error like "XPathResult not defined". I'm working in the Android browser. Thanks in advance.
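
    If the failing symbol is the XPathResult global rather than the method itself (which the enumeration result suggests), a guard works, since per the DOM Level 3 XPath spec ANY_TYPE is the numeric constant 0. A sketch:

        // Sketch: fall back to the spec-defined constant value when the
        // XPathResult global is missing (seen on some mobile WebKit builds).
        var anyType = (typeof XPathResult !== "undefined") ? XPathResult.ANY_TYPE : 0;
        var res = doc.evaluate(xpathExpr, doc,
            function (prefix) { return namespaces[prefix] || null; },
            anyType, null);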

    Read the article

  • DocumentDB - Another Azure NoSQL Storage Service

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2014/08/25/documentdb---another-azure-nosql-storage-service.aspx

    Microsoft just released a bunch of new features for Azure on the 22nd, and the one I was most interested in is DocumentDB, a document NoSQL database service in the cloud.

    Quick Look at DocumentDB

    We can try DocumentDB from the new Azure preview portal. Just click the NEW button and select the item named DocumentDB to create a new account. Specify the name of the DocumentDB, which will be the endpoint we are going to use to connect later. Select the capacity unit, resource group and subscription. In the resource group section we can select the region where our DocumentDB will be located. As with other Azure services, select the same location as the consumers of the DocumentDB, for example the website, web services, etc. After several minutes the DocumentDB will be ready. Click the KEYS button to find the URI and primary key, which will be used when connecting.

    Now let's open Visual Studio and try to use the DocumentDB we just created. Create a new console application and install the DocumentDB .NET client library from NuGet with the keyword "DocumentDB". You need to select "Include Prerelease" in the NuGet Package Manager window since this library has not yet had a stable release.

    Next we will create a new database and document collection under our DocumentDB account. The code below creates an instance of DocumentClient with the URI and primary key we just copied from the Azure portal, then creates a database and a collection. It also prints the database and collection link strings, which will be used later to insert and query documents.

        static void Main(string[] args)
        {
            var endpoint = new Uri("https://shx.documents.azure.com:443/");
            var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA==";

            var client = new DocumentClient(endpoint, key);
            Run(client).Wait();

            Console.WriteLine("done");
            Console.ReadKey();
        }

        static async Task Run(DocumentClient client)
        {
            var database = new Database() { Id = "testdb" };
            database = await client.CreateDatabaseAsync(database);
            Console.WriteLine("database link = {0}", database.SelfLink);

            var collection = new DocumentCollection() { Id = "testcol" };
            collection = await client.CreateDocumentCollectionAsync(database.SelfLink, collection);
            Console.WriteLine("collection link = {0}", collection.SelfLink);
        }

    Below is the result from the console window. We need to copy the collection link string for future use. If we go back to the portal we will find a database listed with the name we specified in the code.

    Next we will insert a document into the database and collection we just created. In the code below we paste the collection link copied in the previous step and create a dynamic object with several properties defined. As you can see, we can add normal properties containing strings and integers, and also complex properties, for example an array, a dictionary or an object reference, as long as they can be serialized to JSON.
        static void Main(string[] args)
        {
            var endpoint = new Uri("https://shx.documents.azure.com:443/");
            var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA==";

            var client = new DocumentClient(endpoint, key);

            // collection link pasted from the result of the previous demo
            var collectionLink = "dbs/AAk3AA==/colls/AAk3AP6oFgA=/";

            // document we are going to insert into the database
            dynamic doc = new ExpandoObject();
            doc.firstName = "Shaun";
            doc.lastName = "Xu";
            doc.roles = new string[] { "developer", "trainer", "presenter", "father" };

            // insert the document
            InsertADoc(client, collectionLink, doc).Wait();

            Console.WriteLine("done");
            Console.ReadKey();
        }

    The insert code is very simple, as below; just provide the collection link and the object we are going to insert.

        static async Task InsertADoc(DocumentClient client, string collectionLink, dynamic doc)
        {
            var document = await client.CreateDocumentAsync(collectionLink, doc);
            Console.WriteLine(await JsonConvert.SerializeObjectAsync(document, Formatting.Indented));
        }

    Below is the result after the object has been inserted.

    Finally we will query the document from the database and collection. Similar to the insert code, we just need to specify the collection link so that the .NET SDK will help us retrieve all documents in it.

        static void Main(string[] args)
        {
            var endpoint = new Uri("https://shx.documents.azure.com:443/");
            var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA==";

            var client = new DocumentClient(endpoint, key);

            var collectionLink = "dbs/AAk3AA==/colls/AAk3AP6oFgA=/";

            SelectDocs(client, collectionLink);

            Console.WriteLine("done");
            Console.ReadKey();
        }

        static void SelectDocs(DocumentClient client, string collectionLink)
        {
            var docs = client.CreateDocumentQuery(collectionLink + "docs/").ToList();
            foreach (var doc in docs)
            {
                Console.WriteLine(doc);
            }
        }

    Since there's only one document in my collection, below is the result when I executed the code. As you can see, all properties, including the array, were retrieved at the same time. DocumentDB also attaches some properties we didn't specify, such as "_rid", "_ts" and "_self", which are controlled by the service.

    DocumentDB Benefit

    DocumentDB is a document NoSQL database service. Unlike a traditional relational database, a document database is truly schema-free; in a nutshell, you can save anything in the same database and collection as long as it can be serialized to JSON. When you query the document database, all sub-documents are retrieved at the same time, which means you don't need to join other tables the way you would with a traditional database. A document database is very useful when building a high-performance system with hierarchical data structures.

    For example, assume we need to build a blog system. There will be many blog posts, each containing content and comments, and a comment can itself be commented on. If we were using a traditional database, say SQL Server, the schema might be defined with separate Posts and Comments tables. When we need to display a post we load the post content from the Posts table as well as the comments from the Comments table, and we also need to build the comment tree based on the CommentID field. But if we are using DocumentDB, all we need to do is save the post as one document with a list containing all its comments.
    Under a comment, all sub-comments form a nested list. When we display the post we just query the post document, and the content and all comments are loaded in the proper structure.

        {
            "id": "xxxxx-xxxxx-xxxxx-xxxxx",
            "title": "xxxxx",
            "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
            "postedOn": "08/25/2014 13:55",
            "comments":
            [
                {
                    "id": "xxxxx-xxxxx-xxxxx-xxxxx",
                    "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
                    "commentedOn": "08/25/2014 14:00",
                    "commentedBy": "xxx"
                },
                {
                    "id": "xxxxx-xxxxx-xxxxx-xxxxx",
                    "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
                    "commentedOn": "08/25/2014 14:10",
                    "commentedBy": "xxx",
                    "comments":
                    [
                        {
                            "id": "xxxxx-xxxxx-xxxxx-xxxxx",
                            "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
                            "commentedOn": "08/25/2014 14:18",
                            "commentedBy": "xxx",
                            "comments":
                            [
                                {
                                    "id": "xxxxx-xxxxx-xxxxx-xxxxx",
                                    "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
                                    "commentedOn": "08/25/2014 18:22",
                                    "commentedBy": "xxx"
                                }
                            ]
                        },
                        {
                            "id": "xxxxx-xxxxx-xxxxx-xxxxx",
                            "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
                            "commentedOn": "08/25/2014 15:02",
                            "commentedBy": "xxx"
                        }
                    ]
                },
                {
                    "id": "xxxxx-xxxxx-xxxxx-xxxxx",
                    "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
                    "commentedOn": "08/25/2014 14:30",
                    "commentedBy": "xxx"
                }
            ]
        }

    DocumentDB vs. Table Storage

    DocumentDB and Table Storage are both NoSQL services in Microsoft Azure. One common question is when we should use DocumentDB rather than Table Storage. Here are some ideas from me and some MVPs.

    First of all, they are different kinds of NoSQL database: DocumentDB is a document database, while Table Storage is a key-value store. Second, Table Storage is cheaper. DocumentDB supports scaling out from one capacity unit to five in the preview period, and each capacity unit provides 10 GB of local SSD storage; the price is $0.73/day including a 50% discount. For the storage service the highest price is $0.061/GB, which is almost 10% of the cost of DocumentDB. Third, Table Storage provides local replication, geo-replication and read-access geo-replication, while DocumentDB does not. Fourth, there is a local emulator for Table Storage but none for DocumentDB; we have to connect to the DocumentDB in the cloud even when developing locally.

    But DocumentDB supports some cool features that Table Storage doesn't have. It supports stored procedures, triggers and user-defined functions. It supports rich indexing, while Table Storage only supports indexing on the partition key and row key. It supports transactions (Table Storage does as well, but restricted to the Entity Group Transaction scope). And last, Table Storage is GA while DocumentDB is still in preview.

    Summary

    In this post I gave a quick demonstration and introduction of the new DocumentDB service in Azure. It's very easy to work with through .NET, and it also supports a REST API, a Node.js SDK and a Python SDK. I then explained the concept and benefits of using a document database, and compared it with Table Storage.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Nginx Subdomain Problem

    - by user292299
    I can't access my subdomain on localhost. My local domain is localhost.dev and it works, but I want automatic subdomains for a PHP script (username.localhost.dev). I tried this:

        server {
            listen 80 default_server;
            listen [::]:80 default_server ipv6only=on;

            access_log /var/www/access.log;
            error_log /var/www/error.log;

            root /var/www;
            index index.php index.html index.htm;

            # Make site accessible from http://localhost/
            server_name localhost.dev *.localhost.dev;

            location / {
                # First attempt to serve request as file, then as directory,
                # then fall back to displaying a 404.
                try_files $uri $uri/ /index.html;
                # Uncomment to enable naxsi on this location
                # include /etc/nginx/naxsi.rules
            }

            location /f2/public/ {
                try_files $uri $uri/ /f2/public/index.php?$args;
            }

            location /doc/ {
                alias /usr/share/doc/;
                autoindex on;
                allow 127.0.0.1;
                allow ::1;
                deny all;
            }

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            location ~ \.php$ {
                include /etc/nginx/fastcgi_params;
                try_files $uri =404;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }

            # (stock commented-out examples from the default site config --
            # naxsi RequestDenied, error pages, php5-fpm variants and the
            # .htaccess deny block -- omitted here for readability)
        }

    It's not working. For testing I changed server_name to:

        server_name localhost.dev asd.localhost.dev;

    and I still can't access asd.localhost.dev. I also tried two server{} sections: the first identical to the config above but with server_name localhost.dev, and a second without the listen ... default_server lines, with server_name asd.localhost.dev and otherwise the same locations (the file also carries the stock commented-out examples for a mixed virtual host and an HTTPS server). I still had no success.
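
    One thing the nginx config alone can't fix (an assumption here, since the question doesn't say how the names resolve): /etc/hosts has no wildcard support, so each subdomain must resolve to 127.0.0.1 before nginx ever sees the request; a local DNS resolver such as dnsmasq is the usual way to get wildcard resolution. With resolution in place, a regex server_name catches every subdomain. A sketch:

        # /etc/hosts -- no wildcards allowed, so either list each name ...
        127.0.0.1   localhost.dev asd.localhost.dev

        # ... or use dnsmasq: address=/localhost.dev/127.0.0.1

        # nginx: one server block for the base domain and all subdomains;
        # the named capture $sub holds the subdomain if the PHP app needs it.
        server {
            listen 80;
            server_name localhost.dev ~^(?<sub>.+)\.localhost\.dev$;
            root /var/www;
            # ... same locations as in the question ...
        }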

    Read the article

  • Can't boot Ubuntu 12.04 from external Hard Drive using Mac

    - by Catgirl the Crazy
    Recently, I upgraded the RAM and hard drive on my Early 2008 MacBook to improve performance. Rather than throw away the old hard drive, I bought an enclosure for it to turn it into an external hard drive, and, since all the data was migrated to my new drive, I decided to install Ubuntu on it for funsies (note: I am a near-total Ubuntu n00b). My first attempt to install Ubuntu didn't work (it gave me errors about not being able to find the BIOS or something), but my second attempt finished successfully (I can't remember what, if anything, I did differently). However, when I plug the external drive into my MacBook, it gives me a message saying it can't read the disk. Moreover, when I go into the Startup Manager (i.e. what you get when you turn on the MacBook while holding the Option key), the external drive is not one of the available startup disks. I thought this might be because I have an older MacBook, so I tried booting it with my mom's Late 2011 MacBook, and got the same results. Then I tried booting it through my dad's Dell laptop that runs Windows 7, and that time it worked. This is really counterintuitive to me: since the hard drive originally came from a MacBook, you'd think it would be less compatible with the Windows laptop than with the MacBook. In case it helps, here's a link to a picture of how I set up the partition table while doing the install (not shown there is the fact that I checked the "Format?" box next to the /boot partition, since it gave me a warning when I tried to continue the installation without doing so). Anyone have any clue at all? If it helps, the hard drive I'm using is a 120 GB 5400-rpm Serial ATA hard disk drive.

    Read the article

  • Finding direction of travel in a world with wrapped edges

    - by crazy
    I need to find the direction of the shortest distance from one point in my 2D world to another point, where the edges wrap around (like Asteroids etc.). I know how to find the shortest distance but am struggling to find which direction it's in. The shortest distance is given by:

        int rows = MapY;
        int cols = MapX;

        int d1 = abs(S.Y - T.Y);
        int d2 = abs(S.X - T.X);
        int dr = min(d1, rows - d1);
        int dc = min(d2, cols - d2);

        double dist = sqrt((double)(dr*dr + dc*dc));

    (The original post includes an ASCII diagram of the world, drawn with : and - for the edges, showing S inside the world, T near the opposite edge, and a wrapped repeat of the world at the top right containing the nearest image of T.)

    I want to find the direction in degrees from S to T. The shortest path is to the top-right repeat of T, but how do I calculate the direction in degrees from S to that repeated T? I know the positions of both S and T, but I suppose I need to find the position of the repeated T, and there is more than one. The world's coordinate system starts at 0,0 at the top left, and 0 degrees for the direction could start at west. It seems like this shouldn't be too hard, but I haven't been able to work out a solution. I hope someone can help; any websites would be appreciated.
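
    One approach, sketched in the same C style as the snippet above: keep the deltas signed instead of taking absolute values, shift each one into the range (-size/2, size/2], and feed the wrapped deltas straight into atan2. There is then no need to locate any repeated copy of T explicitly. The top-left origin and 0-degrees-at-west convention from the question are applied at the end and are the only assumptions here:

        #include <math.h>

        /* Signed wrapped delta: the shortest signed move from s to t on an
           axis of length size, in the range (-size/2, size/2]. */
        static int wrapped_delta(int s, int t, int size) {
            int d = t - s;
            if (d >   size / 2) d -= size;
            if (d <= -size / 2) d += size;
            return d;
        }

        /* Direction from S to the nearest wrapped image of T, in degrees.
           The world has a top-left origin with Y growing downward, so Y is
           negated before atan2. 0 degrees = west (per the question),
           increasing counterclockwise -- adjust for another convention. */
        double direction_deg(int sx, int sy, int tx, int ty, int MapX, int MapY) {
            double dx = wrapped_delta(sx, tx, MapX);
            double dy = wrapped_delta(sy, ty, MapY);
            double east0 = atan2(-dy, dx) * 180.0 / M_PI; /* 0 = east, CCW */
            return fmod(east0 + 180.0, 360.0);            /* shift so 0 = west */
        }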

    Read the article

  • Why would I want to install node.js in my Rails Application?

    - by Crazy JIm
    Okay guys, I'm super confused. I thought node.js was a server-side framework, basically the JS version of Ruby's Rails or PHP's Zend. However, I'm having some difficulty with Turbolinks, and it seems the way to fix it is by installing node.js. I mean, I don't understand this at all. How can two frameworks work together like this? Also, it's not a gem (that REALLY would have confused me); you have to install node.js onto your local machine by running (in the case of Ubuntu):

        sudo apt-get install nodejs

    Firstly, how does this totally separate framework have any bearing on Rails? Secondly, surely this isn't fixing the problem forever? When you specify a gem in your Gemfile, the server knows which external libraries to install. How does the server know to install nodejs?
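
    For context, the usual explanation (not something the question itself confirms): Rails doesn't use Node as a framework at all. The asset pipeline's execjs gem just needs some JavaScript runtime present on the machine to compile CoffeeScript and run the JS minifier, and Node happens to be one such runtime. A runtime can also be embedded via a gem, which answers the deployment question since it then lives in the Gemfile:

        # Gemfile -- sketch: therubyracer embeds the V8 JavaScript engine,
        # so no system-wide Node installation is needed.
        gem 'therubyracer', platforms: :ruby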

    Read the article

  • Customer won't decide, how to deal?

    - by Crazy Eddie
    I write software that involves the use of measured quantities, many input by the user, most displayed, that are fed into calculation models to simulate various physical thing-a-majigs. We have created a data type that allows us to associate a numeric value with a unit; we call these "quantities" (big duh). Quantities and units are unique to dimension: you can't attach kilograms to a length, for example. Math on quantities does automatic unit conversion to SI, and the type is dimension-safe (you can't assign a weight to a pressure, for example). Custom UI components have been developed that display the value and its unit and/or allow the user to edit them. Dimensionless quantities, having no units, are a single, custom case implemented within the system. There's a set of related quantities such that our target audience apparently uses them interchangeably. The quantities are used in special units that embed the conversion factors for the related quantity dimensions; in other words, when using these units, converting from one to another simply involves multiplying the value by 1 to bridge the dimensional difference. However, conversion to/from the calculation system (SI) still involves these factors. One of these related quantities is a dimensionless one that represents a ratio. I simply can't get the "customer" to recognize the necessity of distinguishing these values and their use. They've picked one and want to use it everywhere, customizing the way we deal with it in special places. In this case they've picked one of the dimensions that has a unit... BUT, they don't want there to be a unit (GRR!!!). This of course is causing us to implement special overrides for our UI elements and such. That of course is oftentimes forgotten, and worse, after a couple of months everyone forgets why it was necessary and why we're using this dimensional value, calling it the wrong thing and disabling the unit. I could just ignore the "customer" and implement the type as the dimensionless quantity, which makes the most sense. However, that leaves the team responsible for figuring it out when they've given us a formula using one of the other quantities. We have to not only figure out that it's happening, we have to decide what to do. This isn't a trivial deal. The other option is just to say to hell with it, do it the customer's way, and let it waste continued time and effort, because it's just downright confusing as hell. However, I can't count the number of times someone has said, "Why is this being done this way, it makes no sense at all," and the team goes off the deep end trying to figure it out. What would you do? Currently I'm still attempting to convince them that even if they use terms interchangeably, we at the least can't do that within the product discussion. Don't have high hopes though.
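
    For readers unfamiliar with the kind of type being described, here is a minimal sketch of a dimension-tagged quantity (illustrative only; the names, and the detail of normalizing to SI on construction, are assumptions rather than the poster's actual code):

        #include <iostream>

        // The dimension is a compile-time tag, so mixing dimensions
        // (e.g. assigning a Mass to a Length) fails to compile.
        struct LengthDim {}; struct MassDim {};

        template <typename Dim>
        class Quantity {
            double si_value_;  // stored normalized to SI
        public:
            explicit Quantity(double si_value) : si_value_(si_value) {}
            double si() const { return si_value_; }
            Quantity operator+(Quantity other) const {  // same dimension only
                return Quantity(si_value_ + other.si_value_);
            }
        };

        using Length = Quantity<LengthDim>;
        using Mass   = Quantity<MassDim>;

        int main() {
            Length a(2.0), b(0.5);
            Length c = a + b;    // fine: same dimension
            // Mass m = a + b;   // compile error: dimensions differ
            std::cout << c.si() << " m\n";
        }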

    Read the article

  • How much effort should you put into a junior developer?

    - by Crazy Eddie
    At what point should one give up? I've tried helping them out by having them shadow me. We agree to take a minute's break, and then they go missing in action for a while... then just go back to their desk. Even when I know they've done this, part of me feels like I shouldn't have to go get them, and that they should be showing interest in learning. Frankly, it's a bunch of time I don't have, explaining things as I go when I could just do it. Am I expecting too much to expect that if they want to learn they'll make sure I know they're ready and willing? They go to meetings that they were not told they had to attend (good), but then sit in the corner and sleep (bad). I don't even know what to do with that. Sometimes I give them something small to do and they do it great, so I give them something just a touch harder and they totally fail, hard. They check in things without testing them. Part of me thinks that maybe I should be spending more time with them, but at the same time I don't see a lot of interest, and I really, honestly don't have time to teach the same things over and over. Sometimes I get asked questions that are really, really easy to answer if you just do a little bit of your own work trying to find out. Other times I'm not asked anything. I'm sure I could be doing better but honestly... I don't really want to anymore.

    Read the article

  • Acceptance tests done first...how can this be accomplished?

    - by Crazy Eddie
    The basic gist of most Agile methods is that a feature is not "done" until it's been developed, tested, and in many cases released. This is supposed to happen in quick-turnaround chunks of time such as "sprints" in the Scrum process. A common part of Agile is also TDD, which states that tests are done first. My team works on a GUI program that does a lot of specific drawing and such. In order to provide tests, the testing team needs to be able to work with something that at least attempts to perform the things they are trying to test. We've found no way around this problem. I can very much see where they are coming from, because if I were trying to write software that targeted some basically mysterious interface I'd have a very hard time. Although we have behavior fairly well specified, the exact process of interacting with various UI elements when it comes to automation seems to be too unique to a feature to allow testers to write automated scripts to drive something that does not exist. Even if we could, a lot of things end up turning up later as having been missing from the specification. One thing we considered was having the testers write test "scripts" that are more like a set of steps that must be performed, as described from a use-case perspective, so that they can be "automated" by a human being. These can then be performed by the developer(s) writing the feature and/or verified by someone else. When the testers later get an opportunity, they automate the "script", mainly for regression purposes. This didn't end up catching on in the team, though. The testing part of the team is actually falling behind us by quite a margin. This is one reason why the apparently extra time of developing a "script" for a human being to perform just did not happen: they're under a crunch to keep up with us developers. If we waited for them, we'd get nothing done. It's not their fault really; they're a bottleneck, but they're doing what they should be and working as fast as possible. The process itself seems to be set up against them. Very often we end up having to go back a month or more into what we've done to fix bugs that the testers have finally gotten around to checking. It's an ugly truth that I'd like to do something about. So what do other teams do to solve this fail cascade? How can we get testers ahead of us, and how can we make it so that there's actually time for them to write tests for the features we do in a sprint without making us sit and twiddle our thumbs in the meantime? As it's currently going, getting a feature "done", using Agile definitions, would mean developers work for one week, then testers work the second week, with developers hopefully able to fix all the bugs that come up in the last couple of days. That's just not going to happen, even if I agreed it was a reasonable solution. I need better ideas...

    Read the article

  • Adjust sprite bounds of the visible part of texture

    - by Crazy D0G
    Is there any way to adjust the boundaries of the visible part of a sprite? To make it easier to understand: I have a texture, such as the one shown in figure 1. I break it into pieces and fill the resulting fragments using PRKit (the wood texture in figures 2 and 3). But the resulting fragments contain transparency (the green color in figures 2 and 3), and when creating a sprite from a fragment it has the size of the initial texture. Is there a way to get rid of this transparency and adjust the size of the visible part (the wood texture) by OpenGL or cocos2d-x means? Maybe this helps: the draw() method from PRKit:

        void PRFilledPolygon::draw() {
            //CCNode::draw();
            glDisableClientState(GL_COLOR_ARRAY);

            // we have a pointer to vertex points so enable client state
            glBindTexture(GL_TEXTURE_2D, texture->getName());
            glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_ONE_MINUS_SRC_ALPHA);

            glVertexPointer(2, GL_FLOAT, 0, areaTrianglePoints);
            glTexCoordPointer(2, GL_FLOAT, 0, textureCoordinates);
            glDrawArrays(GL_TRIANGLES, 0, areaTrianglePointCount);

            glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

            //Restore texture matrix and switch back to modelview matrix
            glEnableClientState(GL_COLOR_ARRAY);
        }
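
    If the polygon's points are accessible, one sketch of a fix (cocos2d-x 2.x style; the accessor names are assumptions, since PRKit's internals vary) is to shrink the node's content size to the bounding box of what is actually drawn:

        // Compute the bounding box of the filled polygon's vertices and
        // report that as the node's size instead of the full texture size.
        CCRect boundsOfPoints(const std::vector<CCPoint>& pts) {
            float minX = pts[0].x, maxX = pts[0].x;
            float minY = pts[0].y, maxY = pts[0].y;
            for (size_t i = 1; i < pts.size(); ++i) {
                minX = MIN(minX, pts[i].x); maxX = MAX(maxX, pts[i].x);
                minY = MIN(minY, pts[i].y); maxY = MAX(maxY, pts[i].y);
            }
            return CCRectMake(minX, minY, maxX - minX, maxY - minY);
        }

        // usage (assumed accessor for the polygon's points):
        // filledPolygon->setContentSize(boundsOfPoints(filledPolygon->getPoints()).size);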

    Read the article

  • Prevent APT from overwriting JCE jars

    - by Doc
    My server runs a Java application that requires me to replace a few Java library files with ones I downloaded on my own. This has to do with the JCE security extensions and isn't really relevant to my question. I've found that these library files tend to get overwritten by apt when it later updates my Java package. Is there an apt-friendly way of masking these specific files so apt won't touch them?

    Potential solutions I'm considering:
    1. Just removing the write flag from the files, though I expect this will cause apt to spew its guts everywhere when it later tries to overwrite them.
    2. Perhaps there's a custom Java library directory I don't know of, where I can park my files and have them loaded instead of the package's defaults?
    3. The last-resort option: a cron job that periodically replaces the files with my versions. I hate this option.
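
    dpkg has a mechanism intended for exactly this case: diversions. A sketch (the jar path below is illustrative only; substitute the real path for your JVM package):

        # Tell dpkg to install the package's copy of the jar under a
        # different name from now on, leaving your file untouched.
        sudo dpkg-divert --add --rename \
            --divert /usr/lib/jvm/default-java/jre/lib/security/US_export_policy.jar.dist \
            /usr/lib/jvm/default-java/jre/lib/security/US_export_policy.jar

        # Then place your own jar at the original path.
        sudo cp ~/US_export_policy.jar /usr/lib/jvm/default-java/jre/lib/security/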

    Read the article

  • HTTP requests, using sprites and file sizes

    - by crazy sarah
    Hi all, I'm in the process of finding out all about sprites and how they can speed up your pages. So I've used SpriteMe to create an overall sprite image which is 130 kB; this is made up of 14 images with a combined total size of about 65 kB. So is it better to have one HTTP request for a 130 kB file, or 14 requests for a total of 65 kB? Also, there is a detailed image which was put into the sprite and caused its size to go up by about 60 kB; this used to be a separate JPEG image of only 30 kB. Would I be better off keeping it separate and suffering the additional request?

    Read the article

  • Simple C: atof giving wrong value [migrated]

    - by Doc
    I have a program that reads input from a single line (a string, obviously) and organizes it into arrays. The problem I have is that at one point the program reads two different values and returns the first one twice. Initially I thought the program was reading the same value twice, but when I tested it, it turned out that it gets the correct one but outputs the wrong one. For example, the input is:

        2 0.90 0.75 0.7 0.65

    Sorry to snip it:

        while (fgets(string[test], sizeof(string[test]), ifp)) ...

        pch = strtok_r(NULL, " ", &prog);
        tem3 = atoi(pch);
        while (loop < tem3) {
            pch = strtok_r(NULL, " ", &prog);
            venseatfloat[test][loop][DISCOUNT][OCCUPIED] = (float)atof(pch);
            printf("%f is discount\t", venseatfloat[test][loop][DISCOUNT][OCCUPIED]);

            pch = strtok_r(NULL, " ", &prog);
            strcpy(temp, pch);
            venseatfloat[test][loop][REGULAR][OCCUPIED] = (float)atof(pch);
            /* note: this printf reads the DISCOUNT slot, not the REGULAR one */
            printf("%s is the string but %.3f is regular\n",
                   temp, venseatfloat[test][loop][DISCOUNT][OCCUPIED]);
            loop++;
        }

    Output:

        0.900000 is discount    0.75 is the string but 0.900 is regular
        0.700000 is discount    0.65 is the string but 0.700 is regular

    What is going on?

    Read the article

  • Ubuntu 12.10 - Dual monitors

    - by crazy coder
    I'm trying to set up dual monitors with my laptop (Dell XPS L502X) and a monitor that I recently bought (Dell U2312HM). The cables are fine, because I tried this on Windows and everything works. In Ubuntu 12.10 it doesn't work. If I go to System Settings > Displays, the Detect Displays button doesn't do anything. I should also say that I use Bumblebee, and the sudo nvidia-settings command only shows a window with nothing but the nvidia-settings Configuration option.

    Read the article

  • How to reverse engineer the SEO on a website?

    - by Startup Crazy
    I have read this question. My question is a bit different from it: I want to know how I can reverse engineer another website that ranks best for some keywords. For example, some website called www.bla.com ranks high for many keywords, and I want to learn from it how my website can reach the same authority and the same ranking (or probably a better ranking, if I find something they are missing). Can anyone lay out a procedure for how to reverse engineer a website's SEO?

    Read the article

  • Simple C: How do I scan this information in properly?

    - by Doc
    OK, this is a simple question but for some reason I just can't get it right. I have to scan hundreds of lines from a file and store them in arrays (which I can normally do an OK job with). However, at one point a line will specify a number that then determines how the next batch of chars, ints and floats maps into the various arrays. Since I know I am not describing this correctly, here is an example. One line of the file I am reading will contain something close to this:

        0221 T 2 S P 850 150 0.90 0.75 500 24 2 2012 G A 7 9600.00 0.1 1000 Name_of_place
        0104 L 1 F 400 1.00 0.75 500 24 2 2012 G A 7 9600.00 0.1 1000 Ballroom

    The problem I am having is this part:

        0221 T 2 S P 850 150 0.90 0.75 ...
        0104 L 1 F 400 1.00 0.75 ...

    The rest after this is generally exactly the same, but at this point the number at the front decides all the values that are going in. I am almost completely lost on how to write a way to scan this and store the data into arrays correctly.
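
    One common pattern for records like this (a sketch; the interpretation of the fields is guessed from the two sample lines, where the third field seems to decide how many flag letters and how many integers precede the floats): read the whole line with fgets, peel off the header with sscanf using %n to track the offset, then let the count drive the variable-length part:

        #include <stdio.h>

        /* Sketch: parse a "0221 T 2 S P 850 150 0.90 0.75 ..." style line.
           Assumption: the third field (count) decides how many single-letter
           flags and how many integers follow before the two floats. */
        void parse_line(const char *line) {
            int id, count, offset = 0;

            if (sscanf(line, "%d %*c %d%n", &id, &count, &offset) != 2)
                return;  /* malformed header */

            char flags[8];
            int  ints[8];
            double f1, f2;
            int n;

            for (int i = 0; i < count && i < 8; i++) {   /* count flag letters */
                n = 0;
                if (sscanf(line + offset, " %c%n", &flags[i], &n) != 1) return;
                offset += n;
            }
            for (int i = 0; i < count && i < 8; i++) {   /* count integers */
                n = 0;
                if (sscanf(line + offset, "%d%n", &ints[i], &n) != 1) return;
                offset += n;
            }
            n = 0;
            if (sscanf(line + offset, "%lf %lf%n", &f1, &f2, &n) != 2) return;
            offset += n;
            /* the remaining fixed-layout fields start at line + offset */
        }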

    Read the article

  • [Scala] Applying overloaded, typed methods on a collection

    - by stephanos
    I'm quite new to Scala and struggling with the following: I have database objects (subtypes of BaseDoc) and value objects (subtypes of BaseVO). There are multiple convert methods (all called 'convert') that take an instance of one type and convert it to the other type accordingly, like this:

        def convert(doc: ClickDoc): ClickVO = ...
        def convert(doc: PointDoc): PointVO = ...
        def convert(doc: WindowDoc): WindowVO = ...

    Now I sometimes need to convert a list of objects. How would I do this? I tried:

        def convert[D <: BaseDoc, V <: BaseVO](docs: List[D]): List[V] = docs match {
          case List() => List()
          case xs     => xs.map(doc => convert(doc))
        }

    which results in 'overloaded method value convert with alternatives ...'. I tried to add manifest information to it, but couldn't make it work. I couldn't even create one method for each type, because they'd have the same parameter type after erasure (List). Ideas welcome!
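
    The standard Scala answer for "overload resolution over an element type" is a type class: make the conversion an implicit value parameterized on both types and write one list-level method against it. A sketch (the instance wiring is assumed from the question's convert methods):

        // Type class carrying the element-level conversion.
        trait Converts[D, V] {
          def apply(d: D): V
        }

        object Converts {
          implicit val clickConv: Converts[ClickDoc, ClickVO] =
            new Converts[ClickDoc, ClickVO] { def apply(d: ClickDoc): ClickVO = convert(d) }
          implicit val pointConv: Converts[PointDoc, PointVO] =
            new Converts[PointDoc, PointVO] { def apply(d: PointDoc): PointVO = convert(d) }
          implicit val windowConv: Converts[WindowDoc, WindowVO] =
            new Converts[WindowDoc, WindowVO] { def apply(d: WindowDoc): WindowVO = convert(d) }
        }

        // One list-level method; the compiler picks the right instance.
        def convertAll[D, V](docs: List[D])(implicit c: Converts[D, V]): List[V] =
          docs.map(c.apply)

        // usage: val vos: List[ClickVO] = convertAll(clickDocs)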

    Read the article

  • Brother bPAC SDK - Examples only print after Form is shown

    - by Scoregraphic
    Hi there. We have a small Brother barcode printer which we'd like to control from a WCF service. Brother has an SDK called bPAC (version 3) which allows printing those labels. But a problem arises as soon as we want to print from code only, without showing a window with a button on it. In addition, this happens only when printing a QR code as the barcode; standard EAN codes seem to work. Below is a small piece of code which outputs the label to a bitmap instead of the printer (for debugging reasons):

        DocumentClass doc = new DocumentClass();
        if (doc.Open(templatePath))
        {
            doc.GetObject("barcode1").Text = txtCompany.Text;
            doc.GetObject("barcode2").Text = txtName.Text;
            doc.Export(ExportType.bexBmp, testImagePath, 300);
            doc.Close();
        }

    If this is called by a button click, it works perfectly. If this is called in the Form.Show event, it works perfectly. If this is called in the Form.Load event, it does NOT work. If this is called in a Form constructor, it does NOT work. If this is called somewhere else (without forms), it does NOT work. DocumentClass and related classes are COM objects, so I guess the form setup/show process does something that is not done without opening forms. I tried calling CoInitialize via P/Invoke, but it didn't change anything. Is there anyone out there willing and able to help me? Are there any alternatives which (also) MUST be able to print directly on our Brother printer? Thanks a lot.
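
    One pattern worth trying, stated as an assumption rather than a known bPAC requirement: many COM automation servers need a single-threaded apartment (and sometimes a message pump), which a shown Form provides implicitly and a WCF service thread does not. Running the print code on an explicit STA thread is a cheap experiment:

        // Sketch: execute the bPAC calls on a dedicated STA thread.
        // printLabel would contain the DocumentClass code from above.
        static void RunOnStaThread(Action printLabel)
        {
            var t = new System.Threading.Thread(() => printLabel());
            t.SetApartmentState(System.Threading.ApartmentState.STA);
            t.Start();
            t.Join();
        }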

    Read the article

  • Trouble with iTextSharp - Converting XML to PDF

    - by AllenG
    Okay... I'm trying to use the most recent version of iTextSharp to turn an XML file into a PDF. It isn't working. The documentation on SourceForge doesn't seem to have kept up with the actual releases; the code in the provided example won't even compile under the newest version. Here is my test XML:

        <Remittance>
          <RemitHeader>
            <Payer>BlueCross</Payer>
            <Provider>Maricopa</Provider>
            <CheckDate>20100329</CheckDate>
            <CheckNumber>123456789</CheckNumber>
          </RemitHeader>
          <RemitDetail>
            <NPI>NPI_GOES_HERE</NPI>
            <Patient>Patient Name</Patient>
            <PCN>0034567</PCN>
            <DateOfService>20100315</DateOfService>
            <TotalCharge>125.57</TotalCharge>
            <TotalPaid>55.75</TotalPaid>
            <PatientShare>35</PatientShare>
          </RemitDetail>
        </Remittance>

    And here is the code I'm attempting to use to turn that into a PDF:

        Document doc = new Document(PageSize.LETTER, 36, 36, 36, 36);
        iTextSharp.text.pdf.PdfWriter.GetInstance(doc, new StreamWriter(fileOutputPath).BaseStream);
        doc.Open();
        SimpleXMLParser.Parse((ISimpleXMLDocHandler)doc, new StreamReader(fileInputPath).BaseStream);
        doc.Close();

    Now, I was pretty sure the (ISimpleXMLDocHandler)doc cast wasn't going to work, but I can't actually find anything in the source that both a) implements ISimpleXMLDocHandler and b) will accept a standard XML document and parse it to PDF. FYI: I did try an older version which would compile using the example code from SourceForge, but it wasn't working either.

    Read the article

  • Tables created programmatically don't appear in WebBrowser control

    - by John Hall
    I'm creating HTML dynamically in a WebBrowser control. Most elements seem to appear correctly, with the exception of a table. My code is:

        var doc = webBrowser1.Document;
        var body = webBrowser1.Document.Body;

        body.AppendChild(webBrowser1.Document.CreateElement("hr"));

        var div = doc.CreateElement("DIV");
        var table = doc.CreateElement("TABLE");
        var row1 = doc.CreateElement("TR");

        var cell1 = doc.CreateElement("TD");
        cell1.InnerText = "Cell 1";
        row1.AppendChild(cell1);

        var cell2 = doc.CreateElement("TD");
        cell2.InnerText = "Cell 2";
        row1.AppendChild(cell2);

        table.AppendChild(row1);
        div.AppendChild(table);
        body.AppendChild(div);

        body.AppendChild(webBrowser1.Document.CreateElement("hr"));

    The HTML tags are visible in the OuterHTML property of the body, but all that appears in the browser are the two horizontal rules. If I replace div.AppendChild(table); with div.InnerHtml = table.OuterHtml, everything appears as expected.
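
    A likely explanation, offered as the classic IE DOM behavior rather than something the question confirms: rows appended straight to a TABLE element don't render in the IE engine that backs the WebBrowser control; they have to go into a TBODY. A sketch of the adjusted middle section:

        // Sketch: route rows through an explicit TBODY, which the IE
        // engine requires before it renders dynamically created rows.
        var table = doc.CreateElement("TABLE");
        var tbody = doc.CreateElement("TBODY");
        var row1 = doc.CreateElement("TR");
        // ... build cell1 and cell2 as before and append them to row1 ...
        tbody.AppendChild(row1);
        table.AppendChild(tbody);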

    Read the article

  • solr JOIN query

    - by Sfairas
    I need to run a JOIN-style query on a Solr index. I have two XML files that I have indexed, person.xml and subject.xml.

    Person:

        <doc>
          <field name="id">P39126</field>
          <field name="family">Smith</field>
          <field name="given">John</field>
          <field name="subject">S1276</field>
          <field name="subject">S1312</field>
        </doc>

    Subject:

        <doc>
          <field name="id">S1276</field>
          <field name="topic">Abnormalities, Human</field>
        </doc>

    I need to display information only from the person doc, but each query should match fields in both person and subject. In the case where the query matches only the subject doc, I need to display all person docs that have a matching id. Is this possible without running two separate queries? Something like a JOIN query would do the job. Any help?
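
    For reference, an option that assumes a newer Solr than the question likely used (the query-time join parser arrived in Solr 4.0): a join can return the person docs whose referenced subject matches, in a single query:

        # Persons whose 'subject' field references a subject doc matching
        # the topic. 'from' is the subject's id field; 'to' is the person's
        # subject field.
        q={!join from=id to=subject}topic:"Abnormalities, Human"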

    Read the article

  • Setting System.Drawing.Color through .NET COM Interop

    - by Maxim
    I am trying to use the Aspose.Words library through COM interop. There is one critical problem: I cannot set the color. It is supposed to work by assigning to DocumentBuilder.Font.Color, but when I try to do it I get OLE error 0x80131509. My problem is pretty much like this one: http://bit.ly/cuvWfc

    Update - code sample:

        from win32com.client import Dispatch

        Doc = Dispatch("Aspose.Words.Document")
        Builder = Dispatch("Aspose.Words.DocumentBuilder")
        Builder.Document = Doc

        print Builder.Font.Size
        print Builder.Font.Color

    Result:

        12.0
        Traceback (most recent call last):
          File "aaa.py", line 6, in <module>
            print Builder.Font.Color
          File "D:\Python26\lib\site-packages\win32com\client\dynamic.py", line 501, in __getattr__
            ret = self._oleobj_.Invoke(retEntry.dispid,0,invoke_type,1)
        pywintypes.com_error: (-2146233079, 'OLE error 0x80131509', None, None)

    Using something like Font.Color = 0xff0000 fails with the same error message, while this code works fine:

        using Aspose.Words;

        namespace ConsoleApplication1
        {
            class Program
            {
                static void Main(string[] args)
                {
                    Document doc = new Document();
                    DocumentBuilder builder = new DocumentBuilder(doc);
                    builder.Font.Color = System.Drawing.Color.Blue;
                    builder.Write("aaa");
                    doc.Save("c:\\1.doc");
                }
            }
        }

    So it looks like a COM interop problem.

    Read the article

  • count on LINQ union

    - by brechtvhb
    I have this LINQ statement:

        List<UserGroup> domains = UserRepository.Instance.UserIsAdminOf(currentUser.User_ID);

        query = (from doc in _db.Repository<Document>()
                 join uug in _db.Repository<User_UserGroup>() on doc.DocumentFrom equals uug.User_ID
                 where domains.Contains(uug.UserGroup)
                 select doc)
                .Union(from doc in _db.Repository<Document>()
                       join uug in _db.Repository<User_UserGroup>() on doc.DocumentTo equals uug.User_ID
                       where domains.Contains(uug.UserGroup)
                       select doc);

    Running this statement doesn't cause any problems, but when I want to count the result set, the query suddenly runs quite slowly:

        totalRecords = query.Count();

    The result of this query is:

        SELECT COUNT([t5].[DocumentID])
        FROM (
            SELECT [t4].[DocumentID], [t4].[DocumentFrom], [t4].[DocumentTo]
            FROM (
                SELECT [t0].[DocumentID], [t0].[DocumentFrom], [t0].[DocumentTo]
                FROM [dbo].[Document] AS [t0]
                INNER JOIN [dbo].[User_UserGroup] AS [t1] ON [t0].[DocumentFrom] = [t1].[User_ID]
                WHERE ([t1].[UserGroupID] = 2) OR ([t1].[UserGroupID] = 3) OR ([t1].[UserGroupID] = 6)
                UNION
                SELECT [t2].[DocumentID], [t2].[DocumentFrom], [t2].[DocumentTo]
                FROM [dbo].[Document] AS [t2]
                INNER JOIN [dbo].[User_UserGroup] AS [t3] ON [t2].[DocumentTo] = [t3].[User_ID]
                WHERE ([t3].[UserGroupID] = 2) OR ([t3].[UserGroupID] = 3) OR ([t3].[UserGroupID] = 6)
            ) AS [t4]
        ) AS [t5]

    Can anyone help me improve the speed of the count query? Thanks in advance!
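
    One reshaping to try (a sketch; whether it actually helps depends on the provider and the indexes on User_UserGroup): fold both directions into a single predicate so the UNION disappears from the generated SQL, typically producing one EXISTS subquery instead of two joined subselects:

        // Sketch: one predicate instead of a UNION of two joins. LINQ to SQL
        // usually translates the Any() into an EXISTS subquery.
        totalRecords = _db.Repository<Document>()
            .Count(doc => _db.Repository<User_UserGroup>()
                .Any(uug => domains.Contains(uug.UserGroup)
                         && (uug.User_ID == doc.DocumentFrom
                          || uug.User_ID == doc.DocumentTo)));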

    Read the article
