Search Results

Search found 9156 results on 367 pages for 'cloud storage'.


  • ASP.NET Memory Usage in IIS is FAR greater than in DevEnv. Is this normal?

    - by Tom
    Greetings! I have an ASP.NET app that scrapes data from a handful of external pages, parses the relevant bits, and displays them in a table. The total data retrieved is 3-4 MB and the resulting page is about 1 MB. I am using a synchronous WebRequest.GetResponse call for the retrieval, but the same problem existed using an asynchronous BeginGetResponse/EndGetResponse process. There is no database access, no session storage, and no caching, but there is an in-memory list of about 100 objects (1 MB of data in total), plus a good amount of AJAX (AjaxControlToolkit). This issue appears on the very first run of the app, even if I have restarted IIS.

    The issue: when I run the app on my dev computer, the maximum commit charge is about 1.5 GB. The biggest user, measured by Task Manager's VM Size, is WebDev.WebServer.exe (600 MB). The app runs perfectly. When I run it on my rent-a-server (IIS 7.5, 1 GB RAM), the maximum commit charge is over 3.8 GB. The biggest user is w3wp.exe at 2.7 GB. IIS grinds to a halt and spits out a timed-out error page.

    Given my limited server budget and the hope of having multiple simultaneous users, I'm kind of in a panic. Is this normal? If I bump the server RAM up to 4 GB, will that be enough? Will multiple users require even more memory? Could the culprit be AJAX or the list of objects? Thanks for any insight you can provide.
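
    For reference, the synchronous retrieval pattern described above typically looks like the sketch below (the class name and URL handling are illustrative assumptions, not the poster's code). The using blocks ensure the response and its stream are released promptly rather than waiting for garbage collection, which matters when several multi-megabyte pages are pulled per request:

        using System.IO;
        using System.Net;

        static class PageFetcher
        {
            // Fetches one external page synchronously and returns its body.
            public static string Fetch(string url)
            {
                HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
                using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
                using (StreamReader reader = new StreamReader(response.GetResponseStream()))
                {
                    return reader.ReadToEnd();
                }
            }
        }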

  • Running an existing LINQ query against a dynamic object (DataTable like)

    - by TomTom
    Hello, I am working on a generic OData provider to go against a custom data provider that we have here. This is fully dynamic, in that I query the data provider for the tables it knows. I have a basic storage structure in place so far, based on the OData sample code. My problem is: OData supports queries and expects me to hand in an IQueryable implementation. On the lower side, I don't have any query support. Not a joke - the provider returns tables, and the WHERE clause is not supported. Performance is not an issue here - the tables are small. It is OK to sort them in the OData provider.

    My main problem is this: I submit a SQL statement to get the data out of a table. The result is some sort of ADO.NET data reader. I need to expose an IQueryable implementation for this data to potentially allow later filtering. Any idea how to best approach that? .NET 3.5 only (no 4.0 planned for some time). I was seriously thinking of creating dynamic DTO classes for every table (emitting bytecode) so I can use standard LINQ. Right now I am using a dictionary per entry (not too efficient), but I see no real way to filter / sort based on them.
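
    Since the tables are stated to be small, one sketch worth considering (an illustration, not the poster's code) is to materialize the reader into dictionaries - the representation already in use - and fall back on LINQ to Objects via AsQueryable(), which is available on .NET 3.5:

        using System.Collections.Generic;
        using System.Data;
        using System.Linq;

        static class TableReader
        {
            // Loads every row into a dictionary keyed by column name and
            // exposes the list through LINQ to Objects. Filtering and
            // sorting then happen in memory, which is acceptable because
            // the tables are small.
            public static IQueryable<IDictionary<string, object>> ToQueryable(IDataReader reader)
            {
                var rows = new List<IDictionary<string, object>>();
                while (reader.Read())
                {
                    var row = new Dictionary<string, object>();
                    for (int i = 0; i < reader.FieldCount; i++)
                        row[reader.GetName(i)] = reader.GetValue(i);
                    rows.Add(row);
                }
                return rows.AsQueryable();
            }
        }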

  • How to prevent the LINQ to SQL designer from undoing my changes

    - by anonim.developer
    Dear all, thanks for your attention in advance. I've run into an issue with the LINQ to SQL designer in VS 2008 SP1 which has made me CRAZY. I use LINQ to SQL as my DAL. It seems LINQ to SQL speeds up coding at first, but lots of issues arise later, specifically with table or object inheritance. In this case I have a class Entity that all other entity classes generated by the LINQ to SQL designer inherit from:

        public abstract class Entity
        {
            public virtual Guid ID { get; protected set; }
        }

        public partial class User : monius.Data.Entity
        {
        }

    And the following is generated by the L2S designer (DataModel.designer.cs):

        [Column(Storage = "_ID", AutoSync = AutoSync.OnInsert,
                DbType = "UniqueIdentifier NOT NULL", IsPrimaryKey = true,
                IsDbGenerated = true, UpdateCheck = UpdateCheck.Never)]
        [DataMember(Order = 1)]
        public System.Guid ID
        {
            get { return this._ID; }
            set
            {
                if ((this._ID != value))
                {
                    this.OnIDChanging(value);
                    this.SendPropertyChanging();
                    this._ID = value;
                    this.SendPropertyChanged("ID");
                    this.OnIDChanged();
                }
            }
        }

    When I compile the code, VS warns me:

        Warning 1  'User.ID' hides inherited member 'Entity.ID'. To make the current member override that implementation, add the override keyword. Otherwise add the new keyword.

    That warning is obvious, and I have to change the code generated by the L2S designer (DataModel.designer.cs) to

        [...] public override System.Guid ID { ... protected set ... }

    and then the code compiles with no errors or warnings, and everyone is happy. But that is not the end of the story. As soon as I make changes to entities in the diagram (dbml), or even just open the dbml file to view it, every change I made manually to the designer file vanishes and, POOF, I have to redo it AGAIN. That is a painful job. Now I wonder if there is a way to force the L2S designer not to change portions of the auto-generated code. I'd appreciate it if someone could kindly help me with this issue.

  • How to Make a DVCS Completely Interoperable with Subversion?

    - by David M
    What architectural changes would a DVCS need to be completely interoperable with Subversion? Many DVCSs have some kind of bidirectional interface with Subversion, but there are limitations and caveats. For instance, git-svn can create a repository that mirrors Subversion, and changes to that repo can be sent back to Subversion via 'dcommit'. But the git-svn manpage explicitly cautions against making clones of that repository, so essentially it's a Subversion working copy that you can use git commands on. Bazaar has a bidirectional Subversion capability too, but its documentation notes that Subversion properties aren't supported at all.

    Here's the end that I'm pursuing. I want a Subversion repository and a DVCS repository that, in the steady state, have identical content. When something is changed on one, it's automatically mirrored to the other. Subversion users interact with the Subversion repository normally. DVCS users clone the DVCS repository, pull changes from it, and push changes back to it. Most importantly, they don't need to know that this special DVCS repository is associated with a Subversion repository. It would probably be nifty if any clone of the special repository were itself a special repository that could commit directly to Subversion, but it might be sufficient if only the special repository directly interacts with Subversion.

    I think what's mostly needed is to improve the bidirectional capability so that changes to Subversion properties are translated to changes in the DVCS repository, and some changes in the DVCS repository are translated to changes to Subversion properties. Or is the answer to create a new capability in Subversion that interacts with a DVCS repository, using the DVCS repository as just a special storage layer such as fsfs or bdb? If there's not a direct mapping between the things that Subversion and a DVCS regard as having versions, does that imply that there will always be some activity that cannot be recorded properly on one or the other?

  • Switching textstorage of NSTextViews back and forth

    - by Jakob Dam Jensen
    I'm trying to build a feature in a product which gives the user the ability to split a text view into two. The way this is done is by removing the NSTextView from its superview, creating an NSSplitView, and adding the text view as well as a new NSTextView instance to this split view. Lastly, I make these two text views share the same NSTextStorage so that they share the same content. It works great.

    The problem is when I want to make one of the two text views change its text storage. The replaceTextStorage: method on NSLayoutManager causes both NSTextViews to change text storage. The API documentation for replaceTextStorage: states:

        All NSLayoutManager objects sharing the original NSTextStorage object then share the new one. This method makes all the adjustments necessary to keep these relationships intact, unlike setTextStorage:.

    So it makes sense that it would do this. But the question is: how do I make it possible to have two (or more) text views first share the same storage, and after that have them each use their own? I've tried replacing the layout manager and even making new instances of NSTextView, but no luck... Any suggestions?

  • SVN Serve, Missing a Directory

    - by Ryan Smith
    I'm sure this is an asinine question, and I blame myself for not fully understanding how the SVNSERVE process works. I have an SVN repo, but it needs to be moved to a server within a client's cloud. I did this a while back and ran into the issue of the SVNSERVE.exe process not being set to the right directory. Now I have the SVNSERVE.exe process running as a Windows service and pointing at the right directory. There are two other repos there that are being served out fine from the same directory. I copied out the new repository just like I did with the others, but I'm getting the error "No repository found". I thought that SVNSERVE just looked at that directory and served out the repositories that were there, but I have had a hard time finding more information about that. I thought it was a Windows permission problem, but I set the whole folder to Full Control for EVERYONE, so that's not it. I feel horrible that I didn't fully understand this problem the first time I fought it, but it's late on a Sunday night and clients are yelling. Anyone know what I'm missing? Thanks.

    EDIT: It's specific to the repository. I tested the same process with some of the other repos we have on our server, and when I copied them up, they worked just as expected. This bug is breaking me, and I wish I could provide more details, but that's all I know. I'm going to try an SVN dump instead of an XCopy and see how that goes. I'll let you know.

  • AJAX Autosave

    - by antony.trupe
    What's the best JavaScript library, or plugin or extension to a library, that has implemented autosaving functionality? The specific need is to be able to 'save' a data grid. Think Gmail and Google Documents' autosave. I don't want to reinvent the wheel if it's already been invented. I'm looking for an existing implementation of the magical autoSave() function.

    Auto-saving: pushing to server code that saves to persistent storage, usually a DB. The server code framework is outside the scope of this question. Note that I'm not looking for an Ajax library, but a library/framework a level higher: one that interacts with the form itself. daemach introduced an implementation on top of jQuery at http://ideamill.synaptrixgroup.com/?p=3. I'm not convinced it meets the lightweight and well-engineered criteria, though.

    Criteria:
    - stable, lightweight, well engineered
    - saves onChange and/or onBlur
    - saves no more frequently than a given number of milliseconds
    - handles multiple updates happening at the same time
    - doesn't save if no change has occurred since the last save
    - saves to different URLs per input class

    Updates: I've stabilized a solution. See my answer below for links.

  • Pattern for the following condition in Java

    - by zahir hussain
    Hi, I want to know how to write a pattern. For example, the text is:

        "AboutGoogle AdWords Drive traffic and customers to your site. Pay through Cheque, Net Banking or Credit Card. Google Toolbar Add a search box to your browser. Google SMS To find out local information simply SMS to 54664. Gmail Free email with 7.2GB storage and less spam. Try Gmail today. Our ProductsHelp Help with Google Search, Services and ProductsGoogle Web Search Features Translation, I'm Feeling Lucky, CachedGoogle Services & Tools Toolbar, Google Web APIs, ButtonsGoogle Labs Ideas, Demos, ExperimentsFor Site OwnersAdvertising AdWords, AdSenseBusiness Solutions Google Search Appliance, Google Mini, WebSearchWebmaster Central One-stop shop for comprehensive info about how Google crawls and indexes websitesSubmit your content to Google Add your site, Google SitemapsOur CompanyPress Center News, Images, ZeitgeistJobs at Google Openings, Perks, CultureCorporate Info Company overview, Philosophy, Diversity, AddressesInvestor Relations Financial info, Corporate governanceMore GoogleContact Us FAQs, Feedback, NewsletterGoogle Logos Official Logos, Holiday Logos, Fan LogosGoogle Blog Insights to Google products and cultureGoogle Store Pens, Shirts, Lava lamps©2010 Google - Privacy Policy - Terms of Service"

    I have to search for some words, for example "google insights". How do I write the code in Java? I have written some small code; please check my code and answer my question. The code only finds where the search word is, but I also need to display some words before the search word and some words after it, similar to Google search results. My code is:

        Pattern p = Pattern.compile("(?i)(.*?)" + search + "");
        Matcher m = p.matcher(full);
        String title = "";
        while (m.find() == true) {
            title = m.group(1);
            System.out.println(title);
        }

    Here, full is the original content and search is the search word. Thanks in advance.

  • Amazon access key showing in URL for Carrierwave and Fog

    - by kcurtin
    I just switched from storing my images uploaded via Carrierwave locally to using Amazon S3 via the fog gem in my Rails 3.1 app. While images are being added, when I click on an image in my application, the URL is exposing my access key and a signature. Here is a sample URL (XXX replaces the actual strings):

        https://s3.amazonaws.com/bucketname/uploads/photo/image/2/IMG_4842.jpg?AWSAccessKeyId=XXX&Signature=XXX%3D&Expires=1332093418

    This is happening in development (localhost:3000) and when I am using Heroku for production. Here is my uploader:

        class ImageUploader < CarrierWave::Uploader::Base
          include CarrierWave::RMagick

          storage :fog

          def store_dir
            "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
          end

          process :convert => :jpg
          process :resize_to_limit => [640, 640]

          version :thumb do
            process :convert => :jpg
            process :resize_to_fill => [280, 205]
          end

          version :avatar do
            process :convert => :jpg
            process :resize_to_fill => [120, 120]
          end
        end

    And my config/initializers/fog.rb:

        CarrierWave.configure do |config|
          config.fog_credentials = {
            :provider              => 'AWS',
            :aws_access_key_id     => 'XXX',
            :aws_secret_access_key => 'XXX',
          }
          config.fog_directory = 'bucketname'
          config.fog_public    = false
        end

    Anyone know how to make sure this information isn't available?

  • How to setup Continuous Integration and Continuous Deployment for Django projects?

    - by ycseattle
    Hello, I am researching how to set up CI and continuous deployment for a small-team project: a Django-based web application. Here are the needs:

    - Developers check code into a hosted SVN server (unfuddle.com).
    - A CI server detects the new checkin, checks out the source, builds, and runs functional tests.
    - If all tests pass, it deploys the code to the web server on Amazon EC2.

    For now, the CI server is also responsible for running the functional tests. I figured out that I can use Hudson as the CI server, use Selenium to run functional tests, and use Fabric to deploy the build to the remote web server in the Amazon cloud. I am new to Django development and not very familiar with open-source tools. My questions are:

    - I can find some information on integrating Hudson with Selenium, but I couldn't find much information on how to integrate Fabric with Hudson. Is this setup viable? Do you see problems?
    - How do I integrate and deploy database changes? Most likely, in the early stage we will change the database schema very often along with code changes. I used to use Visual Studio, and the database project made it very simple to deploy. I wonder if there is an established, well-supported way to do that.

    Thanks!!

  • Moving inserted container element if possible

    - by doublep
    I'm trying to achieve the following optimization in my container library: when inserting an lvalue-referenced element, copy it to internal storage; but when inserting an rvalue-referenced element, move it if supported. The optimization is supposed to be useful e.g. if the contained element type is something like std::vector, where moving if possible would give a substantial speedup. However, so far I was unable to devise any working scheme for this. My container is quite complicated, so I can't just duplicate insert() code several times: it is large. I want to keep all "real" code in some inner helper, say do_insert() (may be templated), and various insert()-like functions would just call that with different arguments. My best bet code for this (a prototype, of course, without doing anything real):

        #include <iostream>
        #include <utility>

        struct element
        {
            element () { };
            element (element&&) { std::cerr << "moving\n"; }
        };

        struct container
        {
            void insert (const element& value)
            { do_insert (value); }

            void insert (element&& value)
            { do_insert (std::move (value)); }

        private:
            template <typename Arg>
            void do_insert (Arg arg)
            {
                element x (arg);
            }
        };

        int main ()
        {
            {
                // Shouldn't move.
                container c;
                element x;
                c.insert (x);
            }
            {
                // Should move.
                container c;
                c.insert (element ());
            }
        }

    However, this doesn't work, at least with GCC 4.4 and 4.5: it never prints "moving" on stderr. Or is what I want impossible to achieve, and is that why emplace()-like functions exist in the first place?

  • Caching Authentication Data

    - by PartlyCloudy
    Hi, I'm currently implementing a REST web service using CouchDB and RESTlet. The RESTlet layer is mainly for authentication and some minor filtering of the JSON data served by CouchDB:

        Clients <= HTTP => [ RESTlet <= HTTP => CouchDB ]

    I'm using CouchDB also to store user login data, because I don't want to add an additional database server for that purpose. Thus, each request to my service causes two CouchDB requests conducted by RESTlet (one for the auth data plus the "real" request). In order to keep the service as efficient as possible, I want to reduce the number of requests, in this case the redundant requests for login data.

    My idea now is to provide a cache (i.e. an LRU cache via LinkedHashMap) within my RESTlet application that caches login data, because HTTP caching will probably not be enough. But how do I invalidate the cached data once a user changes the password, for instance? Thanks to REST, the application might run on several servers in parallel, and I don't want to create a central instance just to cache login data.

    Currently, I save requested auth data in the cache and try to authenticate new requests using it. If authentication fails or there is no entry available, I dispatch a GET request to my CouchDB storage in order to obtain the actual auth data. So in the worst case, users that have changed their data will perhaps still be able to log in with their old credentials. How can I deal with that? And what is a good strategy to keep the cache(s) up to date in general? Thanks in advance.
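
    For the cache itself, the LinkedHashMap route mentioned above is nearly free in Java (construct the map with accessOrder = true and override removeEldestEntry to evict). As a language-neutral illustration of the same idea, here is a minimal LRU cache sketch, shown in C# for consistency with the other examples on this page; the invalidation question is separate and not addressed here:

        using System.Collections.Generic;

        // Minimal LRU cache: a dictionary for O(1) lookup plus a linked
        // list ordering keys from most- to least-recently used.
        class LruCache<TKey, TValue>
        {
            private readonly int _capacity;
            private readonly Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>> _map =
                new Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>>();
            private readonly LinkedList<KeyValuePair<TKey, TValue>> _order =
                new LinkedList<KeyValuePair<TKey, TValue>>();

            public LruCache(int capacity) { _capacity = capacity; }

            public bool TryGet(TKey key, out TValue value)
            {
                LinkedListNode<KeyValuePair<TKey, TValue>> node;
                if (_map.TryGetValue(key, out node))
                {
                    _order.Remove(node);      // promote to most recently used
                    _order.AddFirst(node);
                    value = node.Value.Value;
                    return true;
                }
                value = default(TValue);
                return false;
            }

            public void Put(TKey key, TValue value)
            {
                LinkedListNode<KeyValuePair<TKey, TValue>> node;
                if (_map.TryGetValue(key, out node))
                    _order.Remove(node);      // replacing an existing entry
                else if (_map.Count >= _capacity)
                {
                    _map.Remove(_order.Last.Value.Key);   // evict LRU entry
                    _order.RemoveLast();
                }
                node = new LinkedListNode<KeyValuePair<TKey, TValue>>(
                    new KeyValuePair<TKey, TValue>(key, value));
                _order.AddFirst(node);
                _map[key] = node;
            }
        }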

  • Thoughts on GoGrid vs EC2

    - by Jason
    I am currently hosting my SaaS application at GoGrid (Microsoft stack). Here's what I have:

    - Database server - physical box, 12 GB RAM, 2 x quad-core CPU (2.13 GHz Xeon E5506)
    - 2 web / app servers - cloud servers, 2 GB RAM, 2 VCPUs
    - 300 GB monthly bandwidth

    I am paying around $900 / month for this. My web / app servers are busting at the seams and need to be upgraded to 4 GB of RAM. I also need a firewall, and GoGrid just added this service for an additional $200. After the upgrade, I will be paying around $1,400. I started looking at Amazon EC2, specifically this config:

    - Database server - "High Memory Double Extra Large Instance" - 34 GB RAM, 13 EC2 compute units
    - 2 web / app servers - "Large Instance" - 7.5 GB RAM, 4 EC2 compute units

    If I go with 1-year reserved instances, my upfront cost would be $4,500 and my monthly cost would be $700. This comes to $1,075 / month when amortized. Amazon also includes a firewall for free. Here are my questions:

    - Do any of you have experience running a database (especially SQL Server) on an EC2 instance? How did it perform compared to a dedicated machine? One of my major concerns is with disk I/O.
    - Amazon's description of a compute unit is fairly vague. Any ideas on how the CPU performance on the database servers would compare?

    I am hoping that the Amazon solution will provide significantly better performance than my current or even improved GoGrid setup. Having a virtual database server would also be nice in terms of availability. Right now I would be in serious trouble if I had any hardware issues. Thanks for any insight...

  • Switch/case without break inside DllMain

    - by Sherwood Hu
    I have a DllMain that allocates thread-local storage when a thread attaches to the DLL. Code as below:

        BOOL APIENTRY DllMain(HMODULE hModule, DWORD ul_reason_for_call, LPVOID lpReserved)
        {
            LPVOID lpvData;
            BOOL fIgnore;

            switch (ul_reason_for_call)
            {
            case DLL_PROCESS_ATTACH:
                onProcessAttachDLL();

                // Allocate a TLS index.
                if ((dwTlsIndex = TlsAlloc()) == TLS_OUT_OF_INDEXES)
                    return FALSE;

                // how can it jump to next case???
            case DLL_THREAD_ATTACH:
                // Initialize the TLS index for this thread.
                lpvData = (LPVOID) LocalAlloc(LPTR, MAX_BUFFER_SIZE);
                if (lpvData != NULL)
                    fIgnore = TlsSetValue(dwTlsIndex, lpvData);
                break;
            ...
        }

    I know that for the main thread, DLL_THREAD_ATTACH is not entered, as per the Microsoft documentation. However, the above code works. I am using VC2005. When I entered the debugger, I saw that it entered the DLL_THREAD_ATTACH case even though ul_reason_for_call = 1 (DLL_PROCESS_ATTACH)! How can that happen? And if I add `break` at the end of the DLL_PROCESS_ATTACH block, the DLL fails to work. How can this happen?

  • EC2 persistence of machine

    - by Seagull
    I want to 'persist' my Amazon EC2 images. My scenario:

    - I have a range of Windows and Linux machines. Some machines are EBS-backed, whereas others are S3-backed.
    - I need to be able to persist a machine (put it to sleep), preferably keeping all settings active as I had them when the machine was running.
    - I need to be able to quickly wake a machine up from sleep (ideally with an SLA of less than 2 min to turn on, if such an SLA is available with Amazon).

    Here's the stuff that confuses me:

    - AWS allows me to put EBS-backed machines to sleep, but not S3-backed ones.
    - I believe I can put S3-backed machines into some sort of persistence mode, but this involves shutting down the machine, writing it to S3 storage and then recovering from there (not a real sleep mode, but at least I don't continue to get billed for CPU).
    - S3 backing seems to take a long time either to write a machine to disk or to recover (turn on) a machine.
    - I can't immediately tell which machines are EBS-backed and which are S3-backed. It seems like I can instantiate either type, but it's not immediately clear how Amazon decides whether a given machine should be EBS- or S3-backed.

    Advice?

  • Sparse (Pseudo) Infinite Grid Data Structure for Web Game

    - by Ming
    I'm considering trying to make a game that takes place on an essentially infinite grid. The grid is very sparse:

    - certain small regions of relatively high density;
    - relatively few isolated nonempty cells;
    - the amount of the grid in use is too large to implement naively but probably smallish by "big data" standards (I'm not trying to map the Internet or anything like that);
    - this needs to be easy to persist.

    Here are the operations I may want to perform (reasonably efficiently) on this grid:

    - Ask for some small rectangular region of cells and all their contents (a player's current neighborhood).
    - Set individual cells or blit small regions (the player is making a move).
    - Ask for the rough shape or outline/silhouette of some larger rectangular regions (a world map or region preview).
    - Find some regions with approximately a given density (player spawning location).
    - Approximate shortest path through gaps of at most some small constant empty spaces per hop (it's OK to be a bad approximation often, but not OK to keep heading the wrong direction searching).
    - Approximate convex hull for a region.

    Here's the catch: I want to do this in a web app. That is, I would prefer to use existing data storage (perhaps in the form of a relational database) and relatively little external dependency (preferably avoiding the need for a persistent process). Guys, what advice can you give me on actually implementing this? How would you do this if the web-app restrictions weren't in place? How would you modify that if they were? Thanks a lot, everyone!
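
    For the sparse-storage core, a hash map keyed by cell coordinates is a natural starting point, and the same shape maps directly onto a relational table keyed on (x, y), which fits the no-persistent-process constraint. A minimal sketch of the idea follows (C# purely for illustration; the persistence layer and the fancier queries are omitted):

        using System.Collections.Generic;

        // Sparse infinite grid: only non-empty cells consume storage.
        // (x, y) is packed into a single long so the dictionary needs
        // no custom comparer.
        class SparseGrid<TCell>
        {
            private readonly Dictionary<long, TCell> _cells = new Dictionary<long, TCell>();

            private static long Key(int x, int y)
            {
                return ((long)x << 32) | (uint)y;
            }

            public void Set(int x, int y, TCell cell) { _cells[Key(x, y)] = cell; }

            public bool TryGet(int x, int y, out TCell cell)
            {
                return _cells.TryGetValue(Key(x, y), out cell);
            }

            // Rectangular neighborhood query: cheap when the region is
            // small, since it costs only width * height lookups.
            public IEnumerable<TCell> Region(int x0, int y0, int w, int h)
            {
                for (int x = x0; x < x0 + w; x++)
                    for (int y = y0; y < y0 + h; y++)
                    {
                        TCell cell;
                        if (_cells.TryGetValue(Key(x, y), out cell))
                            yield return cell;
                    }
            }
        }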

  • Can someone please debug this Windows Azure Application?

    - by Vimvq1987
    Here's the myTODO project from CodePlex: myTODO project. I added the necessary libraries, added storage, and changed obsolete types/methods, but everything went wrong when I debugged it. An exception was thrown here (in TableStorage.cs):

        public IEnumerable<TElement> ExecuteWithRetries(RetryPolicy retry)
        {
            IEnumerable<TElement> ret = null;
            if (retry == null)
            {
                throw new ArgumentNullException("retry");
            }
            retry(() =>
            {
                try
                {
                    ret = _query.Execute();
                }
                catch (InvalidOperationException e)
                {
                    if (TableStorageHelpers.CanBeRetried(e))
                    {
                        throw new TableRetryWrapperException(e);
                    }
                    throw;
                }
            });
            return ret;
        }

    I'm using Visual Studio 2008, SQL Server 2008, and Windows Azure SDK v1.1. Can anyone please debug this project for me, or suggest some way to get it working? This request is urgent. Any help is much appreciated.

    PS: If you can't download the files, please let me know and I'll upload them to another host.
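
    For context on the snippet above: the retry parameter is a delegate that runs the supplied action and re-runs it when the action signals a transient failure. A simplified sketch of such a policy is below; this is an illustrative assumption about the sample StorageClient's design, not its shipped code, and the exception stub stands in for the library's type:

        using System;
        using System.Threading;

        delegate void RetryPolicy(Action action);

        // Stub standing in for the sample library's wrapper exception.
        class TableRetryWrapperException : Exception { }

        static class RetryPolicies
        {
            // Retries the action a fixed number of times with a constant
            // delay; only the retryable marker exception is swallowed.
            public static RetryPolicy Linear(int retries, TimeSpan delay)
            {
                return action =>
                {
                    for (int attempt = 0; ; attempt++)
                    {
                        try
                        {
                            action();
                            return;
                        }
                        catch (TableRetryWrapperException)
                        {
                            if (attempt >= retries)
                                throw;
                            Thread.Sleep(delay);
                        }
                    }
                };
            }
        }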

  • Display pdf file inline in Rails app

    - by Martas
    Hi, I have a pdf file attachment saved in the cloud. The file is attached using attachment_fu. All I do to display it in the view is:

        <%= image_tag @model.pdf_attachment.public_filename %>

    When I load the page with this code in the browser, it does what I want: it displays the attached pdf file. But only on Mac. On Windows, browsers will display a broken image placeholder. Chrome's Developer Tools report: "Resource interpreted as image but transferred with MIME type application/pdf."

    I also tried sending the file from the controller. In PdfAttachmentController:

        def send_pdf_attachment
          pdf_attachment = PdfAttachment.find params[:id]
          send_file pdf_attachment.public_filename,
                    :type => pdf_attachment.content_type,
                    :file_name => pdf_attachment.filename,
                    :disposition => 'inline'
        end

    In routes.rb:

        map.send_pdf_attachment '/pdf_attachments/send_pdf_attachment/:id',
                                :controller => 'pdf_attachments',
                                :action => 'send_pdf_attachment'

    And in the view:

        <%= send_pdf_attachment_path @model.pdf_attachment %>

    or

        <%= image_tag( send_pdf_attachment_path @model.pdf_attachment ) %>

    And that doesn't display the file on Mac (I didn't try on Windows); it displays the path:

        pdf_attachments/send_pdf_attachment/35

    So, my question is: what do I do to properly display a pdf file inline? Thanks, martin

  • Is there a limit for the number of files in a directory on an SD card?

    - by jamesh
    I have a project written for Android devices. It generates a large number of files each day: all text files and images. The app uses a database to reference these files. The app is supposed to clean up these files after a little use (perhaps after a few days), but this process may or may not be working. That is not the subject of this question.

    Due to a historic accident, the organization of the files is somewhat naive: everything is in the same directory - a .hidden directory which contains a zero-byte .nomedia file to prevent the MediaScanner from indexing it. Today, I am seeing an error reported:

        java.io.IOException: Cannot create: /sdcard/.hidden/file-4200.html
            at java.io.File.createNewFile(File.java:1263)

    Regarding the SD card, I see it has plenty of storage left, but counting:

        $ cd /Volumes/NO_NAME/.hidden
        $ ls | wc -w
        9058

    Deleting a number of files seems to have allowed the file creation for today to proceed. Regrettably, I did not try touching a new file from the command line to try to reproduce the error; I also deleted several hundred files rather than a handful. However, my question is: are there hard limits on file size or the number of files in a directory? Am I even on the right track here?

    Nota bene: the SD card is as-is, i.e. I haven't formatted it, so I would guess it would be in a FAT-* format. The FAT-32 format has hard limits of 2GB on file size (well above the file sizes I am dealing with) and a limit on the number of files in the root directory. I am definitely not writing files in the root directory.

  • SQLce DAL: LINQ to SQL or Entity Framework?

    - by bretddog
    Hi, I'm learning databases, using SQL CE, and need business object to database mapping. Currently I am trying to decide whether to use LINQ to SQL or Entity Framework. (I understand L2S a bit, but haven't familiarized myself with EF yet.) The program will only be developed and used by myself, so I have good control of the priorities:

    - I don't need to consider a potential change of database type or data storage type, as I'm quite certain SQL CE will stay sufficient.
    - I DO expect continued development and changes to the data schema while the program is in active use: changed business object properties (hence database columns), and possibly the overall table schema. So old data must be migrated to the new schema.
    - I also want to keep a decent degree of layer separation DAL/BLL; although this may not be necessary, it is good for me to learn these principles.

    My question is: with these priorities, would I have any benefit by choosing either LINQ to SQL vs. Entity Framework? (And please explain why.) By the way, the project involves a very simple table schema with only 4-5 tables and very simple relations. Thanks!
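
    On the layer-separation point, one way to keep the ORM choice reversible is to hide whichever one is picked behind a small repository interface, so the BLL never references L2S or EF types directly. A minimal sketch (the names are illustrative, not from the post):

        using System.Collections.Generic;

        // The business layer programs against this interface; whether the
        // implementation uses LINQ to SQL or Entity Framework stays a
        // DAL-internal detail.
        interface IRepository<T>
        {
            T GetById(int id);
            IEnumerable<T> GetAll();
            void Add(T entity);
            void Remove(T entity);
            void Save();
        }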

  • Compile time float packing/punning

    - by detly
    I'm writing C for the PIC32MX, compiled with Microchip's PIC32 C compiler (based on GCC 3.4). My problem is this: I have some reprogrammable numeric data that is stored either in EEPROM or in the program flash of the chip. This means that when I want to store a float, I have to do some type punning:

        typedef union
        {
            int intval;
            float floatval;
        } IntFloat;

        unsigned int float_as_int(float fval)
        {
            IntFloat intf;
            intf.floatval = fval;
            return intf.intval;
        }

        // Stores an int of data in whatever storage we're using
        void StoreInt(unsigned int data, unsigned int address);

        void StoreFPVal(float data, unsigned int address)
        {
            StoreInt(float_as_int(data), address);
        }

    I also include default values as an array of compile-time constants. For (unsigned) integer values this is trivial: I just use the integer literal. For floats, though, I have to use this Python snippet to convert them to their word representation to include them in the array:

        import struct
        hex(struct.unpack("I", struct.pack("f", float_value))[0])

    ...and so my array of defaults has these indecipherable values:

        const unsigned int DEFAULTS[] = {
            0x00000001, // Some default integer value, 1
            0x3C83126F, // Some default float value, 0.005
        };

    (These actually take the form of X macro constructs, but that doesn't make a difference here.) Commenting is nice, but is there a better way? It'd be great to be able to do something like:

        const unsigned int DEFAULTS[] = {
            0x00000001,                  // Some default integer value, 1
            COMPILE_TIME_CONVERT(0.005), // Some default float value, 0.005
        };

    ...but I'm completely at a loss, and I don't even know if such a thing is possible.

    Notes:
    - Obviously "no, it isn't possible" is an acceptable answer if true.
    - I'm not overly concerned about portability, so implementation-defined behaviour is fine, undefined behaviour is not (I have the IDB appendix sitting in front of me).
    - As far as I'm aware, this needs to be a compile-time conversion, since DEFAULTS is at global scope. Please correct me if I'm wrong about this.

  • Version Control: multiple version hell, file synchronization

    - by SigTerm
    Hello. I would like to know how you normally deal with this situation. I have a set of utility functions: say, 5-10 files. Technically they are a static library, cross-platform - SConscript/SConstruct plus a Visual Studio project (not solution).

    Those utility functions are used in multiple small projects (15+, and the number increases over time). Each project has a copy of a few files or of the entire library, not a link into one central place. Sometimes a project uses one file, sometimes two, some use everything. Normally, the utility functions are included as a copy of every file plus the SConscript/SConstruct or Visual Studio project (depending on the situation). Each project has a separate git repository. Sometimes one project is derived from another, sometimes it isn't. You work on every one of them, in random order. There are no other people (to make things simpler).

    The problem arises when, while working on one project, you modify those utility function files. Because each project has its own copy of each file, this introduces a new version, which leads to a mess when you later (a week later, for example) try to guess which version has the most complete functionality (i.e. you added a function to a.cpp in one project, and added another function to a.cpp in another project, which created a version fork).

    How would you handle this situation to avoid "version hell"? One way I can think of is using symbolic links/hard links, but that isn't perfect: if you delete the central storage, it will all go to hell, and hard links won't work on a dual-boot system (although symbolic links will). It looks like what I need is something like an advanced git repository, where the code for the project is stored in one local repository but is synchronized with multiple external repositories. But I'm not sure how to do that, or whether it is possible with git. So, what do you think?

  • Installing PIL on Cygwin

    - by Dustin
    I've been struggling all morning to get PIL installed on Cygwin. The errors I get are not consistent with common errors I find using Google. Perhaps a Linux guru can see an obvious problem in this output:

        $ python setup.py install
        running install
        running build
        running build_py
        running build_ext
        building '_imaging' extension
        gcc -fno-strict-aliasing -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -DHAVE_LIBZ -I/usr/include/freetype2 -IlibImaging -I/usr/include -I/usr/include/python2.5 -c _imaging.c -o build/temp.cygwin-1.7.2-i686-2.5/_imaging.o
        In file included from /usr/lib/gcc/i686-pc-cygwin/3.4.4/include/syslimits.h:7,
                         from /usr/lib/gcc/i686-pc-cygwin/3.4.4/include/limits.h:11,
                         from /usr/include/python2.5/Python.h:18,
                         from _imaging.c:75:
        /usr/lib/gcc/i686-pc-cygwin/3.4.4/include/limits.h:122:61: limits.h: No such file or directory
        In file included from _imaging.c:75:
        /usr/include/python2.5/Python.h:32:19: stdio.h: No such file or directory
        /usr/include/python2.5/Python.h:34:5: #error "Python.h requires that stdio.h define NULL."
        /usr/include/python2.5/Python.h:37:20: string.h: No such file or directory
        /usr/include/python2.5/Python.h:39:19: errno.h: No such file or directory
        /usr/include/python2.5/Python.h:41:20: stdlib.h: No such file or directory
        /usr/include/python2.5/Python.h:43:20: unistd.h: No such file or directory
        /usr/include/python2.5/Python.h:55:20: assert.h: No such file or directory
        In file included from /usr/include/python2.5/Python.h:57,
                         from _imaging.c:75:
        /usr/include/python2.5/pyport.h:7:20: stdint.h: No such file or directory
        In file included from /usr/include/python2.5/Python.h:57,
                         from _imaging.c:75:
        /usr/include/python2.5/pyport.h:89: error: parse error before "Py_uintptr_t"
        /usr/include/python2.5/pyport.h:89: warning: type defaults to `int' in declaration of `Py_uintptr_t'
        /usr/include/python2.5/pyport.h:89: warning: data definition has no type or storage class
        /usr/include/python2.5/pyport.h:90: error: parse error before "Py_intptr_t"
        /usr/include/python2.5/pyport.h:90: warning: type defaults to `int' in declaration of `Py_intptr_t'
        ... more lines like this

  • How to achieve high availability?

    - by tanyehzheng
    My boss wants to have a system that takes into account a continent-wide catastrophic event. He wants to have two servers in the US and two servers in Asia (1 login server and 1 worker server on each continent). In the event that an earthquake breaks the connection between the two continents, both sides should work alone. When the connection is restored, they should sync with each other and return to normal. An external cloud system is not allowed, as he has no confidence in one. Requirements:

    - The system should take scalability into account, which means the addition of new servers should be easy to configure.
    - The servers should be load balanced.
    - The connection between the servers should be very secure (encrypted and sent through SSL, although SSL takes care of encryption).
    - The system should let one and only one user log in per account (beware of the latency between continents: two users sharing an account may reach both login servers at the same time).

    Please help. I'm already at my wit's end. Thank you in advance.

  • How to access the WebBrowser object in this code? (C++)

    - by extintor
    I found this example, http://www.mvps.org/user32/webhost.cab, that hosts an Internet Explorer WebBrowser object, and it uses this code to access the object:

        void webhostwnd::CreateEmbeddedWebControl(void)
        {
            OleCreate(CLSID_WebBrowser, IID_IOleObject, OLERENDER_DRAW, 0, &site, &storage, (void**)&mpWebObject);

            mpWebObject->SetHostNames(L"Web Host", L"Web View");

            // I have no idea why this is necessary. remark it out and everything works perfectly.
            OleSetContainedObject(mpWebObject, TRUE);

            RECT rect;
            GetClientRect(hwnd, &rect);

            mpWebObject->DoVerb(OLEIVERB_SHOW, NULL, &site, -1, hwnd, &rect);

            IWebBrowser2* iBrowser;
            mpWebObject->QueryInterface(IID_IWebBrowser2, (void**)&iBrowser);

            VARIANT vURL;
            vURL.vt = VT_BSTR;
            vURL.bstrVal = SysAllocString(L"http://google.com");

            VARIANT ve1, ve2, ve3, ve4;
            ve1.vt = VT_EMPTY;
            ve2.vt = VT_EMPTY;
            ve3.vt = VT_EMPTY;
            ve4.vt = VT_EMPTY;

            iBrowser->put_Left(0);
            iBrowser->put_Top(0);
            iBrowser->put_Width(rect.right);
            iBrowser->put_Height(rect.bottom);

            iBrowser->Navigate2(&vURL, &ve1, &ve2, &ve3, &ve4);

            VariantClear(&vURL);
            iBrowser->Release();
        }

    I don't have much experience with C++. I want to know how to access that same IE object (to use Navigate2, for example) from a button or something like that. How could I achieve this?
