Search Results

Search found 9156 results on 367 pages for 'cloud storage'.

Page 198/367 | < Previous Page | 194 195 196 197 198 199 200 201 202 203 204 205  | Next Page >

  • Having trouble coming up with a good architecture for a client/server application

    - by rmw1985
    I am writing a remote backup service meant to support 1000+ users. It is going to use librsync to store reverse diffs (like rdiff-backup) and make data transfer efficient. My trouble is that I do not know the "best" way to implement the client/server model. I have thought of doing it like rsync/rdiff-backup do: having the client open an SSH connection, run a server executable, and communicate across pipes. Another alternative would be to write a server which would handle authentication and communicate with the client via SSL. The reason I have thought of this is that there is "state" information, like how many backup jobs are set up, etc., that must be maintained. Another alternative that I have thought about is running a "web service" using Pylons or Django to handle the authentication, but I do not know how to bridge that to the "storage" side. Since I am using librsync, I cannot use "dumb" storage. Is there a way to pipe data through Pylons or Django to a server-side handler that would do the rsync calculation? This seems to me like maybe a dumb question but I am sort of lost. Any tips or suggestions from more experienced developers would be extremely helpful.
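
    For the SSH-plus-pipes option, a minimal sketch of the transport in Python (the host name, helper binary path, and one-line request protocol are all hypothetical, not from the question):

        # Sketch of the rsync/rdiff-backup transport model: the client spawns
        # a helper on the server over SSH and talks to it through the pipes.
        import subprocess

        proc = subprocess.Popen(
            ["ssh", "backup@server.example.com", "/usr/local/bin/backup-helper"],
            stdin=subprocess.PIPE,
            stdout=subprocess.PIPE,
        )
        proc.stdin.write(b"NEWJOB /home/user/docs\n")  # one request per line
        proc.stdin.flush()
        reply = proc.stdout.readline()                 # helper answers over the pipe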

    Read the article

  • Windows Azure: asp.net <appSettings> for ChartHttpHandler not found

    - by veda
    I have a problem setting the ChartHttpHandler in web.config for Windows Azure. Initially I added a ChartImageHandler under the system.webServer section of my web.config file:

        <remove name="ChartImageHandler"/>
        <add name="ChartImageHandler" preCondition="integratedMode" verb="GET,HEAD" path="ChartImg.axd" type="System.Web.UI.DataVisualization.Charting.ChartHttpHandler, System.Web.DataVisualization, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>

    Then I got an error stating: Invalid temp directory in chart handler configuration [c:\TempImageFiles\]. Then I found that I should change the setting from

        <add key="ChartImageHandler" value="storage=file;timeout=20;dir=c:\TempImageFiles\;" />

    to

        <add key="ChartImageHandler" value="storage=file;timeout=20;" />

    but I am not able to find this setting in my web.config file. I tried adding it under one section and got an error that it is an invalid child; likewise for another section, which gave the same error... So I added it separately, and then I am not able to see anything: the webpage is just blank. What do I have to do? Can anyone tell me how to solve it?
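
    For anyone hitting the same wall: the ChartImageHandler key is an appSetting, so it normally lives in the top-level <appSettings> element rather than under <system.web> or <system.webServer>. A sketch (the storage=memory variant is an assumption worth checking against the Chart control docs; it sidesteps the temp directory entirely, which does not exist on an Azure instance):

        <configuration>
          <appSettings>
            <!-- keep chart images in memory instead of c:\TempImageFiles\,
                 which is not present on an Azure role instance -->
            <add key="ChartImageHandler" value="storage=memory;deleteAfterServicing=true;" />
          </appSettings>
          <!-- system.web / system.webServer sections as before -->
        </configuration>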

    Read the article

  • Android: Is it possible to have multiple styles inside a TextView?

    - by Legend
    I was wondering if it's possible to set multiple styles for different pieces of text inside a TextView. For instance, I am setting the text as follows:

        descbox.setText(line1 + "\n" + line2 + "\n" + word1 + "\t" + word2 + "\t" + word3);

    Now, is it possible to have a different style for each text element? I mean bold for line1, normal for word1 and so on... I found this http://developer.android.com/guide/appendix/faq/commontasks.html#selectingtext:

        // Get our EditText object.
        EditText vw = (EditText)findViewById(R.id.text);

        // Set the EditText's text.
        vw.setText("Italic, highlighted, bold.");

        // If this were just a TextView, we could do:
        // vw.setText("Italic, highlighted, bold.", TextView.BufferType.SPANNABLE);
        // to force it to use Spannable storage so styles can be attached.
        // Or we could specify that in the XML.

        // Get the EditText's internal text storage
        Spannable str = vw.getText();

        // Create our span sections, and assign a format to each.
        str.setSpan(new StyleSpan(android.graphics.Typeface.ITALIC), 0, 7, Spannable.SPAN_EXCLUSIVE_EXCLUSIVE);
        str.setSpan(new BackgroundColorSpan(0xFFFFFF00), 8, 19, Spannable.SPAN_EXCLUSIVE_EXCLUSIVE);
        str.setSpan(new StyleSpan(android.graphics.Typeface.BOLD), 21, str.length() - 1, Spannable.SPAN_EXCLUSIVE_EXCLUSIVE);

    But it uses position numbers inside the text. Is there a cleaner way to do this?
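
    One cleaner route, assuming simple bold/italic markup is enough for the styling needed: let android.text.Html compute the spans from markup instead of hand-maintained offsets. A sketch using the variables from the question:

        // Html.fromHtml builds the StyleSpans from the markup, so no offsets
        // are ever computed by hand (requires import android.text.Html).
        descbox.setText(Html.fromHtml(
            "<b>" + line1 + "</b><br/>" + line2 + "<br/><i>" + word1 + "</i>"));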

    Read the article

  • REST API error return good practices

    - by Remus Rusanu
    I'm looking for guidance on good practices when it comes to returning errors from a REST API. I'm working on a new API, so I can take it in any direction right now. My content type is XML at the moment, but I plan to support JSON in the future. I am now adding some error cases, like for instance a client attempts to add a new resource but has exceeded his storage quota. I am already handling certain error cases with HTTP status codes (401 for authentication, 403 for authorization and 404 for plain bad request URIs). I looked over the blessed HTTP error codes, but none of the 400-417 range seems right to report application-specific errors. So at first I was tempted to return my application error with 200 OK and a specific XML payload (i.e. "Pay us more and you'll get the storage you need!") but I stopped to think about it and it seems too SOAPy (/shrug in horror). Besides, it feels like I'm splitting the error responses into distinct cases, as some are HTTP status code driven and others are content driven. So what is the SO crowd recommendation? Good practices (please explain why!) and also, from a client point of view, what kind of error handling in the REST API makes life easier for the client code?
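
    One hybrid that avoids the 200-with-error-body trap (a sketch, not a blessed standard): keep a 4xx status so generic clients still see a failure, and carry the application-specific detail in the body. For the quota case that might look like:

        HTTP/1.1 403 Forbidden
        Content-Type: application/xml

        <error>
          <code>storage-quota-exceeded</code>
          <message>Pay us more and you'll get the storage you need!</message>
        </error>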

    Read the article

  • .NET out of memory troubleshooting

    - by bushman
    After reading a few enlightening articles about memory in the .NET technology (Out of Memory does not refer to physical memory, 597499), I thought I understood why a C# app would throw an out of memory exception -- until I started experimenting with two servers -- both have 2.5 GB of RAM, run Windows Server 2003, and run identical programs. The only significant difference between the two is that one has 7% hard drive storage left and the other more than 50%. The server with 7% storage space left is consistently throwing an out of memory exception while the other is performing consistently well. My app is a C# web application that processes hundreds of MB of String objects. Why would this difference happen, seeing that the most likely reason for the out of memory issue is running out of contiguous virtual address space? What solutions do you propose, and what do you say about the following:

        1. Turn on the /3GB switch to increase the virtual address space.
        2. Instead of using one giant string object, break it up into smaller pieces and collect it in a jagged array (here I have to find a way to return to the caller in some other way, as right now the return type is a string).

    thanks SO
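
    A minimal sketch of option 2, assuming the payload can be assembled in pieces (the chunk size and all names here are illustrative, not from the original app):

        using System.Collections.Generic;
        using System.IO;

        // Accumulate output in fixed-size chunks so no single allocation needs
        // hundreds of MB of contiguous virtual address space.
        class ChunkedText
        {
            const int ChunkSize = 1 << 20;               // 1M chars per chunk
            readonly List<char[]> chunks = new List<char[]>();
            char[] current; int used;

            public void Append(string s)
            {
                foreach (char c in s)
                {
                    if (current == null || used == ChunkSize)
                    {
                        current = new char[ChunkSize];   // small, independent blocks
                        chunks.Add(current);
                        used = 0;
                    }
                    current[used++] = c;
                }
            }

            // Hand the data back without ever concatenating it into one string.
            public void WriteTo(TextWriter w)
            {
                for (int i = 0; i < chunks.Count; i++)
                    w.Write(chunks[i], 0, i == chunks.Count - 1 ? used : ChunkSize);
            }
        }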

    Read the article

  • CouchDB, HDFS, HBase or which is right for my situation?

    - by Lucas
    Hello all. This question is regarding data storage systems such as CouchDB, HDFS and HBase, and specifically which one is the right fit for my situation. I am looking at making a simple and customized Document Management System for my organization. Basically, we need the ability to store some Word documents, PDFs and other similar files. I also want to store metadata about these files (e.g., author, dates, etc). Usage permissions would also be handy, but that can probably be built using metadata. I would also need the ability to full-text index. The ability to version, while not required, would be extremely useful. I would like the ability to simply add hardware to expand the resources of the system, and the system must support Network Attached Storage over the CIFS or NFS protocol(s). I have read about CouchDB, HDFS and HBase. My preferred programming language is C#, as all of my end users will be running Windows machines and I will want to make both web and WinForms client implementations. My question is: which solution best fits my needs? Based on my research, it appears that CouchDB (utilizing CouchDB-Lounge and CouchDB-Lucene) perfectly fits my needs. However, I am worried that, since I have worked with CouchDB, I might be overlooking something useful for my needs in HDFS or HBase or something similar, due to bias. Any and all opinions are welcome, as I am looking for community input and I really do not want to make the wrong choice at the start of my project. Please ask if you need more information. I thank you all for your time, input and assistance.

    Read the article

  • Django 'ImproperlyConfigured' error after deployment on google app engine

    - by oreon
    Hello, I'm currently trying to get my first Django project running on Google App Engine. I followed the instructions given here http://www.allbuttonspressed.com/projects/djangoappengine as best I could. Unfortunately I have run into some issues. Locally everything runs fine, no problems. I then tried to deploy my project to the cloud. This is where I'm totally stuck. I always receive 500 Server Errors coupled with google.appengine.runtime.DeadlineExceededErrors. Every now and then I get the following error message in my logs, which I think is the root of the problem:

        <class 'django.core.exceptions.ImproperlyConfigured'>: ImportError projectyalanda.pricecompare: No module named projectyalanda.pricecompare

    Obviously something is wrong in the way I reference my Django app. Why this is only an issue in the cloud is a mystery to me. The interesting part of the settings.py file is set up as follows:

        INSTALLED_APPS = (
            'djangotoolbox',
            # 'django.contrib.auth',
            'django.contrib.contenttypes',
            'django.contrib.sessions',
            'projectyalanda.pricecompare',
        )

    I absolutely can't figure out why django/appengine wouldn't be able to find the module, especially since everything works perfectly locally. So where else can I look? The local folder structure is of course also correct, as automatically done by Django, so maybe something is messed up during deployment? How would I be able to find out? Please help me ;-) Thanks
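
    One thing worth trying (an assumption, since the project layout isn't shown in the post): on App Engine the deployed project directory is itself the import root, so the app may need to be referenced without the project prefix. A sketch of the changed setting:

        # settings.py - a sketch; assumes pricecompare sits at the project root
        # once deployed, so the 'projectyalanda.' prefix does not resolve there.
        INSTALLED_APPS = (
            'djangotoolbox',
            'django.contrib.contenttypes',
            'django.contrib.sessions',
            'pricecompare',   # instead of 'projectyalanda.pricecompare'
        )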

    Read the article

  • Cassandra hot keyspace structure change

    - by Pierre
    Hello. I'm currently running a 12-node Cassandra cluster storing 4 TB of data, with a replication factor set to 3. For the needs of an application update, we need to change the configuration of our keyspace, and we'd like to avoid any downtime if possible. I read on a mailing list that the best way to do it is to:

        1. Kill the Cassandra process on one server of the cluster
        2. Start it again, wait for the commit log to be written to disk, and kill it again
        3. Make the modifications in the storage.xml file
        4. Rename or delete the files in the data directories according to the changes we made
        5. Start Cassandra
        6. Go to 1 with the next server on the list

    My questions would be: Did I understand the process well? Is there any risk of data corruption? During the process, there will be servers with different versions of the storage.xml file in the same cluster, same keyspace. Is that a problem? Same question as above if we not only add, rename and remove ColumnFamilies, but also change the CompareWith parameter / transform an existing column family into a super one. Or do we need to change the name? Thank you for your answers. It's the first time I'll do this, and I'm a little bit scared.
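
    Side note on steps 1-2: if the Cassandra version in use ships it (an assumption to verify against its docs), nodetool drain flushes the memtables and commit log in one step, replacing the start/wait/kill dance:

        # flush memtables + commit log on the node, then stop it for the edit
        nodetool --host localhost drain
        # edit storage.xml, adjust the data files, restart, move to the next node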

    Read the article

  • Compiled Haskell libraries with FFI imports are invalid when imported into GHCI

    - by John Millikin
    I am using GHC 6.12.1, on Ubuntu 10.04. When I try to use the FFI syntax for static storage, only modules running in interpreted mode (i.e. GHCi) work properly. Compiled modules have invalid pointers, and do not work. I'd like to know whether anybody can reproduce the problem, whether this is an error in my code or in GHC, and (if the latter) whether it's a known issue. I'm using sys_siglist because it's present in a standard library on my system, but I don't believe the actual storage used matters (I discovered this while writing a binding to libidn). If it helps, sys_siglist is defined in <signal.h> as:

        extern __const char *__const sys_siglist[_NSIG];

    I thought this type might be the problem, so I also tried wrapping it in a plain C procedure:

        #include <stdio.h>

        const char **test_ffi_import()
        {
            printf("C think sys_siglist = %X\n", sys_siglist);
            return sys_siglist;
        }

    However, importing that doesn't change the result, and the printf() call prints the same pointer value as show siglist_a. My suspicion is that it's something to do with static and dynamic library loading. Update: somebody in #haskell suggested this might be 64-bit specific; if anybody tries to reproduce it, can you mention your architecture and whether it worked in a comment? Code as follows:

        -- A.hs
        {-# LANGUAGE ForeignFunctionInterface #-}
        module A where

        import Foreign
        import Foreign.C

        foreign import ccall "&sys_siglist" siglist_a :: Ptr CString

        -- B.hs
        {-# LANGUAGE ForeignFunctionInterface #-}
        module B where

        import Foreign
        import Foreign.C

        foreign import ccall "&sys_siglist" siglist_b :: Ptr CString

        -- Main.hs
        {-# LANGUAGE ForeignFunctionInterface #-}
        module Main where

        import Foreign
        import Foreign.C
        import A
        import B

        foreign import ccall "&sys_siglist" siglist_main :: Ptr CString

        main = do
            putStrLn $ "siglist_a = " ++ show siglist_a
            putStrLn $ "siglist_b = " ++ show siglist_b
            putStrLn $ "siglist_main = " ++ show siglist_main
            peekSiglist "a   " siglist_a
            peekSiglist "b   " siglist_b
            peekSiglist "main" siglist_main

        peekSiglist name siglist = do
            ptr <- peekElemOff siglist 2
            str <- maybePeek peekCString ptr
            putStrLn $ "siglist_" ++ name ++ "[2] = " ++ show str

    I would expect something like this output, with all pointer values identical and valid:

        $ runhaskell Main.hs
        siglist_a = 0x00007f53a948fe00
        siglist_b = 0x00007f53a948fe00
        siglist_main = 0x00007f53a948fe00
        siglist_a   [2] = Just "Interrupt"
        siglist_b   [2] = Just "Interrupt"
        siglist_main[2] = Just "Interrupt"

    However, if I compile A.hs (with ghc -c A.hs), then the output changes to:

        $ runhaskell Main.hs
        siglist_a = 0x0000000040378918
        siglist_b = 0x00007fe7c029ce00
        siglist_main = 0x00007fe7c029ce00
        siglist_a   [2] = Nothing
        siglist_b   [2] = Just "Interrupt"
        siglist_main[2] = Just "Interrupt"

    Read the article

  • What is the optimal hardware configuration for a heavy-load LAMP application

    - by Piotr Kochanski
    I need to run a Linux-Apache-PHP-MySQL application (the Moodle e-learning platform) for a large number of concurrent users - I am aiming at 5000 users. By concurrent I mean that 5000 people should be able to work with the application at the same time. "Work" means not only database reads but writes as well. The application is not very typical, since it is doing a lot of inserts/updates on the database, so caching techniques are not helping too much. We are using the InnoDB storage engine. In addition, the application is not written with performance in mind. For instance, one Apache thread usually occupies about 30-50 MB of RAM. I would be grateful for information on what hardware is needed to build a scalable configuration that is able to handle this kind of load. We are using right now two HP DL380s with two 4-core processors each, which are able to handle a much lower load (typically 300-500 concurrent users). Is it reasonable to invest in this kind of boxes and build a cluster using them, or is it better to go with some more high-end hardware? I am particularly curious:

        • how many and how powerful servers are needed (number of processors/cores, size of RAM)
        • what network equipment should be used (what kind of switches, network cards)
        • any other hardware, like particular disc storage solutions, etc., that is needed

    Another thing is how to put everything together, that is, what is the most optimal architecture. Clustering with MySQL is rather hard (people are complaining about MySQL Cluster, even here on Stackoverflow).

    Read the article

  • Launch command on remote Windows machine, given admin credentials

    - by Bilal Aslam
    I have a Windows Server 2008 instance on Amazon EC2 (Amazon's cloud compute platform, which provides VMs in the cloud). It has an external IP, and I have an admin account on the box. I would like to 'bootstrap' this instance remotely, i.e. I want to run commands to download, install and configure apps on it, all without having to log on even once. I have figured out how to do this to a remote, domain-joined computer using WMI. I can even use psexec to get what I want, as long as the remote computer is part of the domain. However, I have NOT been able to do this for a remote computer on EC2. Here are some specific restrictions:

        1) The remote computer is not part of my domain, hence no Kerberos
        2) The remote computer does not have a cert I trust, or vice versa

    I am sure I am running into some auth/trust restriction. Is there any way I can run a single command on the remote machine, given that I have admin privileges? I'm not tied down to using WMI, but I do need to run a command somehow. Feels like this should be a solved problem.
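
    One avenue worth sketching: WinRM can do Basic authentication with a local admin account, which sidesteps the Kerberos/domain requirement. Treat this as an assumption-laden sketch, not a recipe - the host name is illustrative, enabling unencrypted Basic auth is only sensible for a quick test, and the WinRM listener port for this OS version must be open in the EC2 security group:

        :: On the instance (could be baked into the AMI or run via EC2 user-data):
        winrm quickconfig -q
        winrm set winrm/config/service/auth @{Basic="true"}
        winrm set winrm/config/service @{AllowUnencrypted="true"}

        :: From the admin workstation, run a single remote command:
        winrs -r:http://ec2-xx-xx-xx-xx.compute-1.amazonaws.com -u:Administrator -p:secret ipconfig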

    Read the article

  • paperclip overwrites / resets S3 permissions for non-bucket-owners

    - by adriandz
    I have opened this as an issue on Github (http://github.com/thoughtbot/paperclip/issues/issue/225) but on the chance that I'm just doing this wrong, I thought I'd also ask about it here. If someone can tell me where I'm going wrong, I can close the issue and save the Paperclip guys some trouble.

    Issue: When using S3 for storage, and you wish your bucket to allow access to other users to whom you have granted access, Paperclip appears to overwrite the permissions on the bucket, removing access to these users.

    Process for duplication:

        1. Create a bucket in S3 and set up a Rails app with Paperclip to use this bucket for storage
        2. Add a user (for example, [email protected], the user for the video encoding service Zencoder) to the bucket, and grant this user List and Read/Write permissions
        3. Upload a file
        4. Refresh the permissions. The user you added will be gone. As well, a user "Everyone" with read permissions will have been added.

    The end result is that you cannot, so far as I can tell, retain desired permissions on your bucket when using Paperclip and S3. Can anyone help?
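
    If the Paperclip version in use supports the :s3_permissions option (an assumption - worth checking against the README for this 2010-era release), the ACL Paperclip applies on each upload can at least be made explicit instead of left at its default. A sketch:

        class Video < ActiveRecord::Base
          has_attached_file :source,
            :storage        => :s3,
            :s3_credentials => "#{RAILS_ROOT}/config/s3.yml",
            :bucket         => "my-bucket",       # illustrative name
            :s3_permissions => :public_read       # the ACL applied per upload
        end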

    Read the article

  • Optimizing processing and management of large Java data arrays

    - by mikera
    I'm writing some pretty CPU-intensive, concurrent numerical code that will process large amounts of data stored in Java arrays (e.g. lots of double[100000]s). Some of the algorithms might run millions of times over several days, so getting maximum steady-state performance is a high priority. In essence, each algorithm is a Java object that has a method API something like:

        public double[] runMyAlgorithm(double[] inputData);

    or alternatively a reference could be passed to the array to store the output data:

        public void runMyAlgorithm(double[] inputData, double[] outputData);

    Given this requirement, I'm trying to determine the optimal strategy for allocating / managing array space. Frequently the algorithms will need large amounts of temporary storage space. They will also take large arrays as input and create large arrays as output. Among the options I am considering are:

        1. Always allocate new arrays as local variables whenever they are needed (e.g. new double[100000]). Probably the simplest approach, but it will produce a lot of garbage.
        2. Pre-allocate temporary arrays and store them as final fields in the algorithm object - the big downside would be that this would mean that only one thread could run the algorithm at any one time.
        3. Keep pre-allocated temporary arrays in ThreadLocal storage, so that a thread can use a fixed amount of temporary array space whenever it needs it. ThreadLocal would be required since multiple threads will be running the same algorithm simultaneously.
        4. Pass around lots of arrays as parameters (including the temporary arrays for the algorithm to use). Not good, since it will make the algorithm API extremely ugly if the caller has to be responsible for providing temporary array space....
        5. Allocate extremely large arrays (e.g. double[10000000]) but also provide the algorithm with offsets into the array so that different threads will use a different area of the array independently. Will obviously require some code to manage the offsets and allocation of the array ranges.

    Any thoughts on which approach would be best (and why)?
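
    A minimal sketch of option 3, the ThreadLocal variant (the buffer size and class name are illustrative):

        public class MyAlgorithm {
            // Each thread lazily allocates one scratch buffer and then reuses it,
            // so concurrent callers never share temporary space and steady-state
            // runs create no garbage for temporaries.
            private static final ThreadLocal<double[]> SCRATCH =
                new ThreadLocal<double[]>() {
                    @Override
                    protected double[] initialValue() {
                        return new double[100000];
                    }
                };

            public double[] runMyAlgorithm(double[] inputData) {
                double[] tmp = SCRATCH.get();   // same array on every call in this thread
                double[] output = new double[inputData.length];
                // ... compute into output, using tmp for intermediate results ...
                return output;
            }
        }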

    Read the article

  • JavaScript Exception/Error Handling Not Working

    - by Seán Hayes
    This might be a little hard to follow. I've got a function inside an object:

        f_openFRHandler: function(input) {
            console.debug('f_openFRHandler');
            try {
                //throw 'foo';
                DragDrop.FileChanged(input);
                //foxyface.window.close();
            } catch(e) {
                console.error(e);
                jQuery('#foxyface_open_errors').append('<div>Max local storage limit reached, unable to store new images in your browser. Please remove some images and try again.</div>');
            }
        },

    Inside the try block it calls:

        this.FileChanged = function(input) {
            // FileUploadManager.addFileInput(input);
            console.debug(input);
            var files = input.files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                if (!file.type.match(/image.*/)) continue;
                var reader = new FileReader();
                reader.onload = (function(f, isLast) {
                    return function(e) {
                        if (files.length == 1) {
                            LocalStorageManager.addImage(f.name, e.target.result, false, true);
                            LocalStorageManager.loadCurrentImage();
                            //foxyface.window.close();
                        } else {
                            FileUploadManager.addFileData(f, e.target.result); // add multiple files to list
                            if (isLast) setTimeout(function() { LocalStorageManager.loadCurrentImage() }, 100);
                        }
                    };
                })(file, i == files.length - 1);
                reader.readAsDataURL(file);
            }
            return true;
        };

    LocalStorageManager.addImage calls:

        this.setItem = function(data) {
            localStorage.setItem('ImageStore', $.json_encode(data));
        }

    localStorage.setItem throws an error if too much local storage has been used. I want to catch that error in f_openFRHandler (first code sample), but it's being sent to the error console instead of the catch block. I tried the following code in my Firebug console to make sure I'm not crazy, and it works as expected despite many levels of function nesting:

        try {
            (function() {
                (function() {
                    throw 'foo'
                })()
            })()
        } catch(e) {
            console.debug(e)
        }

    Any ideas?
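
    A hedged reading of the code above (an inference, not something confirmed in the post): reader.onload fires asynchronously, after f_openFRHandler has already returned, so its try/catch is no longer on the stack when localStorage.setItem throws; the Firebug test works because all of its nested calls are synchronous. A sketch of catching the quota error where it is actually raised:

        this.setItem = function(data) {
            try {
                localStorage.setItem('ImageStore', $.json_encode(data));
            } catch (e) {
                // the quota error surfaces here, inside the async callback's stack
                jQuery('#foxyface_open_errors').append('<div>Max local storage limit reached, unable to store new images in your browser. Please remove some images and try again.</div>');
            }
        }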

    Read the article

  • Practical rules for Django MiddleWare ordering?

    - by o_O Tync
    The official documentation is a bit messy: 'before' & 'after' are used for ordering middleware in a tuple, but in some places 'before'/'after' refers to the request/response phases. Also, 'should be first/last' are mixed and it's not clear which one to use as 'first'. I do understand the difference... however it seems too complicated for a newbie in Django. Can you suggest some correct ordering for the builtin middleware classes (assuming we enable all of them) and — most importantly — explain WHY one goes before/after the other ones? Here's the list, with the info from the docs I managed to find:

        UpdateCacheMiddleware
            Before those that modify 'Vary:': SessionMiddleware, GZipMiddleware, LocaleMiddleware
        GZipMiddleware
            Before any MW that may change or use the response body
            After UpdateCacheMiddleware: modifies 'Vary:'
        ConditionalGetMiddleware
            Before CommonMiddleware: uses its 'Etag:' header when USE_ETAGS=True
        SessionMiddleware
            After UpdateCacheMiddleware: modifies 'Vary:'
            Before TransactionMiddleware: we don't need transactions here
        LocaleMiddleware
            One of the topmost, after SessionMiddleware, CacheMiddleware
            After UpdateCacheMiddleware: modifies 'Vary:'
            After SessionMiddleware: uses session data
        CommonMiddleware
            Before any MW that may change the response (it calculates ETags)
            After GZipMiddleware so it won't calculate an E-Tag on gzipped contents
            Close to the top: it redirects when APPEND_SLASH or PREPEND_WWW
        CsrfViewMiddleware
        AuthenticationMiddleware
            After SessionMiddleware: uses session storage
        MessageMiddleware
            After SessionMiddleware: can use session-based storage
        XViewMiddleware
        TransactionMiddleware
            After MWs that use the DB: SessionMiddleware (configurable to use the DB)
            All *CacheMiddleware is not affected (as an exception: uses its own DB cursor)
        FetchFromCacheMiddleware
            After those that modify 'Vary:', since it uses them to pick a value for the cache hash key
            After AuthenticationMiddleware so it's possible to use CACHE_MIDDLEWARE_ANONYMOUS_ONLY
        FlatpageFallbackMiddleware
            Bottom: last resort
            Uses the DB; however, that is not a problem for TransactionMiddleware (yes?)
        RedirectFallbackMiddleware
            Bottom: last resort
            Uses the DB; however, that is not a problem for TransactionMiddleware (yes?)

    (I will add suggestions to this list to collect all of them in one place)
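
    For a concrete starting point, here is one MIDDLEWARE_CLASSES tuple that satisfies the constraints collected above (a sketch against Django 1.x module paths, not an official recommendation; verify each path for the version in use):

        MIDDLEWARE_CLASSES = (
            # request phase runs top-down, response phase runs bottom-up,
            # so UpdateCache at the top stores the fully processed response
            'django.middleware.cache.UpdateCacheMiddleware',
            'django.middleware.gzip.GZipMiddleware',
            'django.middleware.http.ConditionalGetMiddleware',
            'django.middleware.common.CommonMiddleware',
            'django.contrib.sessions.middleware.SessionMiddleware',
            'django.middleware.locale.LocaleMiddleware',
            'django.middleware.csrf.CsrfViewMiddleware',
            'django.contrib.auth.middleware.AuthenticationMiddleware',
            'django.contrib.messages.middleware.MessageMiddleware',
            'django.middleware.transaction.TransactionMiddleware',
            'django.middleware.cache.FetchFromCacheMiddleware',
            'django.contrib.flatpages.middleware.FlatpageFallbackMiddleware',
            'django.contrib.redirects.middleware.RedirectFallbackMiddleware',
        )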

    Read the article

  • Recommend me an architecture for this Facebook application

    - by andybaird
    Firstly, this question is subjective. There is not a right answer for this question, and it really depends on what works for you. I'm hoping to use this thread as a breeding ground for ideas. I hope this is acceptable in this medium. I'm working on building a Facebook app that will be replacing an already popular app that gets ~50k hits a day. The original app is using a very typical LAMP setup with help from some Zend libraries for database layer abstraction. For the most part the app worked well, except that to solve a lot of issues I ended up fragmenting tables to speed things up. As a result, I couldn't do a lot of things with the app that I wanted to (namely any processing using aggregate data that needed to be returned quickly). So I'm starting to design plans for the next version of this application, and I have a whole bunch of new and cool features that I know would choke my current setup. I'm looking for technological recommendations for data storage methods that scale well. The database does not necessarily need to be relational; simple key/value storage would suffice (although at present time I know little to nothing about KV stores). What's your recommendation? How would you tackle this? I'd like to take a completely free approach to this -- although I am most familiar and comfortable using PHP, I want to leave all technical options open.

    Read the article

  • Decoding a jpg in the background in WP7

    - by Shahar Prish
    I have a bunch of apps in the marketplace, and so far I have been able, by changing my functionality or going the extra mile, to work around the issue of being unable to decode a jpg in the background into a WriteableBitmap. I am now finding a situation where I can't think of good ways to "work around" the issue. I need to decode the image I get from the MediaLibrary, reduce its resolution to something manageable (800x800), rotate it potentially, and save it to local storage. By far, the thing that takes the most time (80%) is decoding the bitmap to 800x800 - it takes between 700 ms and 1000 ms. A user may add 7-10 images when starting, which translates to ~10 seconds of waiting for the images to be added. I tried doing this lazily, but at some point you need to pay the piper, and the app essentially stutters for ~1000 ms at that point, so the experience is not great. Is there an alternative I am missing for loading the image in the background somehow? (Note on why CreateOptions.BackgroundCreation is no good for me: it loads the image into a BitmapImage, which is great if you want to just use it, but not so great for what I need to do, which is create a copy in isolated storage.)

    Read the article

  • Flash/uploadify filereference.upload not working, unknown reason

    - by BotskoNet
    I've been using the uploadify jQuery plugin for a long time - this tool has been working on several different servers for quite a while. Recently, on Rackspace Cloud Sites accounts, the uploads stopped working. There's been no change to our code, and our code works perfectly on other servers. I'm having trouble finding the issue; here's what happens:

        • The filereference.upload is called, and we provide all of the proper info - url, filename, etc.
        • The URL is http, and is correct. When visited through a web browser it works.
        • All debugging and tracing show that our code is running correctly.
        • The DataEvent.UPLOAD_COMPLETE_DATA event never triggers.
        • The ProgressEvent.PROGRESS events never trigger.
        • Server access logs and application traffic logs never show any requests coming to the upload script.

    Based on all of this info, it seems that:

        • Our script works fine; file.upload is called properly.
        • Event.OPEN is fired.
        • Traffic never reaches the server.
        • No io/http errors are called.
        • file.upload does return false, but it also returns false on servers where the upload works and the complete events are called.

    We're not using any firewalls or proxies on our end, and these tools work perfectly on other servers. But the sites that are on the Rackspace Cloud Sites account all share this problem. However, a site that was in development a few months ago worked just fine on the same Rackspace Cloud Sites account. Trying the Flash build from that version still won't work now. How can I go about finding out what's breaking between the file.upload and reaching the server?

    Read the article

  • Is hardware accelerated CSS3 in Safari 4 & 5 broken, or my CSS and JS?

    - by Dan Forys
    Hi all, I've created a somewhat silly site that shows you the expected weather forecast for any city in the world. On WebKit-based browsers, when the weather is sunny, a sun with CSS3-animated rotating sunbeams appears. This works fine on Chrome. An example (sunny, at the moment) page is: http://willitraintoday.co.uk/iceland/reykjavik/

    However, when viewed in Safari 4 or 5 on Mac Snow Leopard, when the sun appears the sky background appears over it. Weirder still, as the cloud containing the advert moves across the sky, it squashes the main text. When the cloud reaches the left edge, the text appears wider than normal and starts squashing down again. I've tried:

        • Disabling the CSS3 animation; it works fine in Safari
        • Juggling the z-index of various elements; to no avail

    Is there something up with my Javascript or CSS, or is the hardware-accelerated Snow Leopard Safari broken in this case? It seems not to happen in Safari 4 on Leopard, but I don't have Leopard any more to test myself. Grateful for any opinions!

    Read the article

  • C++ Problems with #import of .NET out-of-proc server.

    - by jm
    In a C++ program, I am trying to #import the TLB of a .NET out-of-proc server. I get errors like:

        z:\server.tlh(111) : error C2146: syntax error : missing ';' before identifier 'GetType'
        z:\server.tlh(111) : error C2501: 'TypePtr' : missing storage-class or type specifiers
        z:\server.tli(74) : error C2143: syntax error : missing ';' before 'tag::id'
        z:\server.tli(74) : error C2433: 'TypePtr' : 'inline' not permitted on data declarations
        z:\server.tli(74) : error C2501: '_TypePtr' : missing storage-class or type specifiers
        z:\server.tli(74) : fatal error C1004: unexpected end of file found

    The TLH looks like:

        ...
        _bstr_t GetToString ( );
        VARIANT_BOOL Equals ( const _variant_t & obj );
        long GetHashCode ( );
        _TypePtr GetType ( );
        long Open ( );
        ...

    I am not really interested in having the base .NET object methods like GetType(), Equals(), etc. But GetType() seems to be causing problems. Some Google research indicates I could #import MSCORLIB.TLB (or put it in the path), but I can't get that to compile either. Any tips?
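
    One often-quoted mscorlib incantation for this situation, as a sketch (the ReportEvent rename dodges a clash with a <windows.h> macro; the GetType rename is an assumption for sidestepping the offending method, and whether this compiles cleanly depends on the project):

        // Import the CLR base types first so _TypePtr has a definition,
        // then import the server's TLB.
        #import <mscorlib.tlb> rename("ReportEvent", "CorReportEvent")
        #import "server.tlb" rename("GetType", "GetDotNetType")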

    Read the article

  • Windows Azure: Creating subdirectories inside the blob

    - by veda
    I wanted to create some subdirectories inside my blob. But it is not working out well Here is my code protected void ButUpload_click(object sender, EventArgs e) { // store upladed file as a blob storage if (uplFileUpload.HasFile) { name = uplFileUpload.FileName; // get refernce to the cloud blob container CloudBlobContainer blobContainer = cloudBlobClient.GetContainerReference("documents"); if (textbox.Text != "") { name = textbox.Text + "/" + name; } // set the name for the uploading files string UploadDocName = name; // get the blob reference and set the metadata properties CloudBlockBlob blob = blobContainer.GetBlockBlobReference(UploadDocName); blob.Metadata["FILETYPE"] = "text"; blob.Properties.ContentType = uplFileUpload.PostedFile.ContentType; // upload the blob to the storage blob.UploadFromStream(uplFileUpload.FileContent); } } What I did is that, If I have to create a sub directory, I will enter the name of the sub directory in the textbox. for example, if I need to create a file named "test.txt" inside the sub directory "files" Then, my textbox.text = files and uplFileUpload.FileName = test.txt Now I will concatenate them and upload to the blob.. But it is not working well.. I am getting just https://test.core.windows.net/documents/files/ I am not getting the entire thing I was expecting https://test.core.windows.net/documents/files/test.txt What am I doing wrong... How to create sub directories inside the blob.

    Read the article

  • array of structures, or structure of arrays?

    - by Jason S
    Hmmm. I have a table which is an array of structures I need to store in Java. The naive don't-worry-about-memory approach says do this:

        public class Record {
            final private int field1;
            final private int field2;
            final private long field3;
            /* constructor & accessors here */
        }

        List<Record> records = new ArrayList<Record>();

    If I end up using a large number (10^6) of records, where individual records are accessed occasionally, one at a time, how would I figure out how the preceding approach (an ArrayList) would compare with an optimized approach for storage costs:

        public class OptimizedRecordStore {
            final private int[] field1;
            final private int[] field2;
            final private long[] field3;

            Record getRecord(int i) {
                return new Record(field1[i], field2[i], field3[i]);
            }

            /* constructor and other accessors & methods */
        }

    Edit: assume the # of records is something that is changed infrequently or never. I'm probably not going to use the OptimizedRecordStore approach, but I want to understand the storage cost issue so I can make that decision with confidence. Obviously if I add/change the # of records in the OptimizedRecordStore approach above, I either have to replace the whole object with a new one, or remove the "final" keyword. kd304 brings up a good point that was in the back of my mind. In other situations similar to this, I need column access on the records, e.g. if field1 and field2 are "time" and "position", and it's important for me to get those values as an array for use with MATLAB, so I can graph/analyze them efficiently.
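
    A rough worked estimate for 10^6 records on a typical 64-bit JVM (the per-object overheads below are typical defaults, not measurements, so treat this as an assumption-laden sketch):

        Record approach:      16 B object header + 4 + 4 + 8 B of fields,
                              padded to 32 B per Record, plus an 8 B reference
                              per ArrayList slot  ->  roughly 40 MB
        OptimizedRecordStore: two int[1000000] (~4 MB each) + one long[1000000]
                              (~8 MB)  ->  roughly 16 MB, in three contiguous blocks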

    Read the article

  • Kohana session data does not persist across pages in Chrome and IE browsers

    - by user1062637
    Kohana session data does not persist across pages opened in Chrome and IE browsers; the same works fine in a Firefox browser. The Kohana version used is 2.3. The session config file holds:

        $config['driver'] = 'native';

        /**
         * Session storage parameter, used by drivers.
         */
        $config['storage'] = '';

        /**
         * Session name.
         * It must contain only alphanumeric characters and underscores. At least one letter must be present.
         */
        $config['name'] = 'NITWSESSID';

        /**
         * Session parameters to validate: user_agent, ip_address, expiration.
         */
        $config['validate'] = array();

        /**
         * Enable or disable session encryption.
         * Note: this has no effect on the native session driver.
         * Note: the cookie driver always encrypts session data. Set to TRUE for stronger encryption.
         */
        $config['encryption'] = FALSE;

        /**
         * Session lifetime. Number of seconds that each session will last.
         * A value of 0 will keep the session active until the browser is closed (with a limit of 24h).
         */
        $config['expiration'] = 2700;

        /**
         * Number of page loads before the session id is regenerated.
         * A value of 0 will disable automatic session id regeneration.
         */
        $config['regenerate'] = 0;

        /**
         * Percentage probability that the gc (garbage collection) routine is started.
         */
        $config['gc_probability'] = 2;

    Help needed urgently

    Read the article

  • SqlServer slow on production environment

    - by Lieven Cardoen
    I have a weird problem in a production environment at a customer. I can't give any details on the infrastructure, except that SQL Server runs on a virtual server and the data, log and filestream files are on another storage server (data and filestream together, and the log on a separate server). Now, there's this query that, when we run it, gives these durations (first we clear the cache):

        300 ms, 20 ms, 15 ms, 17 ms, ...

    The first time it takes longer, but from then on it is cached. At the customer, on a SQL Server that is more powerful, these are the durations (I didn't have the rights to clear the cache. Will try this tomorrow):

        2500 ms, 2600 ms, 2400 ms, ...

    The query can be improved, that's right, but that's not the question here. How would you tackle this? I don't know where to go from here. The servers at this customer are really more powerful, but they do have virtual servers (we don't). What could be the cause?

        • Not enough memory?
        • Fragmentation?
        • Physical storage?

    I know it can be a lot of things, but maybe some of you have got some info for me on how to go on with this issue...
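
    For tomorrow's test, these are the usual statements for clearing the caches between runs (they require sysadmin-level rights and will hurt everyone else on a shared production box, so coordinate before running them):

        CHECKPOINT;               -- flush dirty pages so clean buffers can be dropped
        DBCC DROPCLEANBUFFERS;    -- empty the data cache, forcing physical reads
        DBCC FREEPROCCACHE;       -- throw away cached query plans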

    Read the article

  • Multidimensional array (parent and children)

    - by Juan
    I have a category system in a MySQL database with parents and children. The database only stores the id of its immediate parent (or 0 if at root). Since the system allows multiple subcategories, there are cases of multiple children. For example:

        [98] Storage
            [1] External
                [3] Pendrives
                [4] Portable hdds
            [2] Internal
                [5] Sata hdd
                [6] IDE hdd
        [...]
        [99] Clothing

    The database would be:

        id    parent_id    name
        1     98           External
        2     98           Internal
        3     1            Pendrives
        4     1            Portable
        5     2            Sata
        6     2            IDE
        98    0            Storage
        99    0            Clothing

    I also have a products table with a category id, and I need to get a list of all the products in the first level of categories. For example:

        Product    Category
        A          3
        B          4
        C          5
        D          6
        E          74

    Should return:

        98: A, B, C, D
        99: X, Y, Z...

    I'm stuck and I can't think of the logic to retrieve it in that way. I started by getting the IDs of all the categories that aren't in the first level via:

        while ($row = mysql_fetch_assoc($result)) {
            if ($row['parent_id'] != 0) {
                $level1[$i]['name'] = utf8_encode($row['categories_name']);
                $level1[$i]['id'] = $row['categories_id'];
            }
            $i++;
        }

    but I'm having a burnout and can't think of a way that would nest them. I thought of some kind of while but it's infinite :P Any ideas please?
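
    One way to get there, as a sketch (the table/column names follow the layout above, and the $products array shape is assumed for illustration): load the id => parent_id map once, then walk each product's category up to its root.

        // Build id => parent_id map once.
        $parents = array();
        $result = mysql_query('SELECT id, parent_id FROM categories');
        while ($row = mysql_fetch_assoc($result)) {
            $parents[$row['id']] = $row['parent_id'];
        }

        // Resolve any category to its top-level ancestor by walking up
        // until parent_id is 0; terminates because every chain ends at 0.
        function rootOf($id, $parents) {
            while (isset($parents[$id]) && $parents[$id] != 0) {
                $id = $parents[$id];
            }
            return $id;
        }

        // Group products under their first-level category,
        // e.g. $products = array('A' => 3, 'B' => 4, ...);
        $grouped = array();
        foreach ($products as $name => $categoryId) {
            $grouped[rootOf($categoryId, $parents)][] = $name;
        }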

    Read the article
