Search Results

Search found 22656 results on 907 pages for 'free amazon plugin'.

Page 182 of 907

  • Understanding CLR 2.0 Memory Model

    - by Eloff
    Joe Duffy gives 6 rules that describe the CLR 2.0+ memory model (its actual implementation, not any ECMA standard). I'm writing down my attempt at figuring this out, mostly as a way of rubber ducking, but if I make a mistake in my logic, at least someone here will be able to catch it before it causes me grief.

    Rule 1: Data dependence among loads and stores is never violated.
    Rule 2: All stores have release semantics, i.e. no load or store may move after one.
    Rule 3: All volatile loads are acquire, i.e. no load or store may move before one.
    Rule 4: No loads and stores may ever cross a full-barrier (e.g. Thread.MemoryBarrier, lock acquire, Interlocked.Exchange, Interlocked.CompareExchange, etc.).
    Rule 5: Loads and stores to the heap may never be introduced.
    Rule 6: Loads and stores may only be deleted when coalescing adjacent loads and stores from/to the same location.

    I'm attempting to understand these rules.

        x = y
        y = 0 // Cannot move before the previous line according to Rule 1.

        x = y
        z = 0

        // equates to this sequence of loads and stores before possible re-ordering
        load y
        store x
        load 0
        store z

    Looking at this, it appears that the load 0 can be moved up to before load y, but the stores may not be re-ordered at all. Therefore, if a thread sees z == 0, then it will also see x == y. If y were volatile, then load 0 could not move before load y; otherwise it may. Volatile stores don't seem to have any special properties; no stores can be re-ordered with respect to each other (which is a very strong guarantee!). Full barriers are like a line in the sand which loads and stores cannot be moved over.

    No idea what rule 5 means.

    I guess rule 6 means that if you do:

        x = y
        x = z

    then it is possible for the CLR to delete both the load of y and the first store to x.

        x = y
        z = y

        // equates to this sequence of loads and stores before possible re-ordering
        load y
        store x
        load y
        store z

        // could be re-ordered like this
        load y
        load y
        store x
        store z

        // rule 6 applied means this is possible?
        load y
        store x // but don't pop y from the stack (or first duplicate the item on top of the stack)
        store z

    What if y were volatile? I don't see anything in the rules that prohibits the above optimization from being carried out. This does not violate double-checked locking, because the lock() between the two identical conditions prevents the loads from being moved into adjacent positions, and according to rule 6, that's the only time they can be eliminated. So I think I understand all but rule 5 here. Anyone want to enlighten me (or correct me, or add something to any of the above)?

    Read the article

  • Using a "local" S3 emulation layer as a replacement for HDFS?

    - by user183394
    I have been testing out the most recent Cloudera CDH4 hadoop-conf-pseudo (i.e. MRv2 or YARN) on a notebook, which has 4 cores, 8GB RAM, an Intel X25MG2 SSD, and runs an S3 emulation layer my colleagues and I wrote in C++. The OS is Ubuntu 12.04LTS 64bit. So far so good. Looking at "Setting up hadoop to use S3 as a replacement for HDFS", I would like to do the same on my notebook. Nevertheless, I can't find where I can change jets3t.properties to set the end point to localhost. I downloaded hadoop-2.0.1-alpha.tar.gz and searched the source without finding a clue. There is a similar question on SO, "Using s3 as fs.default.name or HDFS?", but I want to use our own lightweight and fast S3 emulation layer, instead of AWS S3, for our experiments. I would appreciate a hint as to how I can change the end point to a different hostname. Regards, --Zack

    Read the article

  • What does the q in a q-grammar stand for?

    - by Aru
    So I've been reading sites and the classic books on compilers. Reading about s-grammars and q-grammars, I wondered what the s and q stand for. I think the s stands for "simple" grammar, while the q... well, I have no idea. What does the q in a q-grammar stand for?

    Read the article

  • Serving files over HTTPS dynamically based on request.ssl? with Attachment_fu

    - by Marston A.
    I see there is a :use_ssl option in attachment_fu which checks the amazon_s3.yml file in order to serve files via https://. In s3_backend.rb you have this method:

        def self.protocol
          @protocol ||= s3_config[:use_ssl] ? 'https://' : 'http://'
        end

    But this then makes it serve ALL S3 attachments with SSL. I'd like to make it dynamic depending on whether the current request was made with https://, i.e.:

        if request.ssl?
          @protocol = "https://"
        else
          @protocol = "http://"
        end

    How can I make it work this way? I've tried modifying the method and then get the following error:

        NameError: undefined local variable or method `request' for Technoweenie::AttachmentFu::Backends::S3Backend:Module

    Read the article

  • Theory of computation - Using the pumping lemma for CFLs

    - by Tony
    I'm reviewing my notes for my course on theory of computation and I'm having trouble understanding how to complete a certain proof. Here is the question: A = {0^n 1^m 0^n | n >= 1, m >= 1}. Prove that A is not regular. It's pretty obvious that the pumping lemma has to be used for this. So, we have |vy| >= 1, |vxy| <= p (p being the pumping length, >= 1), and uv^i x y^i z is in A for all i >= 0. Trying to think of the correct string to choose seems a bit iffy for this. I was thinking 0^p 1^q 0^p, but I don't know if I can arbitrarily pick a q, and since there is no bound on u, this could make things unruly. So, how would one go about this?
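    For what it's worth, here is a sketch of how that string choice usually plays out, assuming the ordinary pumping lemma for regular languages is the intended tool (the goal stated above is non-regularity, even though the conditions quoted are the CFL version):

        Take $s = 0^p 1 0^p \in A$. In any split $s = xyz$ with $|xy| \le p$ and $|y| \ge 1$,
        $y = 0^k$ for some $1 \le k \le p$ inside the first block of zeros, so
        \[ x y^2 z = 0^{p+k}\,1\,0^p \notin A, \]
        contradicting the lemma, hence $A$ is not regular.

    (Only a sketch; adapt it to whichever form of the lemma the course actually requires.)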

    Read the article

  • Java: how to tell if a line in a text file was supposed to be blank?

    - by defn
    I'm working on a project in which I have to read in a grammar file (breaking it up into my data structure), with the goal of being able to generate a random "DearJohnLetter". My problem is that when reading in the .txt file, I don't know how to find out whether a line was supposed to be completely blank or not, which is detrimental to the program. Here is an example of part of the file. How do I tell if the next line was supposed to be a blank line? (btw I'm just using a BufferedReader) Thanks!

        <start>
        I have to break up with you because <reason> . But let's still <disclaimer> .

        <reason>
        <dubious-excuse>
        <dubious-excuse> , and also because <reason>

        <dubious-excuse>
        my <person> doesn't like you
        I'm in love with <another>
        I haven't told you this before but <harsh>
        I didn't have the heart to tell you this when we were going out, but <harsh>
        you never <romantic-with-me> with me any more
        you don't <romantic> any more
        my <someone> said you were bad news

    Read the article

  • Wikipedia article names

    - by Algorist
    Hi, I am doing a project for which I need to know all the Wikipedia article names (I don't need the content). Is there a place where I can download this data? Thank you, Bala

    Read the article

  • Recognizing terminals in a CFG production that were not previously defined as tokens

    - by kmels
    I'm making a generator of LL(1) parsers; my input is a CoCo/R language specification. I've already got a Scanner generator for that input. Suppose I've got the following specification:

        COMPILER 1.
        CHARACTERS
        digit="0123456789".
        TOKENS
        number = digit{digit}.
        decnumber = digit{digit}"."digit{digit}.
        PRODUCTIONS
        Expression = Term{"+"Term|"-"Term}.
        Term = Factor{"*"Factor|"/"Factor}.
        Factor = ["-"](Number|"("Expression")").
        Number = (number|decnumber).
        END 1.

    So, if the parser generated by this grammar receives the word "1+1", it'd be accepted, i.e. a parse tree would be found. My question is: the character "+" was never defined in a token, but it appears in the non-terminal "Expression". How should my generated Scanner recognize it? It would not recognize it as a token. Is this a valid input then? Should I add this terminal to TOKENS and then consider an error routine for the Scanner to skip it? How do usual language specifications handle this?

    Read the article

  • nginx error: (99: Cannot assign requested address)

    - by k-g-f
    I am running Ubuntu Hardy 8.04 and nginx 0.7.65, and when I try starting my nginx server:

        $ sudo /etc/init.d/nginx start

    I get the following error:

        Starting nginx: [emerg]: bind() to IP failed (99: Cannot assign requested address)

    where "IP" is a placeholder for my IP address. Does anybody know why that error might be happening? This is running on EC2. My nginx.conf file looks like this:

        user www-data www-data;
        worker_processes 4;

        events {
          worker_connections 1024;
        }

        http {
          include mime.types;
          default_type application/octet-stream;
          access_log /usr/local/nginx/logs/access.log;
          sendfile on;
          tcp_nopush on;
          tcp_nodelay on;
          keepalive_timeout 3;
          gzip on;
          gzip_comp_level 2;
          gzip_proxied any;
          gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
          include /usr/local/nginx/sites-enabled/*;
        }

    and my /usr/local/nginx/sites-enabled/example.com looks like:

        server {
          listen IP:80;
          server_name example.com;
          rewrite ^/(.*) https://example.com/$1 permanent;
        }

        server {
          listen IP:443 default ssl;
          ssl on;
          ssl_certificate /etc/ssl/certs/myssl.crt;
          ssl_certificate_key /etc/ssl/private/myssl.key;
          ssl_protocols SSLv3 TLSv1;
          ssl_ciphers ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:-LOW:-SSLv2:-EXP;
          server_name example.com;
          access_log /home/example/example.com/log/access.log;
          error_log /home/example/example.com/log/error.log;
        }

    Read the article

  • How to scale MongoDB

    - by terence410
    I know that MongoDB can scale vertically, but what if I run out of disk? I am currently using EC2 with EBS. As you know, I have to assign an EBS volume with a fixed size. What if MongoDB grows bigger than the EBS size? Do I have to create a larger EBS volume and copy & paste the files? Or should we start more MongoDB instances, each connected to a different EBS disk? In that case, I could connect to a different instance for different databases.

    Read the article

  • Freeing memory twice

    - by benjamin button
    Hi, AFAIK, freeing a NULL pointer results in nothing, i.e. nothing is done by the compiler and no functionality is performed. Still, I do see statements where people say that one of the scenarios where memory corruption can occur is "freeing memory twice". Is this still true?

    Read the article

  • Code throws std::bad_alloc: not enough memory, or can it be a bug?

    - by Andreas
    I am parsing using a pretty large grammar (1.1 GB; it's data-oriented parsing). The parser I use (bitpar) is said to be optimized for highly ambiguous grammars. I'm getting this error:

        terminate called after throwing an instance of 'std::bad_alloc'
          what():  St9bad_alloc
        dotest.sh: line 11: 16686 Aborted  bitpar -p -b 1 -s top -u unknownwordsm -w pos.dfsa /tmp/gsyntax.pcfg /tmp/gsyntax.lex arbobanko.test arbobanko.results

    Is there hope? Does it mean that it has run out of memory? It uses about 15 GB before it crashes. The machine I'm using has 32 GB of RAM, plus swap as well. It crashes before outputting a single parse tree. The parser is an efficient CYK chart parser using bit vector representations; I presume it is already near the limit of memory efficiency. If it really requires too much memory I could sample from the grammar rules, but this will decrease parse accuracy of course.

    Read the article

  • How to load secure S3 images into Flex with temporary URLs

    - by Yarin
    I have some secure images on S3 that I need to load into Flex. I was expecting to be able to do this using signed temporary URLs but can't get it working. I know the URLs I'm generating are correct, because they load fine in my browser's address bar. Moreover, Flex has no problem loading my images with a non-signed URL when they are public, but as soon as I try signing the URLs all the images fail, whether public or not. I've tried image.source = signedURL, image.load(signedURL), etc. If I try loading the file with URLLoader/URLStream, it looks like I'm getting the data OK, but I'm not sure how to translate those results into an Image control. Is this just an issue with the Image control not being able to recognize signed URLs? Do I have to load the image from a byte array? What would that look like?
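    For reference, here is a minimal sketch (in Python with boto, not part of the original question) of how a signed, temporary S3 URL of the kind described above is typically generated; the credentials, bucket, and key names are placeholders:

        # Generate a time-limited, signed GET URL for a private S3 object.
        from boto.s3.connection import S3Connection

        conn = S3Connection('ACCESS_KEY_ID', 'SECRET_ACCESS_KEY')
        signed_url = conn.generate_url(
            expires_in=300,              # seconds the URL stays valid
            method='GET',
            bucket='my-private-bucket',  # placeholder bucket
            key='images/photo.jpg',      # placeholder key
        )
        print(signed_url)  # this is the URL the Flex client would be handed

    Whether the Flex Image control accepts such a URL directly is exactly the open question above; the sketch only shows what the signing step looks like on the server side.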

    Read the article

  • How to put 1000 lightweight server applications in the cloud

    - by Dan Bird
    The company I work for sells a commercial desktop/server app that runs on any non-dedicated Windows PC or server and uses Tomcat for all interactions with the application. Customers are asking that we host their instance of the application so they don't have to run it locally on their own servers. The app is lightweight, and an average server, in theory, could handle 25-50 instances before users would notice a slowdown. However, only one instance can run per Windows instance (because the application writes to a common registry branch), so we'd need something like VMware to create 25-50 Windows instances. We know we eventually need to reprogram it to make it truly cloud-worthy, but what would you recommend for a server farm or similar for this? We don't have the setup to purchase our own servers, so we must use a third party. We have budgeted $500 - $1000 per year per customer for this service. Thanks in advance for your suggestions, experiences and guidance.

    Read the article

  • How do you prevent Git from printing 'remote:' on each line of the output of a post-receive hook?

    - by Matt Hodan
    I recently configured an EC2 instance with a Git deployment workflow that resembles Heroku, but I can't seem to figure out how Heroku prevents the Git post-receive hook from outputting 'remote:' on each line. Consider the following two examples (one from my EC2 project and one from a Heroku project):

    My EC2 project:

        git push prod master
        Counting objects: 9, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (5/5), done.
        Writing objects: 100% (5/5), 456 bytes, done.
        Total 5 (delta 3), reused 0 (delta 0)
        remote:
        remote: Receiving push
        remote: Deploying updated files (by resetting HEAD)
        remote: HEAD is now at bf17da8 test commit
        remote: Running bundler to install gem dependencies
        remote: Fetching source index for http://rubygems.org/
        remote: Installing rake (0.8.7)
        remote: Installing abstract (1.0.0)
        ...
        remote: Installing railties (3.0.0)
        remote: Installing rails (3.0.0)
        remote: Your bundle is complete! It was installed into ./.bundle/gems
        remote: Launching (by restarting Passenger)... done
        remote:
        To ssh://[email protected]/~/apps/app_name
           e8bd06f..bf17da8  master -> master

    Heroku:

        $> git push heroku master
        Counting objects: 179, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (89/89), done.
        Writing objects: 100% (105/105), 42.70 KiB, done.
        Total 105 (delta 53), reused 0 (delta 0)
        -----> Heroku receiving push
        -----> Rails app detected
        -----> Gemfile detected, running Bundler version 1.0.3
               Unresolved dependencies detected; Installing...
               Using --without development:test
               Fetching source index for http://rubygems.org/
               Installing rake (0.8.7)
               Installing abstract (1.0.0)
               ...
               Installing railties (3.0.0)
               Installing rails (3.0.0)
               Your bundle is complete! It was installed into ./.bundle/gems
               Compiled slug size is 4.8MB
        -----> Launching... done
               http://your_app_name.heroku.com deployed to Heroku
        To [email protected]:your_app_name.git
           3bf6e8d..642f01a  master -> master

    Read the article

  • Where are the snapshot files?

    - by KiD0M4N
    Hey guys, the documentation states that the snapshots are persisted to S3... I wanted to leverage that and create an instance of my server in a different region (my original server is in APAC, and I want to create an instance in US-East). I have logged into my account via CloudBerry S3, but cannot see any files in the S3 account (sorry, I am a beginner in AWS). Also, switching over to US-East removes the snapshot from view... so how can I create another instance using the same EBS volume in a different region? And why can't I see the snapshot files in my S3? Regards, Karan Misra

    Read the article

  • Implicit Memory Barriers

    - by foo
    Let's say I have variables A, B and C that two threads (T1, T2) share. I have the following code:

        //T1
        //~~
        A = 1;
        B = 1;
        C = 1;
        InterlockedExchange(ref Foo, 1);

        //T2 (executes AFTER T1 calls InterlockedExchange)
        //~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
        InterlockedExchange(ref Bar, 1);
        WriteLine(A);
        WriteLine(B);
        WriteLine(C);

    Question: does calling InterlockedExchange (an implicit full fence) on T1 and T2 guarantee that T2 will "see" the writes done by T1 before the fence (the A, B and C variables), even though those variables are not placed on the same cache line as Foo and Bar?

    Read the article

  • Getting the download count of a specific S3 object

    - by phidah
    I've got a number of S3 objects that are available to my customers. Since I'd like to bill my customers by usage, I wondered if there is any smart way to get the number of times a given file has been downloaded. Alternatively, I suppose I could parse the log files provided by S3, but with 10m+ fetches per customer this might be a bit of a task. Any ideas?
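    As a rough illustration of the log-parsing route mentioned above (a sketch only: the credentials, bucket names, prefix, and key are placeholders, and it assumes S3 server access logging has been enabled into a separate logging bucket):

        # Count GET requests for one object by scanning S3 server access logs.
        from boto.s3.connection import S3Connection

        conn = S3Connection('ACCESS_KEY_ID', 'SECRET_ACCESS_KEY')
        log_bucket = conn.get_bucket('my-access-logs')        # placeholder bucket

        target_key = 'customer-files/report.pdf'              # placeholder key
        downloads = 0
        for log_obj in log_bucket.list(prefix='logs/'):       # placeholder prefix
            text = log_obj.get_contents_as_string().decode('utf-8', 'replace')
            for line in text.splitlines():
                # S3 access log lines record the operation (e.g. REST.GET.OBJECT)
                # and the requested key; count the matching GETs.
                if 'REST.GET.OBJECT' in line and target_key in line:
                    downloads += 1
        print(downloads)

    With 10m+ fetches this scan would clearly need to run as a batch job (or the counts be pre-aggregated), which is the scaling concern raised above.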

    Read the article

  • Large file download for a Rails project

    - by Horace Ho
    One client project will go online two months from now. One of the changed requirements is to support large file downloads (10 to 15MB per RAW camera file, with an expected 1,000 to 5,000 file downloads per day) worldwide for their customers. The process will be:

    1. an upload screen via paperclip stores files in the Rails app's local public folder
    2. an hourly task uploads them to web storage (S3?)
    3. the download URL is updated from the paperclip URL to the web URL

    Questions: is there a gem/plugin for this purpose? If not, is there a gem/plugin for S3 you would recommend? Questions about the storage provider: is S3 recommended, or would you recommend another service? The baseline is: the client's web server does not and will not have the bandwidth to handle the downloads. Thanks

    Read the article

  • Finding the last snapshot using boto

    - by shantanuo
    I have read the explanation of "describe_cluster_snapshots" from http://docs.pythonboto.org/en/latest/ref/redshift.html#boto.redshift.layer1.RedshiftConnection.create_cluster. It has start_time and end_time options, but there is no way to sort the results. How do I get the id of the latest snapshot using boto? Here is what I have tried, but it does not seem to return the last snapshot:

        mysnap = conn.describe_cluster_snapshots()
        mysnapidentifier = mysnap['DescribeClusterSnapshotsResponse']['DescribeClusterSnapshotsResult']['Snapshots'][-1]['SnapshotIdentifier']
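    One way to pick the newest snapshot is to sort client-side on the snapshot creation time rather than relying on the order of the returned list. A minimal sketch (it assumes each snapshot dict carries a 'SnapshotCreateTime' field, which is worth verifying against the boto version in use; the region is just an example):

        import boto.redshift

        conn = boto.redshift.connect_to_region('us-east-1')   # example region
        mysnap = conn.describe_cluster_snapshots()
        snapshots = mysnap['DescribeClusterSnapshotsResponse'] \
                          ['DescribeClusterSnapshotsResult']['Snapshots']

        # Sort by creation time and take the most recent snapshot.
        latest = max(snapshots, key=lambda s: s['SnapshotCreateTime'])
        print(latest['SnapshotIdentifier'])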

    Read the article
