Search Results

Search found 307 results on 13 pages for 'crab bucket'.


  • Space-efficient data structures for broad-phase collision detection

    - by Marian Ivanov
    As far as I know, these are the three types of data structures that can be used for the collision-detection broad phase:

    Unsorted arrays: check every object against every other object - O(n^2) time, O(1) extra space. It's so slow it's useless unless n is really small.

        for (i = 1; i < objects; i++) {
            for (j = 0; j < i; j++) {
                narrowPhase(i, j);
            }
        }

    Sorted arrays: sort the objects, so that you get O(n^(2-1/k)) time for k dimensions (O(n^1.5) for 2D and O(n^1.67) for 3D) and O(n) space. Assuming the space is 2D, sortedArray is sorted so that if one object begins at sortedArray[i] and another object ends at sortedArray[i-1], they don't collide.

    Heaps of stacks: divide the objects between a heap of stacks, so that you only have to check an object's bucket, its children and its parents - O(n log n) time, but O(n^2) space. This is probably the most frequently used approach.

    Is there a way of having O(n log n) time with less space? When is it more efficient to use sorted arrays over heaps, and vice versa?
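
    For readers weighing the options above: the usual answer to the space question is a uniform grid, also called spatial hashing. Each object is stored only in the cells it overlaps, which keeps space at O(n) for roughly uniformly sized objects, with near-linear expected time. A minimal sketch in Java; the cell size, the Box type and the pair handling are assumptions, not part of the question:

        import java.util.ArrayList;
        import java.util.HashMap;
        import java.util.List;
        import java.util.Map;

        // Hypothetical axis-aligned box; the question doesn't define one.
        class Box {
            double minX, minY, maxX, maxY;
        }

        class SpatialHash {
            private final double cell; // pick roughly the typical object size
            private final Map<Long, List<Box>> grid = new HashMap<Long, List<Box>>();

            SpatialHash(double cellSize) { cell = cellSize; }

            // Pack two 32-bit cell coordinates into one map key.
            private static long key(int x, int y) {
                return ((long) x << 32) ^ (y & 0xffffffffL);
            }

            // Insert a box into every cell it overlaps; boxes already in those
            // cells are candidate pairs (duplicates are possible when two boxes
            // share several cells, so deduplicate before the narrow phase).
            void insert(Box b, List<Box[]> candidates) {
                int x0 = (int) Math.floor(b.minX / cell), x1 = (int) Math.floor(b.maxX / cell);
                int y0 = (int) Math.floor(b.minY / cell), y1 = (int) Math.floor(b.maxY / cell);
                for (int x = x0; x <= x1; x++) {
                    for (int y = y0; y <= y1; y++) {
                        List<Box> bucket = grid.get(key(x, y));
                        if (bucket == null) {
                            bucket = new ArrayList<Box>();
                            grid.put(key(x, y), bucket);
                        }
                        for (Box other : bucket) {
                            candidates.add(new Box[] { b, other });
                        }
                        bucket.add(b);
                    }
                }
            }
        }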

    Read the article

  • links for 2011-01-03

    - by Bob Rhubart
    Using Solaris zfs + iscsi targets with Oracle VM (Wim Coekaerts Blog): "I was playing with my Oracle VM setup and needed some shared storage that was block based. I did not have a storage array available but I did have a Solaris box, that I use for Oracle VDI, available." - Wim Coekaerts (tags: oracle otn solaris oraclevm virtualization)
    DanT's GridBlog: Oracle Grid Engine: Changes for a Bright Future at Oracle: "Today, we are entering a new chapter in Oracle Grid Engine’s life. Oracle has been working with key members of the open source community to pass on the torch for maintaining the open source code base to the Open Grid Scheduler project hosted on SourceForge." - Dan Templeton (tags: oracle gridengine)
    Oracle Fusion Middleware Security: How do I secure my services? "I've been up early for a couple of days talking to a customer about how they should secure their services," says Chris Johnson. "I'm going to tell you what I told them." (tags: oracle fusionmiddleware security)
    OldSpice your Innovation - Dangers of Status Quo E2.0 | Enterprise 2.0 Blogs: "If organizations only leverage E2.0 technologies in a 'me too' fashion, they are essentially using a bucket to bail water from a leaking ship." - John Brunswick (tags: oracle enteprise2.0)
    The Aquarium: GlassFish in 2011 - What to expect: A look into the GlassFish crystal ball... (tags: oracle glassfish)
    Andrejus Baranovskis's Blog: Fusion Middleware 11g Security - Retrieve Security Groups from ADF 11g: Oracle ACE Director Andrejus Baranovskis shows you what to do when you need to access security information directly from an ADF 11g application. (tags: oracle otn fusionmiddleware security adf)
    @eelzinga: Book review : Oracle SOA Suite 11g R1 Developer's Guide: "What I really liked in this book...was the compare/description of the Oracle Service Bus. The authors did a great job of describing the functionality of components existing in the SOA Suite and how to model them in your own process." - Oracle ACE Eric ElZinga (tags: oracle oracleace soa bookreview soasuite)

    Read the article

  • Floodfill algorithm for Go

    - by user1048606
    The flood fill algorithm is used in the bucket tool in MS Paint and Photoshop, but it can also be used for Go and Minesweeper. http://en.wikipedia.org/wiki/Flood_fill In Go you can capture groups of stones; this website portrays it with two stones. http://www.connectedglobe.com/mindy/cap6.html This is my flood fill method in Java. It is not capturing a group of stones and I have no idea why, because to me it makes sense.

        public void floodfill(int turn, int col, int row) {
            for (int a = col; a < 19; a++) {
                for (int b = row; b < 19; b++) {
                    if (turn == black) {
                        if (stones[col][row] == white) {
                            stones[col][row] = 0;
                            floodfill(black, col-1, row);
                            floodfill(black, col+1, row);
                            floodfill(black, col, row-1);
                            floodfill(black, col, row+1);
                        }
                    }
                }
            }
        }

    It searches up, down, left, and right for all the stones on the board. If the stones are white it captures them by making them 0, which represents empty.
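
    For comparison, a minimal sketch of the usual recursive formulation (hypothetical constants and board layout, not the original poster's code). The loops over a and b in the question never influence the body, which always tests the loop-invariant stones[col][row]; a flood fill instead bounds-checks and recurses on the visited cell itself. A real Go engine would also verify the group has no liberties before removing it:

        // Hypothetical board representation: 0 = empty, with integer codes
        // for black and white, as implied by the question.
        static final int SIZE = 19, EMPTY = 0;
        static int[][] stones = new int[SIZE][SIZE];

        // Remove the connected group of `target`-colored stones containing
        // (col, row); the liberties check is deliberately omitted here.
        static void captureGroup(int target, int col, int row) {
            if (col < 0 || col >= SIZE || row < 0 || row >= SIZE) return; // off the board
            if (stones[col][row] != target) return; // empty or other color: stop
            stones[col][row] = EMPTY;               // capture this stone
            captureGroup(target, col - 1, row);
            captureGroup(target, col + 1, row);
            captureGroup(target, col, row - 1);
            captureGroup(target, col, row + 1);
        }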

    Read the article

  • Config transformations and “TransformXml task failed” error message

    - by Troy Hunt
    I’ve just enabled config transformations on a .NET 3.5 project in VS2010 RC after watching Scott Hanselman’s video on web deployment. Unfortunately, every time I go to publish I now get the following error:

        The "TransformXml" task failed unexpectedly.
        System.UriFormatException: Invalid URI: The URI is empty.
           at System.Uri.CreateThis(String uri, Boolean dontEscape, UriKind uriKind)
           at System.Uri..ctor(String uriString)
           at Microsoft.Web.Publishing.Tasks.TransformXml.Execute()
           at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute()
           at Microsoft.Build.BackEnd.TaskBuilder.ExecuteInstantiatedTask(ITaskExecutionHost taskExecutionHost, TaskLoggingContext taskLoggingContext, TaskHost taskHost, ItemBucket bucket, TaskExecutionMode howToExecuteTask, Boolean& taskResult)

    If I take a brand new VS2010 web application, which already has the config transformations by default, I don’t have a problem, so I suspect my issue is project related. Has anyone come across this before, or have any ideas on a fix?

    Read the article

  • How to process images with paperclip on Heroku?

    - by Yuri
    I use Heroku for my app. I want to auto-orient the image and then resize it. So I do:

        class User < ActiveRecord::Base
          Paperclip.options[:swallow_stderr] = false
          has_attached_file :photo,
            :styles => { :square => "100%", :large => "100%" },
            :convert_options => { :square => "-auto-orient -geometry 70X70#",
                                  :large => "-auto-orient -geometry X300" },
            :storage => :s3,
            :s3_credentials => "#{RAILS_ROOT}/config/s3.yml",
            :path => ":attachment/:id/:style.:extension",
            :bucket => 'mybucket'
          validates_attachment_size :photo, :less_than => 5.megabyte
        end

    It does not work, with this error:

        There was an error processing the thumbnail for stream.20143

    What am I doing wrong?

    Read the article

  • BizTalk 2009 fault when using POP3 adapter

    - by Sergej Andrejev
    Has anybody come across a problem with the POP3 adapter in BizTalk 2009? When the POP3 adapter is added to a receive location and bound to a port, the following errors appear in the Windows log.

    Error 1:

        Faulting application name: BTSNTSvc.exe, version: 3.8.368.0, time stamp: 0x49b1dadf
        Faulting module name: KERNELBASE.dll, version: 6.1.7600.16385, time stamp: 0x4a5bdaae
        Exception code: 0xe0434f4d
        Fault offset: 0x00009617
        Faulting process id: 0x1d2c
        Faulting application start time: 0x01ca459d0255429e
        Faulting application path: C:\Program Files\Microsoft BizTalk Server 2009\BTSNTSvc.exe
        Faulting module path: C:\Windows\system32\KERNELBASE.dll
        Report Id: 4131d61a-b190-11de-b230-0017f2bdecec

    Error 2:

        Fault bucket , type 0
        Event Name: APPCRASH
        Response: Not available
        Cab Id: 0
        Problem signature:
        P1: BTSNTSvc.exe
        P2: 3.8.368.0
        P3: 49b1dadf
        P4: KERNELBASE.dll
        P5: 6.1.7600.16385
        P6: 4a5bdaae
        P7: e0434f4d
        P8: 00009617
        P9:
        P10:
        Attached files:
        C:\Users\sandrejev\AppData\Local\Temp\BizTalkTraceLog.bin
        C:\Users\sandrejev\AppData\Local\Temp\WER9C3F.tmp.appcompat.txt
        C:\Users\sandrejev\AppData\Local\Temp\WER9D49.tmp.WERInternalMetadata.xml
        C:\Users\sandrejev\AppData\Local\Temp\WERB0AC.tmp.mdmp
        C:\Users\sandrejev\AppData\Local\Temp\WERB2B0.tmp.WERDataCollectionFailure.txt
        These files may be available here:
        C:\ProgramData\Microsoft\Windows\WER\ReportQueue\AppCrash_BTSNTSvc.exe_5ef546265feb369cdca82e8be551ee898dc2106d_cab_1a79b2cb
        Analysis symbol:
        Rechecking for solution: 0
        Report Id: e681f07b-b18f-11de-b230-0017f2bdecec
        Report Status: 4

    Read the article

  • Filesize with SWFUpload and Amazon S3

    - by Dodinas
    Hello all, I'm currently using SWFUpload to upload files to my S3 bucket, and it's working great. I'm using the script from a website here: http://www.anedix.com/news/article/50 Again, the upload to S3 works fine; however, I've been running into an issue when attempting to upload larger files. It seems that I cannot upload anything over 50MB. I have tried this both from my webhost and locally, using my local testing environment. My question is this: when uploading with SWFUpload, the file should be going straight to Amazon S3, correct? If so, PHP settings such as MAX_UPLOAD_SIZE should not affect it? (Even though in my local environment I've set it to 1024MB.) Essentially, what the script does is show that it's uploading the file (it takes the appropriate amount of time), redirect to the success page, and throw no errors. Any ideas on why this would be happening, or how I can troubleshoot it? Thanks!

    Read the article

  • Publish limit on Facebook's Graph API

    - by Andy
    Hey guys, I've been using the Graph API for a while. One feature of my application is that it allows a user to post a message on their friends' walls (don't worry, it is not spam). Anyway, there is a limit on the API, and it will only allow a certain number of posts before failing. I've read about the Facebook bucket allocation limits, but my app's limit has not moved. It was 26 when I created the app. It is still 26, even though there are about 20 users. What can I do to increase my publish limit? And I promise this app is not used for anything spam-related.

    Read the article

  • Java webapp: adding a content-disposition header to force browsers "save as" behavior

    - by WizardOfOdds
    Even though it's only defined in RFC 2183 and not part of HTTP 1.1/RFC 2616, webapps that wish to force a resource to be downloaded (rather than displayed) in a browser can use the Content-Disposition header like this:

        Content-Disposition: attachment; filename=FILENAME

    It works in most web browsers as wanted, so from the client side everything is good enough. However, on the server side, in my case, I've got a Java webapp and I don't know how I'm supposed to set that header, especially in the following case: I'll have a file (say called "bigfile") hosted on an Amazon S3 instance (my S3 bucket shall be accessible using a partial address like files.mycompany.com/), so users will be able to access this file at files.mycompany.com/bigfile. Now is there a way to craft a servlet (or a .jsp) so that the Content-Disposition header is always added when the user wants to download that file? What would the code look like, and what are the gotchas, if any?
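
    For the record, a minimal sketch of the streaming approach (the servlet name, its mapping, and the ?file= parameter are assumptions; the S3 URL comes from the question). The main gotchas: every byte now flows through the webapp instead of straight from S3, and the file name must be validated before being echoed into a header or a URL:

        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;
        import java.net.URL;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class DownloadServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                String name = req.getParameter("file"); // e.g. "bigfile"
                if (name == null || !name.matches("[A-Za-z0-9._-]+")) { // reject path tricks
                    resp.sendError(HttpServletResponse.SC_BAD_REQUEST);
                    return;
                }
                resp.setContentType("application/octet-stream");
                resp.setHeader("Content-Disposition", "attachment; filename=" + name);
                URL source = new URL("http://files.mycompany.com/" + name);
                InputStream in = source.openStream();
                OutputStream out = resp.getOutputStream();
                try {
                    byte[] buf = new byte[8192];
                    int n;
                    while ((n = in.read(buf)) != -1) out.write(buf, 0, n); // stream S3 -> client
                } finally {
                    in.close();
                }
            }
        }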

    Read the article

  • windbg/cdb hangs when bp hit

    - by aaron
    I have a problem where cdb or windbg hangs frequently, but not all the time, when I'm debugging and attach to a specific application on my machine. I found this article: http://www.nynaeve.net/?p=164 which talks about a symbol-loading race condition being the problem, but I can force-load the symbols, actually have a breakpoint in-app work, and still have it hang elsewhere. Here is the stack from cdb itself when I attach to it with another debugger:

        ntdll!NtReadFile
        kernel32!ReadFile
        cdb!ReadNonConLine
        cdb!ConIn
        cdb!MainLoop
        cdb!main

    !analyze reports that APPLICATION_HANG_BusyHang is the problem bucket and ReadNonConLine is the offending function. As far as the stack goes:

        ffffffff`fffffffe 00000000`00000000 00000001`3f641498 00000000`0014ea50 : kernel32!ReadFile+0x86
        00000000`000002a4 00000000`0014ebb0 00000000`00001000 00000000`00000000 : cdb!ReadNonConLine+0x6d

    ReadNonConLine has the string "g" at 0014ebb0 passed as a param, which may be part of the command I had at the hanging breakpoint (it was something like bp foo "dt a; g"). ReadFile takes a handle as its first parameter; I'm surprised by the value -2, though, as that doesn't look valid. Any help is appreciated. Thanks! Aaron

    Read the article

  • How to store an object in Riak with the Java client?

    - by Jonas
    I have set up Riak on an Ubuntu machine, and it seems to work when I do riak ping. Now I would like to use the Riak Java client to store an object, but it doesn't work: I get com.basho.riak.client.response.RiakIORuntimeException when I try to store it. What am I doing wrong? Is there a way to test whether I can access Riak from my Java client? Do I have to create a bucket first? How?

        import com.basho.riak.client.RiakClient;
        import com.basho.riak.client.RiakObject;
        import com.basho.riak.client.response.FetchResponse;

        public class RiakTest {
            public static void main(String[] args) {
                // connect
                RiakClient riak = new RiakClient("http://192.168.1.107:8098/riak");
                // create object
                RiakObject o = new RiakObject("mybucket", "mykey", "myvalue");
                // store
                riak.store(o);
            }
        }
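
    One way to isolate problems like this is to bypass the client library and hit Riak's HTTP interface directly with plain JDK classes. A minimal sketch; the host and port come from the question, while the bucket and key are arbitrary:

        import java.io.OutputStream;
        import java.net.HttpURLConnection;
        import java.net.URL;

        public class RiakSmokeTest {
            public static void main(String[] args) throws Exception {
                // PUT /riak/<bucket>/<key> stores a value over Riak's HTTP API.
                URL url = new URL("http://192.168.1.107:8098/riak/mybucket/mykey");
                HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                conn.setRequestMethod("PUT");
                conn.setDoOutput(true);
                conn.setRequestProperty("Content-Type", "text/plain");
                OutputStream out = conn.getOutputStream();
                out.write("myvalue".getBytes("UTF-8"));
                out.close();
                // A 2xx status means the node is reachable and the write succeeded;
                // Riak creates buckets implicitly on the first store, so no
                // separate bucket-creation step is needed.
                System.out.println("HTTP status: " + conn.getResponseCode());
            }
        }

    If this succeeds while the client library fails, the problem is in the client configuration rather than the node.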

    Read the article

  • Need an encrypted online source code backup service.

    - by camelCase
    Please note this is not a question about online/hosted SVN services. I am working on a home-based, solo-developer project that now has commercial significance, and it is time to think about remote source code backup. There is no need for file-level check-in/check-out; all I need is a once-a-day or once-a-week directory-level snapshot to remote storage. Automatic encryption would be a bonus to protect my IP. What I have in mind is some sort of GUI app that will squirt a source code snapshot off to an Amazon S3 bucket on an automatic schedule. (My development PC runs MS Windows.)

    Read the article

  • Are JSF 2.x ViewScoped Beans Thread Safe?

    - by Mark
    I've been googling for a couple of hours on this issue to no avail. The WELD docs and the CDI spec are pretty clear regarding the thread safety of the scopes provided. For example:

        Application scope - not safe
        Session scope - not safe
        Request scope - safe, always bound to a single thread
        Conversation scope - safe (due to the WELD proxy serializing access from multiple request threads)

    I can't find anything on the view scope defined by JSF 2.x. It is in roughly the same bucket as the conversation scope, in that it is very possible for multiple requests to hit the scope concurrently despite it being bound to a single view/user. What I don't know is whether the JSF implementation serializes access to the bean from multiple requests. Anyone have knowledge of the spec, or of the Mojarra/MyFaces implementations, that could clear this up?
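
    For readers hitting the same question: absent an explicit guarantee from the spec, one defensive sketch (a hypothetical bean, not taken from any spec text) is to serialize access to mutable state yourself:

        import java.io.Serializable;
        import javax.faces.bean.ManagedBean;
        import javax.faces.bean.ViewScoped;

        @ManagedBean
        @ViewScoped
        public class CounterBean implements Serializable {
            private int clicks;

            // Synchronizing on the bean instance serializes concurrent requests
            // (e.g. parallel AJAX calls) bound to the same view.
            public synchronized void increment() { clicks++; }
            public synchronized int getClicks() { return clicks; }
        }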

    Read the article

  • What does BucketAlreadyOwnedByYou error (from Amazon S3) actually mean? I can't find any reason affe

    - by Phyo Wai Win
    Hi there, I am using Amazon S3 to back up my Rails app's MySQL database, via the astrails-safe plugin, and I get the error "Your previous request to create the named bucket succeeded and you already own it. (AWS::S3::BucketAlreadyOwnedByYou)" back whenever I try to update it. I have checked that the bucket I am backing up to already exists in my account; it's just that I can't upload the files from the code (using astrails-safe). Any help would be appreciated! Thanks.

    Read the article

  • SQL query performance optimization (TimesTen)

    - by Sergey Mikhanov
    Hi community, I need some help with TimesTen DB query optimization. I made some measurements with a Java profiler and found the code section that takes most of the time (this code section executes the SQL query). What is strange is that this query becomes expensive only for some specific input data. Here's an example. We have two tables that we are querying: one represents the objects we want to fetch (T_PROFILEGROUP), the other represents a many-to-many link from some other table (T_PROFILECONTEXT_PROFILEGROUPS). We are not querying the linked table. These are the queries that I executed with the DB profiler running (they are the same except for the ID):

        Command> select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272;
        < 1169655247309537280 >
        < 1169655249792565248 >
        < 1464837997699399681 >
        3 rows found.
        Command> select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928;
        < 1169655247309537280 >
        1 row found.

    This is what I have in the profiler:

        12:14:31.147 1 SQL 2L 6C 10825P Preparing: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272
        12:14:31.147 2 SQL 4L 6C 10825P sbSqlCmdCompile ()(E): (Found already compiled version: refCount:01, bucket:47) cmdType:100, cmdNum:1146695.
        12:14:31.147 3 SQL 4L 6C 10825P Opening: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272;
        12:14:31.147 4 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272;
        12:14:31.148 5 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272;
        12:14:31.148 6 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272;
        12:14:31.228 7 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272;
        12:14:31.228 8 SQL 4L 6C 10825P Closing: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272;
        12:14:35.243 9 SQL 2L 6C 10825P Preparing: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928
        12:14:35.243 10 SQL 4L 6C 10825P sbSqlCmdCompile ()(E): (Found already compiled version: refCount:01, bucket:44) cmdType:100, cmdNum:1146697.
        12:14:35.243 11 SQL 4L 6C 10825P Opening: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928;
        12:14:35.243 12 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928;
        12:14:35.243 13 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928;
        12:14:35.243 14 SQL 4L 6C 10825P Closing: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928;

    It's clear that the first query took almost 100 ms, while the second was executed instantly. It's not a matter of query precompilation (the first one is precompiled too, as the same queries happened earlier). We have DB indexes on all the columns used here: T_PROFILEGROUP.M_ID, T_PROFILECONTEXT_PROFILEGROUPS.M_ID_OID and T_PROFILECONTEXT_PROFILEGROUPS.M_ID_EID. My questions are:

        Why does querying the same set of tables yield such different performance for different parameters?
        Which indexes are involved here?
        Is there any way to improve this simple query and/or the DB to make it faster?

    UPDATE: to give a feeling of the size:

        Command> select count(*) from T_PROFILEGROUP;
        < 183840 >
        1 row found.
        Command> select count(*) from T_PROFILECONTEXT_PROFILEGROUPS;
        < 2279104 >
        1 row found.

    Read the article

  • MAD method compression function

    - by Jacques
    I ran across the question below in an old exam. My answer just feels a bit short and inadequate. Any extra ideas I can look into, or reasons I have overlooked, would be great. Thanks.

    Consider the MAD method compression function, mapping an object with hash code i to element [(3i + 7) mod 9027] mod 6000 of the 6000-element bucket array. Explain why this is a poor choice of compression function, and how it could be improved.

    I basically just say that the function could be improved by changing the value for p (or 9027) to a prime number, and that choosing another constant for a (or 3) could also help.
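
    For what it's worth, the answer can be made concrete with a short calculation. 9027 = 9 x 17 x 59 is not prime, and gcd(3, 9027) = 3, so for every hash code i:

        (3i + 7) mod 9027 is always congruent to 1 (mod 3), since 3i mod 3 = 0 and 7 mod 3 = 1

    Because 6000 is also divisible by 3, the outer mod preserves that residue: every bucket index is congruent to 1 (mod 3), so at most 2000 of the 6000 buckets can ever be used. The textbook remedy matches the answer above: use [(ai + b) mod p] mod N with p prime (p > N) and a mod p != 0, so that a and p share no common factor and all N buckets are reachable.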

    Read the article

  • Is there a dictionary about common programming vocabulary?

    - by _simon_
    When I need a name for a new class that extends the behaviour of an existing class, I usually have a hard time coming up with one. For example, if I have a class MyClass, then the new class could be named something like MyClassAdapter, MyClassCalculator, MyClassDispatcher, MyClassParser, ... This new name should of course represent the behaviour of the class, and would ideally be the same as the design pattern in which it is used (Adapter, Decorator, Factory, ...). But since we don't overuse design patterns, this is not always the solution :) So, do you know of a dictionary or a list of common words that we can use to represent the behaviour of a class, containing a short description of the expected behaviour? Some examples: replicator, shadow, token, acceptor, worker, mapper, driver, bucket, socket, validator, wrapper, parser, verifier, ... You could also look at this list as a cheat sheet for metaphors, with which you can better understand your problem domain.

    Read the article

  • .NET & ASP vs PHP

    - by gargantaun
    Earlier today I asked whether it would be a good idea to develop websites using C#. Most of the answers pointed towards .NET and ASP. Currently I develop with PHP. I've dabbled with Python and RoR, but I always come back to PHP. This is the first time I've looked at .NET and ASP. A bucketload of Google searches later, I'm not really seeing much support for ASP online, but then it all seems a bit biased towards PHP/Apache/MySQL. It looks like there's a fair number of .NET and ASP folk around here, so I figured it was worth asking for their input in an attempt to address the balance in my own head. It can't all be bad. What advantages are there to .NET and ASP over PHP?

    Read the article

  • Amazon S3 copy request timeouts

    - by jean27
    We're uploading files to a temporary folder in a bucket. After that, we try to copy the uploaded files to their actual folder and then delete the files in the temporary folder. It doesn't time out when working with a single file. We're using the ThreeSharp API. Stack trace:

        [WebException: The operation has timed out]
           System.Net.HttpWebRequest.GetRequestStream() +5322142
           Affirma.ThreeSharp.Query.ThreeSharpQuery.GenerateAndSendHttpWebRequest(Request request) in C:\Consulting\Amazon\Amazon S3\Affirma.ThreeSharp\Affirma.ThreeSharp\Query\ThreeSharpQuery.cs:386
           Affirma.ThreeSharp.Query.ThreeSharpQuery.Invoke(Request request) in C:\Consulting\Amazon\Amazon S3\Affirma.ThreeSharp\Affirma.ThreeSharp\Query\ThreeSharpQuery.cs:479

    Read the article

  • How can I assign a two-dimensional array to another temporary two-dimensional array in C?

    - by AGeek
    Hi, I am trying to store the contents of a two-dimensional array in a temporary array. How is this possible? I don't want looping here, as it would add extra overhead; any pointer notation would be good.

        struct bucket {
            int nStrings;
            char strings[MAXSTRINGS][MAXWORDLENGTH];
        };

        void func() {
            char **tArray;
            int tLenArray = 0;
            for (i = 0; i < TOTBUCKETS-1; i++) {
                if (buck[i].nStrings != 0) {
                    tArray = buck[i].strings;
                    tLenArray = buck[i].nStrings;
                }
            }
        }

    The error I am getting is:

        [others@centos htdocs]$ gcc lexorder.c
        lexorder.c: In function 'lexSorting':
        lexorder.c:40: warning: assignment from incompatible pointer type

    Please let me know if this needs more explanation.

    Read the article

  • How to securely serve S3 files to blog

    - by Hugo Palma
    I'm starting a blog and I'm in the process of choosing where to host it. For now I want a free solution like Blogger or WordPress.com. The problem I'm facing is that I want to use files I have in an S3 bucket in my blog, but none of the blog solutions I found supports any kind of server code, which means that in order to use S3 query string authentication I would have to put vulnerable information in the client. For obvious reasons I don't want to do that. So, I'm looking for ideas on how I can safely include content from S3 in a free blog host.
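
    For reference, the piece of server code that free blog hosts won't run is small. With the official AWS SDK for Java it looks roughly like the sketch below (the bucket name, key, and expiry are placeholders), which is why a common compromise is a tiny signing endpoint hosted elsewhere rather than embedding the secret key in the page:

        import java.util.Date;
        import com.amazonaws.auth.BasicAWSCredentials;
        import com.amazonaws.services.s3.AmazonS3Client;

        public class SignedLink {
            public static void main(String[] args) {
                // The secret key must stay server-side; exposing it in a blog
                // page is exactly the vulnerability described above.
                AmazonS3Client s3 = new AmazonS3Client(
                        new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));
                Date expiry = new Date(System.currentTimeMillis() + 60 * 60 * 1000L); // 1 hour
                java.net.URL url = s3.generatePresignedUrl("my-bucket", "images/photo.jpg", expiry);
                System.out.println(url); // a time-limited link safe to publish
            }
        }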

    Read the article

  • What languages, frameworks, and technologies have you used to implement document searching?

    - by Bill Brasky
    I am at a new company and one of our goals is to implement a document search portal for our team and our clients. I am a bit worried that if we use an external service provider like Salesforce or some other ECM in the cloud there will be a lot of integration work in the future. From a client perspective, these documents will also exist in the same bucket as our structured content (stored in the DB, not a MS Word doc). If you have implemented document searching, what languages, frameworks, and technologies have you used? Do you have any failure stories? I don't have a problem using something out of the box, but I think it is important that we have control over the documents and the API to access them. I would like to use Rails if we go fully custom.

    Read the article

  • Mysql replace() function, help with query (what chars do I escape?)

    - by jyoseph
    I am trying to update an old CMS where images were stored in /images/editor/; they are now stored in a bucket on Amazon S3. I'm trying to update the database using MySQL's replace(). I've done this in the past when replacing simple words, but now MySQL reports an error, I suspect because this is more than a simple word:

        UPDATE contents SET desc = replace(desc, '/images/editor/', 'http://s3.amazonaws.com/my_bucket/editor/')

    Do I need to escape the colon or the slashes? I've tried escaping them with '\' to no avail. Can someone point me in the right direction? Thanks!

    Edit: Here's the error I am getting, nothing too telling:

        error : You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'desc = replace(desc, '/images/editor', 'http://s3.amazonaws.com/app_navigator/ed' at line 1
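
    Worth noting for anyone reading this one: nothing in the replacement string needs escaping. The syntax error points at desc, which is a reserved word in MySQL (as in ORDER BY ... DESC), so it is the column name that has to be quoted with backticks, along the lines of:

        UPDATE contents SET `desc` = replace(`desc`, '/images/editor/', 'http://s3.amazonaws.com/my_bucket/editor/');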

    Read the article

  • s3cmd setacl to grant 'Authenticated Users'

    - by rynop
    I'm using jgit to create a remote in S3. The problem I'm having is that when I do a jgit push s3 master, it creates the files in S3 as owned by me. I want to keep the files private, and read/write for 'Authenticated Users'. I'd like to be able to either set acl: authenticated-read in the ~/.jgit file OR modify the ACL after the push:

        s3cmd --add-header=x-amz-acl:authenticated-read setacl --acl-private s3://my.bucket/repo/*

    Neither of these works. How do I use jgit, push to S3, keep the files private, and let anyone with auth read/write?

    Read the article

  • Map Reduce job on Amazon: argument for custom jar

    - by zero51
    Hi all, this is one of my first tries with MapReduce on AWS in its Management Console. I have uploaded to AWS S3 my runnable jar developed on Hadoop 0.18, and it works on my local machine. As described in the documentation, I have passed the S3 paths for input and output as arguments of the jar: all right, but the problem is the third argument, which is another path (as a string) to a file that I need to load while the job is executing. That file resides in an S3 bucket too, but it seems that my jar doesn't recognize the path, and I get a FileNotFoundException while it tries to load it. That is strange because this is a path exactly like the other two... Anyone have any idea? Thank you, Luca
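
    A common cause (an assumption here, since the loading code isn't shown): the input and output paths are handed to the framework, which resolves them against S3, while a path opened by hand with java.io.File only looks at the local disk and fails with FileNotFoundException. A sketch of opening the side file through Hadoop's FileSystem abstraction instead; the s3n:// URI scheme and the exact factory signature on 0.18 are assumptions:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.URI;
        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;

        public class SideFileLoader {
            // `uri` is the third program argument, e.g. "s3n://mybucket/lookup.txt"
            // (bucket and file name here are made up for illustration).
            public static void load(String uri, Configuration conf) throws Exception {
                FileSystem fs = FileSystem.get(URI.create(uri), conf);
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(fs.open(new Path(uri))));
                try {
                    String line;
                    while ((line = in.readLine()) != null) {
                        // process one line of the side file
                    }
                } finally {
                    in.close();
                }
            }
        }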

    Read the article
