Search Results

Search found 296 results on 12 pages for 'bucket'.

Page 9/12 | < Previous Page | 5 6 7 8 9 10 11 12  | Next Page >

  • Need an encrypted online source code backup service.

    - by camelCase
    Please note this is not a question about online/hosted SVN services. I am working on a home-based, solo-developer project that now has commercial significance, and it is time to think about remote source code backup. There is no need for file-level check-in/check-out; all I need is a once-a-day or once-a-week directory-level snapshot to remote storage. Automatic encryption would be a bonus to protect my IP. What I have in mind is some sort of GUI app that will squirt a source code snapshot off to an Amazon S3 bucket on an automatic schedule. (My development PC runs on MS Windows.)
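
    Not part of the question, but as a rough, hedged sketch of the kind of snapshot-encrypt-upload step such a tool performs, here is a minimal Python example using boto3 and the cryptography library; the bucket name, source path, and key file are placeholders, and scheduling is left to Task Scheduler or cron.

        import tarfile, datetime, boto3
        from cryptography.fernet import Fernet

        def snapshot_to_s3(source_dir, bucket, key_file):
            # archive the source tree
            stamp = datetime.date.today().isoformat()
            archive = f"snapshot-{stamp}.tar.gz"
            with tarfile.open(archive, "w:gz") as tar:
                tar.add(source_dir, arcname="src")
            # encrypt the archive before it leaves the machine
            key = open(key_file, "rb").read()          # symmetric key kept locally
            token = Fernet(key).encrypt(open(archive, "rb").read())
            encrypted = archive + ".enc"
            with open(encrypted, "wb") as f:
                f.write(token)
            # push the encrypted snapshot to S3
            boto3.client("s3").upload_file(encrypted, bucket, f"backups/{encrypted}")

        snapshot_to_s3("C:/dev/myproject", "my-backup-bucket", "fernet.key")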

    Read the article

  • Are JSF 2.x ViewScoped Beans Thread Safe?

    - by Mark
    I've been googling for a couple of hours on this issue, to no avail. The WELD docs and the CDI spec are pretty clear regarding thread safety of the scopes provided. For example: Application Scope - not safe; Session Scope - not safe; Request Scope - safe, always bound to a single thread; Conversation Scope - safe (due to the WELD proxy serializing access from multiple request threads). I can't find anything on the View Scope defined by JSF 2.x. It is in roughly the same bucket as the Conversation Scope, in that it is very possible for multiple requests to hit the scope concurrently despite it being bound to a single view/user. What I don't know is whether the JSF implementation serializes access to the bean from multiple requests. Anyone have knowledge of the spec or of the Mojarra/MyFaces implementations that could clear this up?

    Read the article

  • What does BucketAlreadyOwnedByYou error (from Amazon S3) actually mean? I can't find any reason affe

    - by Phyo Wai Win
    Hi there, I am using Amazon S3 to back up my Rails app's MySQL database, and I am using the astrails-safe plugin to do that. I get the "Your previous request to create the named bucket succeeded and you already own it. (AWS::S3::BucketAlreadyOwnedByYou)" error back whenever I try to update it. I have checked that the folder I am backing up to already exists in my account. It's just that I can't upload the files from the code (using astrails-safe). Any help would be appreciated! Thanks.
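
    For context, BucketAlreadyOwnedByYou is S3's way of saying a repeated create-bucket call hit a bucket the same account already owns, and many clients simply treat it as harmless. This is not the asker's Ruby/astrails-safe stack, just a hedged boto3 sketch of how the error is usually swallowed; the bucket name is a placeholder.

        import boto3
        from botocore.exceptions import ClientError

        s3 = boto3.client("s3")

        def ensure_bucket(name):
            try:
                s3.create_bucket(Bucket=name)
            except ClientError as e:
                # re-creating a bucket you already own is not a real failure
                if e.response["Error"]["Code"] != "BucketAlreadyOwnedByYou":
                    raise

        ensure_bucket("my-rails-backups")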

    Read the article

  • SQL query performance optimization (TimesTen)

    - by Sergey Mikhanov
    Hi community, I need some help with TimesTen DB query optimization. I took some measurements with a Java profiler and found the code section that takes most of the time (it is the section that executes the SQL query). What is strange is that this query becomes expensive only for some specific input data. Here's the example. We have two tables that we are querying: one represents the objects we want to fetch (T_PROFILEGROUP), the other represents the many-to-many link from some other table (T_PROFILECONTEXT_PROFILEGROUPS). We are not querying the linked table. These are the queries that I executed with the DB profiler running (they are the same except for the ID):

      Command> select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272;
      < 1169655247309537280 >
      < 1169655249792565248 >
      < 1464837997699399681 >
      3 rows found.
      Command> select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928;
      < 1169655247309537280 >
      1 row found.

    This is what I have in the profiler:

      12:14:31.147 1 SQL 2L 6C 10825P Preparing: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272
      12:14:31.147 2 SQL 4L 6C 10825P sbSqlCmdCompile ()(E): (Found already compiled version: refCount:01, bucket:47) cmdType:100, cmdNum:1146695.
      12:14:31.147 3 SQL 4L 6C 10825P Opening: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272;
      12:14:31.147 4 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272;
      12:14:31.148 5 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272;
      12:14:31.148 6 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272;
      12:14:31.228 7 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272;
      12:14:31.228 8 SQL 4L 6C 10825P Closing: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272;
      12:14:35.243 9 SQL 2L 6C 10825P Preparing: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928
      12:14:35.243 10 SQL 4L 6C 10825P sbSqlCmdCompile ()(E): (Found already compiled version: refCount:01, bucket:44) cmdType:100, cmdNum:1146697.
      12:14:35.243 11 SQL 4L 6C 10825P Opening: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928;
      12:14:35.243 12 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928;
      12:14:35.243 13 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928;
      12:14:35.243 14 SQL 4L 6C 10825P Closing: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928;

    It's clear that the first query took almost 100 ms, while the second was executed instantly. It's not about query precompilation (the first one is precompiled too, as the same queries happened earlier). We have DB indices for all columns used here: T_PROFILEGROUP.M_ID, T_PROFILECONTEXT_PROFILEGROUPS.M_ID_OID and T_PROFILECONTEXT_PROFILEGROUPS.M_ID_EID. My questions are: Why does querying the same set of tables yield such different performance for different parameters? Which indices are involved here? Is there any way to improve this simple query and/or the DB to make it faster? UPDATE: to give a feeling of the size:

      Command> select count(*) from T_PROFILEGROUP;
      < 183840 >
      1 row found.
      Command> select count(*) from T_PROFILECONTEXT_PROFILEGROUPS;
      < 2279104 >
      1 row found.

    Read the article

  • MAD method compression function

    - by Jacques
    I ran across the question below in an old exam. My answer just feels a bit short and inadequate. Any extra ideas I can look into, or reasons I have overlooked, would be great. Thanx. Consider the MAD method compression function, mapping an object with hash code i to element [(3i + 7) mod 9027] mod 6000 of the 6000-element bucket array. Explain why this is a poor choice of compression function, and how it could be improved. I basically just say that the function could be improved by changing the value of p (or 9027) to a prime number; choosing another constant for a (or 3) could also help.
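
    One way to make the "poor choice" argument concrete: since 9027 is divisible by 3, the term 3i + 7 can only land on residues congruent to 1 mod 3, so only a third of the 6000 buckets are ever used. A quick Python check of that claim, as a sketch (the prime 9029 and the multiplier 5 below are just illustrative substitutes, not part of the exam question):

        # how many of the 6000 buckets can [(3i + 7) mod 9027] mod 6000 actually reach?
        reachable = {((3 * i + 7) % 9027) % 6000 for i in range(1_000_000)}
        print(len(reachable))   # 2000 -- only buckets congruent to 1 (mod 3) are ever used

        # with a prime modulus p > 6000 and gcd(a, p) = 1, every bucket is reachable
        p, a, b = 9029, 5, 7    # 9029 is prime; a and b are arbitrary illustrative choices
        reachable_prime = {((a * i + b) % p) % 6000 for i in range(1_000_000)}
        print(len(reachable_prime))   # 6000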

    Read the article

  • Is there a dictionary about common programming vocabulary?

    - by _simon_
    When I need a name for a new class that extends the behaviour of an existing class, I usually have a hard time coming up with one. For example, if I have a class MyClass, then the new class could be named something like MyClassAdapter, MyClassCalculator, MyClassDispatcher, MyClassParser,... This new name should of course represent the behaviour of the class and would ideally be the same as the design pattern in which it is used (Adapter, Decorator, Factory,...). But since we don't overuse design patterns, this is not always the solution :) So, do you know of a dictionary or a list of common words that we can use to represent the behaviour of a class, containing a short description of the expected behaviour? Some examples: replicator, shadow, token, acceptor, worker, mapper, driver, bucket, socket, validator, wrapper, parser, verifier,... You could also look at this list as a cheat sheet for metaphors, with which you can better understand your problem domain.

    Read the article

  • .NET & ASP vs PHP

    - by gargantaun
    Earlier today I asked whether it would be a good idea to develop websites using C#. Most of the answers pointed towards .NET and ASP. Currently I develop with PHP. I've dabbled with Python and RoR, but I always come back to PHP. This is the first time I've looked at .NET and ASP. A bucket load of Google searches later, I'm not really seeing much support for ASP online, but then it all seems a bit biased towards PHP/Apache/MySQL. It looks like there's a fair amount of .NET and ASP folk around here, so I figured it's worth a shot asking for their input in an attempt to address the balance in my own head. It can't all be bad. What advantages are there to .NET and ASP over PHP?

    Read the article

  • Amazon S3 copy request timeouts

    - by jean27
    We're uploading files to a temporary folder in a bucket. After that, we're trying to copy the uploaded files to their actual folder and then delete the files in the temporary folder. It doesn't time out when working with a single file. We're using the ThreeSharp API. Stack trace:

      [WebException: The operation has timed out]
      System.Net.HttpWebRequest.GetRequestStream() +5322142
      Affirma.ThreeSharp.Query.ThreeSharpQuery.GenerateAndSendHttpWebRequest(Request request) in C:\Consulting\Amazon\Amazon S3\Affirma.ThreeSharp\Affirma.ThreeSharp\Query\ThreeSharpQuery.cs:386
      Affirma.ThreeSharp.Query.ThreeSharpQuery.Invoke(Request request) in C:\Consulting\Amazon\Amazon S3\Affirma.ThreeSharp\Affirma.ThreeSharp\Query\ThreeSharpQuery.cs:479
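
    This is not the asker's ThreeSharp/.NET stack, but as a hedged illustration of the same copy-then-delete step with explicit timeouts and retries, here is a boto3 sketch; the bucket and key names are placeholders.

        import boto3
        from botocore.config import Config

        # generous timeouts and retries for the copy-then-delete step
        cfg = Config(connect_timeout=10, read_timeout=120, retries={"max_attempts": 5})
        s3 = boto3.client("s3", config=cfg)

        def promote(bucket, tmp_key, final_key):
            # S3 copies happen server-side, so the object is never re-uploaded
            s3.copy_object(Bucket=bucket,
                           Key=final_key,
                           CopySource={"Bucket": bucket, "Key": tmp_key})
            s3.delete_object(Bucket=bucket, Key=tmp_key)

        promote("my-bucket", "temp/report.pdf", "reports/report.pdf")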

    Read the article

  • How can I assign a two-dimensional array to another temporary two-dimensional array in C?

    - by AGeek
    Hi, I am trying to store the contents of a two-dimensional array in a temporary array. How is that possible? I don't want a loop here, as it would add extra overhead; any pointer notation would be good.

      struct bucket {
          int nStrings;
          char strings[MAXSTRINGS][MAXWORDLENGTH];
      };

      void func() {
          char **tArray;
          int tLenArray = 0;
          for (i = 0; i < TOTBUCKETS-1; i++) {
              if (buck[i].nStrings != 0) {
                  tArray = buck[i].strings;
                  tLenArray = buck[i].nStrings;
              }
          }
      }

    The error I am getting is:

      [others@centos htdocs]$ gcc lexorder.c
      lexorder.c: In function 'lexSorting':
      lexorder.c:40: warning: assignment from incompatible pointer type

    Please let me know if this needs some more explanation.

    Read the article

  • How to securely serve S3 files to blog

    - by Hugo Palma
    I'm starting a blog and I'm in the process of choosing where I should host it. For now I want a free solution like Blogger or Wordpress.com. The problem I'm facing is that I want to use files I have in an S3 bucket in my blog, but none of the blog solutions I found supports any kind of server code, which means that in order to use S3 query string authentication I would have to put sensitive information in the client. For obvious reasons I don't want to do that. So, I'm looking for ideas on how I can safely include content from S3 in a free blog host.
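
    One common approach, offered only as a hedged sketch (it still requires some tiny server-side endpoint, which is exactly what a purely static blog host lacks), is to keep the AWS secret on that endpoint and hand out short-lived pre-signed URLs, so the blog pages only ever see temporary links; bucket and key names below are placeholders.

        import boto3

        s3 = boto3.client("s3")

        def temporary_link(bucket, key, seconds=3600):
            # the signature embeds an expiry, so the secret key never reaches the browser
            return s3.generate_presigned_url(
                "get_object",
                Params={"Bucket": bucket, "Key": key},
                ExpiresIn=seconds,
            )

        print(temporary_link("my-blog-assets", "images/header.png"))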

    Read the article

  • What languages, frameworks, and technologies have you used to implement document searching?

    - by Bill Brasky
    I am at a new company and one of our goals is to implement a document search portal for our team and our clients. I am a bit worried that if we use an external service provider like Salesforce or some other ECM in the cloud there will be a lot of integration work in the future. From a client perspective, these documents will also exist in the same bucket as our structured content (stored in the DB, not a MS Word doc). If you have implemented document searching, what languages, frameworks, and technologies have you used? Do you have any failure stories? I don't have a problem using something out of the box, but I think it is important that we have control over the documents and the API to access them. I would like to use Rails if we go fully custom.
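
    For the fully custom route, a self-hosted full-text index is one option. As a rough, hedged illustration of the index-then-search shape (shown in Python with the Whoosh library rather than Rails; the schema, directory, and document texts are made up):

        import os
        from whoosh import index
        from whoosh.fields import Schema, ID, TEXT
        from whoosh.qparser import QueryParser

        schema = Schema(path=ID(stored=True), content=TEXT)
        os.makedirs("indexdir", exist_ok=True)
        ix = index.create_in("indexdir", schema)

        # index a couple of extracted document texts
        writer = ix.writer()
        writer.add_document(path="docs/contract.txt", content="termination clause and renewal terms")
        writer.add_document(path="docs/sow.txt", content="statement of work for the search portal")
        writer.commit()

        # query the index
        with ix.searcher() as searcher:
            query = QueryParser("content", ix.schema).parse("renewal")
            for hit in searcher.search(query):
                print(hit["path"])

    The same pattern applies with Elasticsearch or Solr behind a Rails app; the trade-off is that you own the text-extraction and access-control layers yourself instead of delegating them to a hosted ECM.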

    Read the article

  • MySQL replace() function, help with query (what chars do I escape?)

    - by jyoseph
    I am trying to update an old CMS where images were stored in /images/editor/; they are now stored in a bucket on Amazon S3. I'm trying to update the database using MySQL's replace() function. I've done this in the past when replacing simple words, but now MySQL is reporting an error, I suspect because this is more than a simple word:

      UPDATE contents SET desc = replace(desc, '/images/editor/', 'http://s3.amazonaws.com/my_bucket/editor/')

    Do I need to escape the : or the slashes? I've tried escaping with a '\' to no avail. Can someone point me in the right direction? Thanks!
    Edit: Here's the error I am getting; nothing too telling:

      error : You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'desc = replace(desc, '/images/editor', 'http://s3.amazonaws.com/app_navigator/ed' at line 1

    Read the article

  • s3cmd setacl to grant 'Authenticated Users'

    - by rynop
    I'm using jgit to create a remote in S3. The problem I'm having is that when I do a jgit push s3 master, it creates the files in S3 as owned by me. I want to keep the files private, but readable/writable by 'Authenticated Users'. I'd like to be able to either set acl: authenticated-read in the ~/.jgit file, OR be able to modify the ACL after the push:

      s3cmd --add-header=x-amz-acl:authenticated-read setacl --acl-private s3://my.bucket/repo/*

    Neither of these works. How do I use jgit to push to S3, keep the files private, and let anyone with authentication read/write?
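
    Not an answer for jgit itself, but as a hedged boto3 sketch of the second option (re-setting the ACL on every object under the repo prefix after the push), with the bucket and prefix as placeholders:

        import boto3

        s3 = boto3.client("s3")

        def grant_authenticated_read(bucket, prefix):
            # walk every key under the prefix and apply the canned ACL
            paginator = s3.get_paginator("list_objects_v2")
            for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
                for obj in page.get("Contents", []):
                    s3.put_object_acl(Bucket=bucket,
                                      Key=obj["Key"],
                                      ACL="authenticated-read")

        grant_authenticated_read("my.bucket", "repo/")

    Note that the canned authenticated-read ACL only grants read, not write, to the authenticated-users group; write access would need an explicit grant or bucket policy.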

    Read the article

  • Map Reduce job on Amazon: argument for custom jar

    - by zero51
    Hi all, this is one of my first tries with Map Reduce on AWS in its Management Console. I have uploaded to AWS S3 my runnable jar, developed on Hadoop 0.18, and it works on my local machine. As described in the documentation, I have passed the S3 paths for input and output as arguments to the jar: all right, but the problem is the third argument, which is another path (as a string) to a file that I need to load while the job is executing. That file resides in an S3 bucket too, but it seems that my jar doesn't recognize the path, and I get a FileNotFoundException when it tries to load it. That is strange because this is a path exactly like the other two... Anyone have any idea? Thank you, Luca

    Read the article

  • RSA encrypted data block size

    - by calccrypto
    How do you store an RSA-encrypted data block? The output might be significantly larger than the original input block, and I don't think people waste memory by padding bucket loads of 0s in front of each data block. Besides, how would they be removed? Or is each block stored on a new line within the file? If that is the case, how would you tell the difference between a legitimate newline and a '\n' char written into the file? What am I missing? I'm writing the "write to file" part in Python, so maybe it's one of the differences between open(file,'w'), open(file,'w+b') and open(file,'wb') that I don't know. Or is it something else?
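
    One common way to handle the framing part of this question, offered as a hedged sketch independent of any particular RSA library: open the file in binary mode ('wb') and length-prefix each ciphertext block, so block boundaries never depend on newline bytes that may also occur inside the ciphertext.

        import struct

        def write_blocks(path, blocks):
            # length-prefix each ciphertext block with a 4-byte big-endian size
            with open(path, "wb") as f:
                for block in blocks:
                    f.write(struct.pack(">I", len(block)))
                    f.write(block)

        def read_blocks(path):
            blocks = []
            with open(path, "rb") as f:
                while True:
                    header = f.read(4)
                    if not header:
                        break
                    (length,) = struct.unpack(">I", header)
                    blocks.append(f.read(length))
            return blocks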

    Read the article

  • Rails - Paperclip, getting width and height of image in model

    - by Corey Tenold
    Trying to get the width and height of the uploaded image while still in the model, on the initial save. Any way to do this? Here's the snippet of code I've been testing with from my model. Of course it fails on "instance.photo_width".

      has_attached_file :photo,
        :styles => {
          :original => "634x471>",
          :thumb => Proc.new { |instance|
            ratio = instance.photo_width / instance.photo_height
            min_width  = 142
            min_height = 119
            if ratio > 1
              final_height = min_height
              final_width  = final_height * ratio
            else
              final_width  = min_width
              final_height = final_width * ratio
            end
            "#{final_width}x#{final_height}"
          }
        },
        :storage => :s3,
        :s3_credentials => "#{RAILS_ROOT}/config/s3.yml",
        :path => ":attachment/:id/:style.:extension",
        :bucket => 'foo_bucket'

    So I'm basically trying to do this to get a custom thumbnail width and height based on the initial image dimensions. Any ideas?

    Read the article

  • Calculating percentiles in Excel with "buckets" data instead of the data list itself

    - by G B
    I have a bunch of data in Excel that I need to get certain percentile information from. The problem is that instead of having the data set made up of each individual value, I have counts of each value ("bucket" data). For example, imagine that my actual data set looks like this:

      1,1,2,2,2,2,3,3,4,4,4

    The data set that I have is this:

      Value   No. of occurrences
      1       2
      2       4
      3       2
      4       3

    Is there an easy way for me to calculate percentile information (as well as the median) without having to explode the summary data out to the full data set? (Once I did that, I know that I could just use the Percentile(A1:A5, p) function.) This is important because my data set is very large: if I exploded the data out, I would have hundreds of thousands of rows, and I would have to do it for a couple of hundred data sets. Help!
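
    Outside Excel, the same calculation can be done directly from the (value, count) pairs without expanding them. Here is a hedged Python sketch using the nearest-rank definition of a percentile, which can differ slightly from Excel's PERCENTILE interpolation:

        import math

        def percentile_from_buckets(buckets, p):
            """buckets: iterable of (value, count) pairs; p in [0, 1]."""
            total = sum(count for _, count in buckets)
            rank = max(1, math.ceil(p * total))          # nearest-rank position
            seen = 0
            for value, count in sorted(buckets):
                seen += count
                if seen >= rank:
                    return value
            raise ValueError("empty bucket data")

        data = [(1, 2), (2, 4), (3, 2), (4, 3)]          # the question's example
        print(percentile_from_buckets(data, 0.5))        # median -> 2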

    Read the article

  • ASP.NET Web API crash MSVCR90.dll

    - by user858931
    I have a Web API app developed in VS2010. The application calls an external program, and it runs just fine when I execute it from VS2010. But when I deploy the Web API to IIS 7 and 7.5, in some cases it crashes. Below are the details I got from Event Viewer/Application:

      Fault bucket , type 0
      Event Name: BEX
      Response: Not available
      Cab Id: 0
      Problem signature:
      P1: TestProgram.exe
      P2: 1.0.4728.17141
      P3: 50c76dea
      P4: MSVCR90.dll
      P5: 9.0.30729.6161
      P6: 4dace5b9
      P7: 0003024a
      P8: c0000417
      P9: 00000000
      P10:
      Attached files:
      These files may be available here:
      C:\ProgramData\Microsoft\Windows\WER\ReportQueue\AppCrash_InferenceGenerat_95abb43ace91480da6b8f27f9937db667bc58f_7bb1549d
      Analysis symbol:
      Rechecking for solution: 0
      Report Id: da8f304e-44c0-11e2-b4e8-0026b97a5242
      Report Status: 0

    Any idea why this happens and how to fix it? Thanks.

    Read the article

  • matrix = *((fxMatrix*)&d3dMatrix); //Evil?

    - by Xilliah
    I've been using matrix = *((fxMatrix*)&d3dMatrix); for quite a while. It worked fine until my screen turned black and I received a bucket of frustration on my desk. fxMatrix contains 4 fxVectors. fxVector used to be 16 bytes, but now it is suddenly 20. This is because it inherited fxStreamable, which added the vtable pointer. So one solution is of course just to not inherit fxStreamable, and leave a comment saying that it must always be 16 bytes and never more. Another solution would be to write conversion functions and copy the matrix completely. This is safer, but has an impact on performance. I suppose this is the best idea. Another solution is to not convert at all and stick to D3DXMATRIX, but this makes the engine inconsistent, and I personally really dislike this idea. What is your opinion?

    Read the article

  • Mystery in Ruby Sinatra

    - by JVK
    I have the following Sinatra code:

      post '/bucket' do
        # determine if this call is coming from filling out a web form
        is_html = request.content_type.to_s.downcase.eql?('application/x-www-form-urlencoded')
        # If this is a curl call, then get the params differently
        unless is_html
          params = JSON.parse(request.env["rack.input"].read)
        end
        p params[:name]
      end

    If I call this using curl, params has values, but when this is called via a web form, params is nil and params[:name] has nothing. I spent several hours figuring out why this happens and asked other people for help, but no one could really work out what is going on. One thing to note: if I comment out the line params = JSON.parse(request.env["rack.input"].read), then params has the correct value for web-form posting. The goal is to get the params value when this code is called by a curl call, so I used params = JSON.parse(request.env["rack.input"].read), but it messed up the web-form posting. Can anyone solve this mystery?

    Read the article

  • Can hash tables really be O(1)

    - by drawnonward
    It seems to be common knowledge that hash tables can achieve O(1) but that has never made sense to me. Can someone please explain it? A. The value is an int smaller than the size of the hash table, so the value is its own hash, so there is no hash table but if there was it would be O(1) and still be inefficient. B. You have to calculate the hash, so the order is O(n) for the size of the data being looked up. The lookup might be O(1) after you do O(n) work, but that still comes out to O(n) in my eyes. And unless you have a perfect hash or a large hash table there are probably several items per bucket so it devolves into a small linear search at some point anyway. I think hash tables are awesome, but I do not get the O(1) designation unless it is just supposed to be theoretical.
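
    As a hedged side note on the "several items per bucket" point: the usual claim is O(1) expected time per lookup when the load factor (items per bucket on average) is held constant by resizing, not that every lookup touches exactly one element. A small Python sketch, with all parameters made up, showing that the average chain length stays roughly flat as n grows when the table is sized proportionally:

        import random

        def average_chain_length(n, load_factor=0.75):
            # size the table so n / num_buckets stays at the load factor
            num_buckets = max(1, int(n / load_factor))
            buckets = [[] for _ in range(num_buckets)]
            for key in random.sample(range(10 * n), n):
                buckets[hash(key) % num_buckets].append(key)
            occupied = [len(b) for b in buckets if b]
            return sum(occupied) / len(occupied)

        for n in (1_000, 10_000, 100_000):
            print(n, round(average_chain_length(n), 2))   # stays around 1.4 for every n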

    Read the article

  • Sort vector<int>(n) in O(n) time using O(m) space?

    - by Adam
    I have a vector<unsigned int> vec of size n. Each element in vec is in the range [0, m], no duplicates, and I want to sort vec. Is it possible to do better than O(n log n) time if you're allowed to use O(m) space? In the average case m is much larger than n, in the worst case m == n. Ideally I want something O(n). I get the feeling that there's a bucket-sort-ish way to do this:
    1. unsigned int aux[m];
    2. aux[vec[i]] = i;
    3. Somehow extract the permutation and permute vec.
    I'm stuck on how to do 3. In my application m is on the order of 16k. However, this sort is in the inner loops and accounts for a significant portion of my runtime.
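
    One hedged sketch of the bucket idea (shown in Python for brevity rather than C++): because the values are unique and bounded by m, a plain presence array already yields the sorted order, with no separate permutation step. It runs in O(n + m) time and O(m) space, and with m around 16k the array is small enough to keep around and reuse between calls.

        def sort_unique_bounded(vec, m):
            # presence "bucket" array of size m + 1; works because the values are
            # unique and drawn from [0, m], so marking then scanning sorts in O(n + m)
            present = [False] * (m + 1)
            for v in vec:
                present[v] = True
            return [value for value in range(m + 1) if present[value]]

        print(sort_unique_bounded([5, 2, 9, 0, 7], 9))   # [0, 2, 5, 7, 9]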

    Read the article

  • SqlBuildTask failed due to ArgumentNullException(searchingPaths)

    At one of my customers, they have set up TFS 2010. They are using the UpgradeTemplate.xaml to build all their solutions, including GDR2 database projects. When building the project, I got the following error message:

      DspBuild:
        Creating a model to represent the project...
      C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018: The "SqlBuildTask" task failed unexpectedly. [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj]
      C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018: System.ArgumentNullException: Value cannot be null. [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj]
      C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018: Parameter name: searchingPaths [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj]
      C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018:    at Microsoft.Data.Schema.Extensibility.ExtensionAssemblyResolver..ctor(List`1 searchingPaths) [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj]
      C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018:    at Microsoft.Data.Schema.Extensibility.ExtensionTypeLoader.LoadTypes() [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj]
      C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018:    at Microsoft.Data.Schema.Extensibility.ExtensionManager..ctor(String databaseSchemaProviderType) [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj]
      C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018:    at Microsoft.Data.Schema.Tasks.TaskHostLoader.LoadImpl(ITaskHost providedHost, TaskLoggingHelper providedLogger) [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj]
      C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018:    at Microsoft.Data.Schema.Tasks.TaskHostLoader.Load(ITaskHost providedHost, TaskLoggingHelper providedLogger) [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj]
      C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018:    at Microsoft.Data.Schema.Tasks.DBBuildTask.Execute() [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj]
      C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018:    at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute() [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj]
      C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(58,5): error MSB4018:    at Microsoft.Build.BackEnd.TaskBuilder.ExecuteInstantiatedTask(ITaskExecutionHost taskExecutionHost, TaskLoggingContext taskLoggingContext, TaskHost taskHost, ItemBucket bucket, TaskExecutionMode howToExecuteTask, Boolean& taskResult) [C:\Builds\9\62\Sources\MyDb\MyDb.dbproj]

    Solution: To solve this error, set the MSBuild Platform in the Build Definition to X86.

    Read the article

  • Fix: SqlDeploy Task Fails with NullReferenceException at ExtractPassword

    Still working on getting a TeamCity build working (see my last post). The latest exception is:

      C:\Program Files\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(120, 5): error MSB4018: The "SqlDeployTask" task failed unexpectedly. System.NullReferenceException: Object reference not set to an instance of an object.
      at Microsoft.Data.Schema.Common.ConnectionStringPersistence.ExtractPassword(String partialConnection, String dbProvider)
      at Microsoft.Data.Schema.Common.ConnectionStringPersistence.RetrieveFullConnection(String partialConnection, String provider, Boolean presentUI, String password)
      at Microsoft.Data.Schema.Sql.Build.SqlDeployment.ConfigureConnectionString(String connectionString, String databaseName)
      at Microsoft.Data.Schema.Sql.Build.SqlDeployment.OnBuildConnectionString(String partialConnectionString, String databaseName)
      at Microsoft.Data.Schema.Build.Deployment.FinishInitialize(String targetConnectionString)
      at Microsoft.Data.Schema.Build.Deployment.Initialize(FileInfo sourceDbSchemaFile, ErrorManager errors, String targetConnectionString)
      at Microsoft.Data.Schema.Build.DeploymentConstructor.ConstructServiceImplementation()
      at Microsoft.Data.Schema.Extensibility.ServiceConstructor`1.ConstructService()
      at Microsoft.Data.Schema.Tasks.DBDeployTask.Execute()
      at Microsoft.Build.BuildEngine.TaskEngine.ExecuteInstantiatedTask(EngineProxy engineProxy, ItemBucket bucket, TaskExecutionMode howToExecuteTask, ITask task, Boolean& taskResult)

    This time searching yielded some good stuff, including this thread that talks about how to resolve this via permissions. The short answer is that the account your build server runs under needs to have the necessary permissions in SQL Server. You'll need to create a Login and then ensure at least the minimum rights are configured as described here: Required Permissions in Database Edition. Alternatively, you can just make your build server account an admin on the database (which is probably running on the same machine anyway), and at that point it should be able to do whatever it needs to. If you're certain the account has the necessary permissions but you're still getting the error, the problem may be that the account has never logged into the build server. In this case, there won't be any entry in the HKCU hive in the registry, which the system is checking for permissions (see this thread). The solution in this case is quite simple: log into the machine (once is enough) with the build server account. Then, open Visual Studio (thanks Brendan for the answer in this thread). Summary:
    1. Make sure the build service account has the necessary database permissions.
    2. Make sure the account has logged into the server so it has the necessary registry hive info.
    3. Make sure the account has run Visual Studio at least once so its settings are established.
    In my case I went through all 3 of these steps before I resolved the problem.

    Read the article

  • Broken package after update: linux-headers, error brokencount >0

    - by escozul
    Ubuntu 12.04. After an update, I get a red warning icon in the system tray, warning about an error: brokencount > 0. Opening Update Manager, I see that the broken package is linux-headers-3.2.0-33-generic-pae (new install). Specifically, I have my Ubuntu on an Aspire One with 8 GB of internal storage. I tried apt-get clean as suggested in another question on this site, and tried reinstalling the package in Synaptic. I have tried to reboot, but to no avail. I have also tried apt-get install --fix-broken and I get the following:

      sudo apt-get install --fix-broken
      [sudo] password for elina:
      Reading package lists... Done
      Building dependency tree
      Reading state information... Done
      Correcting dependencies... Done
      The following extra packages will be installed:
        linux-headers-3.2.0-33-generic-pae
      The following NEW packages will be installed:
        linux-headers-3.2.0-33-generic-pae
      0 upgraded, 1 newly installed, 0 to remove and 38 not upgraded.
      1 not fully installed or removed.
      Need to get 0 B/977 kB of archives.
      After this operation 11,3 MB of additional disk space will be used.
      Do you want to continue [Y/n]; y
      (Reading database ... 437051 files and directories currently installed.)
      Unpacking linux-headers-3.2.0-33-generic-pae (from .../linux-headers-3.2.0-33-generic-pae_3.2.0-33.52_i386.deb) ...
      dpkg: error processing /var/cache/apt/archives/linux-headers-3.2.0-33-generic-pae_3.2.0-33.52_i386.deb (--unpack):
       unable to create `/usr/src/linux-headers-3.2.0-33-generic-pae/include/config/usb/gspca/sonixb.h.dpkg-new' (while processing `./usr/src/linux-headers-3.2.0-33-generic-pae/include/config/usb/gspca/sonixb.h'): No space left on device
      No apport report written because the error message indicates a disk full error
      dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
      Errors were encountered while processing:
       /var/cache/apt/archives/linux-headers-3.2.0-33-generic-pae_3.2.0-33.52_i386.deb
      E: Sub-process /usr/bin/dpkg returned an error code (1)

    I've tried all the suggestions I could find:

      sudo apt-get clean
      sudo apt-get autoclean
      sudo apt-get autoremove
      sudo apt-get update
      sudo apt-get upgrade
      sudo apt-get -f install
      sudo apt-get install --fix-broken

    Then I saw that the error mentioned free space, so I did a df -h and the result was:

      Filesystem  Size  Used  Avail  Use%  Mounted on
      /dev/sda1   7,0G  5,5G  1,1G   84%   /
      udev        235M  4,0K  235M    1%   /dev
      tmpfs        97M  816K   96M    1%   /run
      none        5,0M     0  5,0M    0%   /run/lock
      none        242M  352K  242M    1%   /run/shm

    I see that on my root partition I have 1.1 GB free. The broken package is linux-headers-3.2.0-33-generic-pae_3.2.0-33.52_i386.deb, which only takes up 11.3 MB on my hard drive. I'm soooo lost. I really hope there is something I'm missing here. I don't want to go about reformatting this bucket. It's really not worth the time. Any help for fixing this would be hot.

    Read the article
