Search Results

Search found 11618 results on 465 pages for 'shared storage'.


  • How to prevent parallel builds per build configuration across multiple Build Agents

    - by vanslly
    I have many build configurations in TeamCity, each servicing a large project. In the past, if a build was kicked off, the Build Agent could be busy for up to 20 minutes! In order to improve throughput I installed a second Build Agent on the same machine, such that if a build run is kicked off by, say, Build Agent 1 and it is busy for 20 minutes and someone from another project makes a change, then Build Agent 2 can do the build for the other project without needing to wait on the current build run to finish. All was well until two successive check-ins resulted in both Build Agents running a build for a single build configuration in parallel. Since some resources are shared (IIS directories & databases), I don't want a single build configuration to run on both Build Agents in parallel. How can I ensure a build isn't triggered if a build is currently running for that build configuration on a different build agent? One way seems to involve environment variables and ensuring a 50/50 split by Build Agent in terms of build configuration compatibility, but that seems a little clunky.

    Read the article

  • categorize a set of phrases into a set of similar phrases

    - by Dingo
    I have a few apps that generate textual tracing information (logs) to log files. The tracing information is the typical printf() style - i.e. there are a lot of log entries that are similar (same format argument to printf), but differ where the format string had parameters. What would be an algorithm (url, books, articles, ...) that will allow me to analyze the log entries and categorize them into several bins/containers, where each bin has one associated format? Essentially, what I would like is to transform the raw log entries into (formatA, arg0 ... argN) instances, where formatA is shared among many log entries. The formatA does not have to be the exact format used to generate the entry (even more so if this makes the algo simpler). Most of the literature and web-info I found deals with exact matching, maximal substring matching, or k-difference matching (with k known/fixed ahead of time). Also, it focuses on matching a pair of (long) strings, or a single bin output (one match among all input). My case is somewhat different, since I have to discover what represents a (good-enough) match (generally a sequence of discontinuous strings), and then categorize each input entry into one of the discovered matches. Lastly, I'm not looking for a perfect algorithm, but something simple/easy to maintain. Thanks!
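
    One simple-to-maintain approach in the spirit of the question is to normalize each entry (mask out numbers, hex values and quoted strings) and bucket entries by the resulting signature. A rough C++ sketch of that idea - the masking rules and sample entries are illustrative assumptions, not taken from the question:

        #include <iostream>
        #include <regex>
        #include <string>
        #include <unordered_map>
        #include <vector>

        // Replace the variable-looking parts of a log entry with placeholders so
        // that entries produced by the same printf() format collapse to one key.
        std::string signatureOf(const std::string& entry) {
            static const std::regex number(R"(\b0x[0-9a-fA-F]+\b|\b\d+\b)");
            static const std::regex quoted(R"("[^"]*"|'[^']*')");
            std::string s = std::regex_replace(entry, quoted, "<str>");
            return std::regex_replace(s, number, "<num>");
        }

        int main() {
            std::vector<std::string> entries = {
                "connected to 10.0.0.1 port 8080",
                "connected to 10.0.0.7 port 9090",
                "user 'bob' logged out",
            };
            std::unordered_map<std::string, std::vector<std::string>> bins;
            for (const auto& e : entries)
                bins[signatureOf(e)].push_back(e);   // one bin per discovered format
            for (const auto& [sig, members] : bins)
                std::cout << sig << "  (" << members.size() << " entries)\n";
        }

    The signature does not have to match the original format string exactly, which fits the "good-enough match" requirement; refining the masking rules is where most of the tuning would go.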

    Read the article

  • TFS Solution build cascading to several other builds even when common components were not modified

    - by Bob Palmer
    Hey all, here is the issue I am currently trying to work through. We are using Team Foundation Server 2008, and utilizing the automated build support out of the box. We have one very large project that encompasses a number of interrelated components and web sites, each of which is set up as a Visual Studio solution file. Many of these solutions are highly interrelated since they may contain applications, or contain common libraries or shared components. We have roughly 20 or so applications, three large web sites, and about 20 components. Each solution may include projects from other solutions. For example, a solution for a console app would also include the project files for all of the components it utilizes, since we need to ensure that when someone changes a component and rebuilds it, it is reflected in all of the projects that consume that component, and we can make sure nothing was broken. We have build projects for each solution, whether that's an application, component, or web site. For this example, we will call them solutions 01, 02, and 03. These reference multiple projects (both their own core project and test projects, plus the projects relating to various components). Solution 01 has projects A, B, and C. Solution 02 has projects C, D, and E. Solution 03 has projects E, F, and G. Now, for the problem. If I modify project A, the system will end up rebuilding all three solutions. Worse, all thirty solutions reference common projects used for data access (let's call it project H). Because they all share one project in common, if I modify any solution in my stack, even if it does not touch project H, I still end up kicking off every single build script. Any thoughts on how to address this? Ideally I would only want to kick off builds whose constituent projects were directly modified - i.e. in the example above, if I modified project C, I would only rebuild solutions 01 and 02. Thanks!

    Read the article

  • How to put large text data (~20MB) into a SQL CE 3.5 database?

    - by Anindya Chatterjee
    I am using the following query to insert some large text data:

        internal static string InsertStorageItem =
            "insert into Storage(FolderName, MessageId, MessageDate, StorageData) values ('{0}', '{1}', '{2}', @StorageData)";

    and the code I am using to execute this query is as follows:

        string content = "very very large data";
        string query = string.Format(InsertStorageItem, "Inbox", "AXOGTRR1445/DSDS587444WEE", "4/19/2010 11:11:03 AM");
        var command = new SqlCeCommand(query, _sqlConnection);
        var paramData = command.Parameters.Add("@StorageData", System.Data.SqlDbType.NText);
        paramData.Value = content;
        paramData.SourceColumn = "StorageData";
        command.ExecuteNonQuery();

    But at the last line I am getting the following error:

        System.Data.SqlServerCe.SqlCeException was unhandled by user code
        Message=The data was truncated while converting from one data type to another. [ Name of function(if known) = ]
        Source=SQL Server Compact ADO.NET Data Provider
        HResult=-2147467259
        NativeError=25920
        StackTrace:
             at System.Data.SqlServerCe.SqlCeCommand.ProcessResults(Int32 hr)
             at System.Data.SqlServerCe.SqlCeCommand.ExecuteCommandText(IntPtr& pCursor, Boolean& isBaseTableCursor)
             at System.Data.SqlServerCe.SqlCeCommand.ExecuteCommand(CommandBehavior behavior, String method, ResultSetOptions options)
             at System.Data.SqlServerCe.SqlCeCommand.ExecuteNonQuery()
             at Chithi.Client.Exchange.ExchangeClient.SaveItem(Item item, Folder parentFolder)
             at Chithi.Client.Exchange.ExchangeClient.DownloadNewMails(Folder folder)
             at Chithi.Client.Exchange.ExchangeClient.SynchronizeParentChildFolder(WellKnownFolder wellknownFolder, Folder parentFolder)
             at Chithi.Client.Exchange.ExchangeClient.SynchronizeFolders()
             at Chithi.Client.Exchange.ExchangeClient.WorkerThreadDoWork(Object sender, DoWorkEventArgs e)
             at System.ComponentModel.BackgroundWorker.OnDoWork(DoWorkEventArgs e)
             at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument)
        InnerException:

    Now my question is: how am I supposed to insert such large data into a SQL CE database? Regards, Anindya Chatterjee http://abstractclass.org

    Read the article

  • Hybrid EAV/CR model via WCF (and statically-typed language)?

    - by Pat
    Background: I'm working on the architecture for a cloud-based LOB application, using Silverlight for the client, WCF and ASP.NET/C# for the server, and SQL Server for storage. The data model requires some flexibility per user (ability to add custom properties and define validation rules for them, for example), and a hybrid EAV/CR persistence model on the server side will suit nicely.

    Problem: I need an efficient and maintainable technology and approach to handle the transformation from the persisted EAV model to/from WCF (and similarly allow the client to bind to the resulting data - DataGrid is a key UI element). Admission: I don't yet know enough about WCF to understand if it supports ExpandoObject directly, but I suspect it will.

    Options: I started off looking at WCF RIA services, but quickly discovered they're heavily dependent upon both static type data and compile-time code generation. Neither of these appeal. The options I'm considering include:

    1. Using WCF RIA services and pass the data over the network directly in EAV form (i.e. Dictionary), and handle the binding issue purely on the client side (like this).
    2. Using a dynamic language (probably IronPython) to handle both ends of the communication, with plumbing to generate the necessary CLR type data on the client to allow binding, and transform to/from EAV form on the server (spam preventer stopped me from posting a URL here, I'll try it in a comment).
    3. Dynamic LINQ (CreateClass() and friends), although I'm way out of my depth there and don't know what the limitations on that approach might be yet.

    I'm interested in comments on these approaches as well as alternative approaches that might solve the problem.

    Other notes: The Silverlight client will not be the only consumer of the service, making me slightly uncomfortable with option #1 above. While the data model is flexible, it's not expected to be modified heavily. For argument's sake, we could assume that we might have 25 distinct data models active at a given time, with something like 10-20 unique data fields/rules each. Modifications to the data model will happen infrequently (typically when a new user is initially configured).

    Read the article

  • error C3662: override specifier 'new' only allowed on member functions of managed classes

    - by William
    Okay, so I'm trying to override a function in a parent class, and getting some errors. Here's a test case:

        #include <iostream>
        using namespace std;

        class A{
        public:
            int aba;
            void printAba();
        };

        class B: public A{
        public:
            void printAba() new;
        };

        void A::printAba(){
            cout << "aba1" << endl;
        }

        void B::printAba() new{
            cout << "aba2" << endl;
        }

        int main(){
            A a = B();
            a.printAba();
            return 0;
        }

    And here are the errors I'm getting:

        Error 1 error C3662: 'B::printAba' : override specifier 'new' only allowed on member functions of managed classes c:\users\test\test\test.cpp 12 test
        Error 2 error C2723: 'B::printAba' : 'new' storage-class specifier illegal on function definition c:\users\test\test\test.cpp 19 test

    How the heck do I do this?
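
    For context: the `new` specifier on member functions is a C++/CLI feature, which is why the compiler complains about "managed classes". In standard C++ the base function is marked `virtual`, and the object has to be used through a pointer or reference (the line `A a = B();` slices the B part away, so it would print "aba1" even with a virtual function). One way the test case could be written:

        #include <iostream>
        using namespace std;

        class A {
        public:
            int aba;
            virtual void printAba() { cout << "aba1" << endl; }  // virtual enables overriding
            virtual ~A() {}
        };

        class B : public A {
        public:
            void printAba() { cout << "aba2" << endl; }          // overrides A::printAba
        };

        int main() {
            B b;
            A& a = b;        // use a reference (or pointer); "A a = B();" would slice
            a.printAba();    // prints "aba2"
            return 0;
        }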

    Read the article

  • Drawing only part of a texture with OpenGL ES on the iPhone

    - by Ben Reeves
    ..Continued on from my previous question. I have a 320*480 RGB565 framebuffer which I wish to draw using OpenGL ES 1.0 on the iPhone.

        - (void)setupView {
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_CROP_RECT_OES, (int[4]){0, 0, 480, 320});
            glEnable(GL_TEXTURE_2D);
        }

        // Updates the OpenGL view when the timer fires
        - (void)drawView {
            // Make sure that you are drawing to the current context
            [EAGLContext setCurrentContext:context];

            // Get the 320*480 buffer
            const int8_t * frameBuf = [source getNextBuffer];

            // Create enough storage for a 512x512 power-of-2 texture
            int8_t lBuf[2*512*512];
            memcpy (lBuf, frameBuf, 320*480*2);

            // Upload the texture
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 512, 512, 0, GL_RGB, GL_UNSIGNED_SHORT_5_6_5, lBuf);

            // Draw it
            glDrawTexiOES(0, 0, 1, 480, 320);
            [context presentRenderbuffer:GL_RENDERBUFFER_OES];
        }

    If I produce the original texture in 512*512 the output is cropped incorrectly but other than that looks fine. However, using the required output size of 320*480 everything is distorted and messed up. I'm pretty sure it's the way I'm copying the framebuffer into the new 512*512 buffer. I have tried this routine:

        int8_t lBuf[512][512][2];
        const char * frameDataP = frameData;
        for (int ii = 0; ii < 480; ++ii) {
            memcpy(lBuf[ii], frameDataP, 320);
            frameDataP += 320;
        }

    Which is better, but the width appears to be stretched and the height is messed up. Any help appreciated.
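
    One likely culprit in the row-copy routine is counting in pixels instead of bytes: RGB565 is two bytes per pixel, so each source row is 320*2 bytes and each destination row in the 512-texel buffer is 512*2 bytes. A sketch of a byte-correct copy, kept in plain C/C++ (the function name is invented, and the 320-wide/480-tall orientation is taken from the loop in the question, not verified against the crop rect):

        #include <cstdint>
        #include <cstring>

        // Copy a 320x480 RGB565 frame into the top-left corner of a 512x512
        // RGB565 texture buffer. All lengths and strides are in bytes.
        static void copyFrameToTexture(const int8_t* frameBuf, int8_t* texBuf) {
            const int srcW = 320, srcH = 480, texW = 512, bpp = 2;
            for (int row = 0; row < srcH; ++row) {
                memcpy(texBuf + row * texW * bpp,     // destination row, 1024-byte stride
                       frameBuf + row * srcW * bpp,   // source row, 640 bytes of pixels
                       srcW * bpp);
            }
        }

    The glTexImage2D upload of the 512x512 buffer can then stay as in the question, with the crop rectangle selecting the 320x480 (or 480x320) region that actually contains pixels.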

    Read the article

  • Help decoupling Crystal Report from CrystalReportViewer

    - by John at CashCommons
    I'm using Visual Studio 2005 with VB.NET. I have a number of Crystal Reports, each with their own associated dialog resource containing a CrystalReportViewer. The class definitions look like this:

        Imports System.Windows.Forms
        Imports CrystalDecisions.CrystalReports.Engine
        Imports CrystalDecisions.Shared

        Public Class dlgForMyReport

            Private theReport As New myCrystalReport
            Public theItems As New List(Of MyItem)

            Private Sub OK_Button_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles OK_Button.Click
                Me.DialogResult = System.Windows.Forms.DialogResult.OK
                Me.Close()
            End Sub

            Private Sub Cancel_Button_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Cancel_Button.Click
                Me.DialogResult = System.Windows.Forms.DialogResult.Cancel
                Me.Close()
            End Sub

            Private Sub dlgForMyReport_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
                theReport.SetDataSource(theItems)
                'Do a bunch of stuff here to set data items in theReport
                Me.myCrystalReportViewer.ReportSource = theReport
            End Sub

        End Class

    I basically instantiate the dialog, set theItems to the list I want, and call ShowDialog. I now have a need to combine several of these reports into one report (possibly like this) but the code that loads up the fields in the report is in the dialog. How would I go about decoupling the report initialization from the dialog? Thanks!

    Read the article

  • Windows Workflow Foundation: Recommendations how to design architecture

    - by Petr Felzmann
    We are running several of the same ASP.NET applications (one per customer) based on our custom framework (libraries). Each application uses its own database (Initial Catalog in terms of the connection string). Now we would like to add workflow capability (of course 4.0 ;) to the applications. The particular workflows will be the same for all the applications; only some initial settings of each workflow can vary, e.g. in one application the e-mail will be sent to user X, but in another application to user Y. I have several general questions about how to design the architecture:

    (1) Can the workflow database be shared by all the applications?

    (2) Where should the workflow engine be hosted - inside our custom Windows NT service or inside IIS? What are the criteria for choosing the right host?

    (3) How should the workflow engine communicate with the applications? Should each application call some WCF endpoint API configured in the workflow host, or vice versa - should each application provide a WCF endpoint API that the workflow engine calls? How then will the workflow engine identify applications? Both cases probably require some application identifier as a parameter in the API calls?

    (4) We would also like to store some information in the application databases based on the workflow states. Is that possible?

    Thanks for suggestions!

    Read the article

  • php user authentication libraries / frameworks ... what are the options?

    - by es11
    I am using PHP and the CodeIgniter framework for a project I am working on, and require a user login/authentication system. For now I'd rather not use SSL (it might be overkill, and the fact that I am using shared hosting discourages this). I have considered using OpenID but decided that since my target audience is generally not technical, it might scare users away (not to mention that it requires mirroring of login information etc.). I know that I could write hash-based authentication (such as sha1) since there is no sensitive data being passed (I'd compare the level of sensitivity to that of stackoverflow). That being said, before making a custom solution, it would be nice to know if there are any good libraries or packages out there that you have used to provide semi-secure authentication. I am new to CodeIgniter, but something that integrates well with it would be preferable. Any ideas? (I'm open to criticism on my approach and open to suggestions as to why I might be crazy not to just use SSL). Thanks in advance.

    Update: I've looked into some of the suggestions. I am curious to try out zend-auth since it seems well supported and well built. Does anyone have experience with using zend-auth in CodeIgniter (is it too bulky?), and do you have a good reference on integrating it with CI? I do not need any complex authentication schemes - just a simple login/logout/password-management authorization system. Also, dx_auth seems interesting, however I am worried that it is too buggy. Has anybody else had success with this? I realized that I would also like to manage guest users (i.e. users that do not login/register) in a similar way to stackoverflow, so any suggestions that cover this functionality would be great.

    Read the article

  • PHP CodeIgniter Framework - Thoughts on developing with it?

    - by Sootah
    I've been reviewing different frameworks to use for my next couple of major web applications, and after days of research am almost set on using CodeIgniter. The reason I'm leaning towards CI is that so far it looks to be the best suited for me. It doesn't require constant command-line access (I am currently using shared hosting; the projects do not warrant a dedicated server yet), nothing special has to be installed on the server running it (you just upload the framework to the root of whatever you're developing), and they appear to have some excellent documentation, videos, and tutorials on how to get started. Do any of you have experience with CodeIgniter? If so, what is your opinion of it and its features? What have you developed with it, and what types of applications is it best suited to create? I certainly don't want to get into a situation where I'm trying to bend a framework to do something that it isn't well-suited for. Both of my projects will be database-driven apps that will require user registration, the ability to manipulate data that is specific to their account (their posts, listings, user account details, etc), amongst other things. Also, if you have any other PHP framework suggestions, I am open to them. Thanks in advance for your help! -Sootah

    Read the article

  • C++: Avoiding lots of boolean variables for multiple verification conditions in a trading app

    - by Naveen
    Hi, I am a junior dev on a trading app... we have an order refresh verification unit. It has to verify order confirmations from the exchange. We send a bunch of different requests in bulk (NEW, MODIFY, CANCEL) to the exchange... Verification has to happen at most N times, at intervals of T, for all orders. If verification is successful for all the orders before N retries, fine; otherwise we need to mark verification as unsuccessful. I have done some basic coding, in a hurry, like below:

        for( N times )
        {
            for_each ( sent_request_order ) // SENT
            {
                1) get all the refreshed orders from DB or shared mem, i.e. REFRESHED
                2) find the current sent order in REFRESHED
                if( not_found ) not refreshed from exchange, continue to next order
                if( found )
                    case NEW    : // check for new status, mark verification done
                    case MODIFY : // check for modified status..
                                  // if not, mark pending, go to next order,
                                  // revisit the same after T time
                    case CANCEL : // check for cancelled status..
                                  // if not, mark pending, go to next order,
                                  // revisit the same after T time
            }
            if( all_verified ) exit from verification.
            wait ( T sec )
        }

    order_verification_pending, order_verification_done, order_visited, order_not_visited, all_verified, all_not_verified ... a lot of boolean flags are used for indication. Is there a better approach for doing this... splitting responsibilities across classes? I know this is not a general question... but all these flags are making this tedious to handle...
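
    One common way to shed the parallel booleans is to give each sent order a single small state and drive the loop off a count of still-pending orders. A rough C++ sketch of that shape - the type and function names are invented, and refreshedStatusMatches() is a stub standing in for the lookup against the refreshed orders:

        #include <chrono>
        #include <thread>
        #include <vector>

        enum class VerifyState { Pending, Verified, Failed };

        struct SentOrder {
            int  id;
            char action;                       // 'N' new, 'M' modify, 'C' cancel
            VerifyState state = VerifyState::Pending;
        };

        // Stub: replace with a lookup against the refreshed orders (DB/shared mem)
        // that checks the status expected for this order's action.
        bool refreshedStatusMatches(const SentOrder&) { return true; }

        void verifyAll(std::vector<SentOrder>& sent, int maxRetries,
                       std::chrono::milliseconds interval) {
            for (int attempt = 0; attempt < maxRetries; ++attempt) {
                int pending = 0;
                for (auto& order : sent) {
                    if (order.state != VerifyState::Pending) continue;
                    if (refreshedStatusMatches(order))
                        order.state = VerifyState::Verified;
                    else
                        ++pending;             // revisit on the next pass
                }
                if (pending == 0) return;      // everything verified early
                std::this_thread::sleep_for(interval);
            }
            for (auto& order : sent)           // whatever is still pending has failed
                if (order.state == VerifyState::Pending)
                    order.state = VerifyState::Failed;
        }

    The per-order state replaces order_verification_pending/done/visited, and "all verified" is just the pending count hitting zero, so no separate flag is needed.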

    Read the article

  • Why do System.IO.Log SequenceNumbers have variable length?

    - by Doug McClean
    I'm trying to use the System.IO.Log features to build a recoverable transaction system. I understand it to be implemented on top of the Common Log File System. The usual ARIES approach to write-ahead logging involves persisting log record sequence numbers in places other than the log (for example, in the header of the database page modified by the logged action). Interestingly, the documentation for CLFS says that such sequence numbers are always 64-bit integers. Confusingly, however, the .Net wrapper around those SequenceNumbers can be constructed from a byte[] but not from a UInt64. Its value can also be read as a byte[], but not as a UInt64. Inspecting the implementation of SequenceNumber.GetBytes() reveals that it can in fact return arrays of either 8 or 16 bytes. This raises a few questions:

    1. Why do the .Net sequence numbers differ in size from the CLFS sequence numbers?

    2. Why are the .Net sequence numbers variable in length?

    3. Why would you need 128 bits to represent such a sequence number? It seems like you would truncate the log well before using up a 64-bit address space (16 exbibytes, or around 10^19 bytes, more if you address longer words).

    4. If log sequence numbers are going to be represented as 128-bit integers, why not provide a way to serialize/deserialize them as pairs of UInt64s, instead of rather pointlessly incurring heap allocations for short-lived new byte[]s every time you need to write/read one? Alternatively, why bother making SequenceNumber a value type at all?

    It seems an odd tradeoff to double the storage overhead of log sequence numbers just so you can have an untruncated log longer than a million terabytes, so I feel like I'm missing something here, or maybe several things. I'd much appreciate it if someone in the know could set me straight.

    Read the article

  • Need near-100%-uptime, third-party, web-accessible hosting for static web resources

    - by Jared Henderson
    I hope this makes sense: my business sells a website template, and we currently have about 10,000 users. For various reasons that are unimportant to this question, I try to keep the file size of the zipped template we give them as small as possible. Because of this, I have taken a bunch of images and a couple of static files used by the template and moved them to external hosting. They are referenced by absolute URL in the css and markup, instead of shipping all of those images and files with every template. So, basically 10,000+ and growing users are requesting images and files from a third-party host. I don't use my own webhosting for this because I still kind of use a medium-cheap shared hosting for my website, and if it goes down, 10,000+ users are potentially affected. Currently I'm having the template directly access files inside of an open-source google-code project that I created for just this purpose. But that seems like a bastardization of what a google-code repository is for, and plus, google code (I've found out) often spews 502 bad gateway errors for hours at a time. So, anyway, my question is: where is the right kind of place to host these? Obviously I'm willing to pay. My main needs are speed and uptime, since the images and files are being requested from thousands of different websites every day. Is this something that I should use Amazon S3 for? I'm guessing there's some kind of service exactly for this kind of need, but I'm at a loss to figure out what it is.

    Read the article

  • How to prevent the LINQ-to-SQL designer from undoing my changes

    - by anonim.developer
    Dear all, thanks for your attention in advance. I've hit an issue with the LINQ-to-SQL designer in VS 2008 SP1 which is driving me crazy. I use LINQ to SQL as my DAL. It seems LINQ to SQL speeds up coding at first, but lots of issues arise later, specifically with table or object inheritance. In this case I have a class Entity that all other entity classes generated by the LINQ-to-SQL designer inherit from:

        public abstract class Entity
        {
            public virtual Guid ID { get; protected set; }
        }

        public partial class User : monius.Data.Entity
        {
        }

    And the following is generated by the L2S designer (DataModel.designer.cs):

        [Column(Storage = "_ID", AutoSync = AutoSync.OnInsert, DbType = "UniqueIdentifier NOT NULL",
                IsPrimaryKey = true, IsDbGenerated = true, UpdateCheck = UpdateCheck.Never)]
        [DataMember(Order = 1)]
        public System.Guid ID
        {
            get
            {
                return this._ID;
            }
            set
            {
                if ((this._ID != value))
                {
                    this.OnIDChanging(value);
                    this.SendPropertyChanging();
                    this._ID = value;
                    this.SendPropertyChanged("ID");
                    this.OnIDChanged();
                }
            }
        }

    When I compile the code VS warns me that:

        Warning 1 'User.ID' hides inherited member 'Entity.ID'. To make the current member override that
        implementation, add the override keyword. Otherwise add the new keyword.

    That warning is obvious, and I have to change the code generated by the L2S designer (DataModel.designer.cs) to

        [...]
        public override System.Guid ID { ... protected set ... }

    and then the code compiles with no error or warning and everyone is happy. But that is not the end of the story. As soon as I make changes to entities in the diagram (dbml), or even just open the dbml file to view it, any change I made manually to the designer file vanishes and POOF - I have to redo it again. That is a painful job. Now I wonder if there is a way to force the L2S designer not to change portions of the auto-generated code. I'll appreciate it if someone kindly helps me with this issue.

    Read the article

  • Port Win32 DLL hook to Linux

    - by peachykeen
    I have a program (NWShader) which hooks into a second program's OpenGL calls (NWN) to do post-processing effects and whatnot. NWShader was originally built for Windows, generally modern versions (win32), and uses both DLL exports (to get Windows to load it and grab some OpenGL functions) and Detours (to hook into other functions). I'm using the trick where Windows will look in the current directory for any DLLs before checking the system dir, so it loads mine. I have one DLL that redirects with this method:

        #pragma comment(linker, "/export:oldFunc=nwshader.newFunc")

    to send calls to a differently named function in my own DLL. I then do any processing and call the original function from the system DLL. I need to port NWShader to Linux (NWN exists in both flavors). As far as I can tell, what I need to make is a shared library (.so file). If this is preloaded before the NWN executable (I found a shell script to handle this), my functions will be called. The only problem is I need to call the original function (I would use various DLL dynamic loading methods for this, I think) and need to be able to do Detour-like hooking of internal functions. At the moment I'm building on Ubuntu 9.10 x64 (with the 32-bit compiler flags). I haven't been able to find much on Google to help with this, but I don't know exactly what the *nix community refers to it as. I can code C++, but I'm more used to Windows. Being OpenGL, the only part that needs to be modified to be compatible with Linux is the hooking code and the calls. Is there a simple and easy way to do this, or will it involve recreating Detours and dynamically loading the original function addresses?
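
    For the exported-function side, the usual Linux equivalent of the DLL-export trick is an LD_PRELOAD interposer: the .so defines functions with the same names as the libGL calls to intercept, and fetches the real implementation with dlsym(RTLD_NEXT, ...). A minimal sketch (glDrawArrays is just an example symbol; build and run lines are assumptions):

        // Build:  g++ -m32 -shared -fPIC -o nwshader.so hook.cpp -ldl
        // Run:    LD_PRELOAD=./nwshader.so ./nwmain
        #include <dlfcn.h>
        #include <GL/gl.h>

        // extern "C" keeps the symbol unmangled so the dynamic linker
        // resolves the game's glDrawArrays calls to this definition.
        extern "C" void glDrawArrays(GLenum mode, GLint first, GLsizei count) {
            using Fn = void (*)(GLenum, GLint, GLsizei);
            // RTLD_NEXT = "the next library that defines this symbol", i.e. real libGL.
            static Fn real = (Fn)dlsym(RTLD_NEXT, "glDrawArrays");

            // ... pre/post-processing goes here ...

            real(mode, first, count);   // forward to the original implementation
        }

    Note this only intercepts calls that go through the dynamic linker to a shared library; hooking internal, non-exported functions still needs a Detours-style trampoline (patching the function prologue), which this sketch does not cover.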

    Read the article

  • XML: what processing rules apply for values intertwined with tags?

    - by iCE-9
    I've started working on a simple XML pull-parser, and as I've just defuzzed my mind on what's correct syntax in XML with regards to certain characters/sequences, ignorable whitespace and such (thank you, http://www.w3schools.com/xml/xml_elements.asp), I realized that I still don't know squat about what can be sketched up as the following case (which Validome finds well-formed; note that I only want to use xml files for data storage, no entities, DTD or Schemas needed):

        <bookstore>
            <book id="1">
                <author>Kurt Vonnegut Jr.</author>
                <title>Slapstick</title>
            </book>
            We drop a pie here.
            <book id="2">Who cares anyway?
                <author>Stephen King</author>
                <title>The Green Mile</title>
            </book>
            And another one here.
            <book id="3">
                <author>Next one</author>
                <title>This time with its own title</title>
            </book>
        </bookstore>

    "We drop a pie here." and "And another one here." are values of the 'bookstore' element. "Who cares anyway?" is a value related to the second 'book' element. How are these processed, if at all? Will "We drop a pie here." and "And another one here." be concatenated to form one value for the 'bookstore' element, or are they treated separately, stored somewhere, affecting the outcome of the parsing of the element they belong to, or...?

    Read the article

  • ASP.NET application - Error when trying to connect to a SQL Server 2008 instance

    - by Pablo Dami
    Hi everyone! Despite being a regular reader of this great forum, this is my first post on it. I believe that this community can help me with the following problem. I'm trying to publish an ASP.NET website on IIS 6.0 (Windows 2003 Server), and I'm having trouble connecting to the database. Curiously, I have installed another ASP.NET website on the same IIS 6.0 with the same properties and security parameters, and it can connect without problems to the same database. The application that works fine is almost the same as the one that can't connect to SQL Server (actually it is the same, but with several modifications). Some information related to the problem:

    OS: Windows 2003 Server
    SQL Server engine: SQL Server 2008
    Does SQL Server accept remote connections? Yes.
    ASP.NET version: 2.0.50727
    Are TCP/IP connections enabled for the SQL Server instance? Yes.
    Does the user in the connection string exist in the database with the "owner" role? Yes.
    ORM tool used: NHibernate

    I get the following error when I try to run the application in the browser:

        Error while establishing a connection to the server. When connecting to SQL Server 2005, this failure may occur because the default settings SQL Server does not allow remote connections. (provider: Shared Memory Provider, error: 40 - Could not open a connection to SQL Server)

    In order to isolate the problem, I ran some tests. For example, using the web app that works fine I can connect without any problem to the database that the failing web app uses. From this I concluded that the problem is within the web app and not in the SQL Server instance. I also googled my problem, but sadly couldn't find anything useful to solve it. If someone can help me I'll appreciate it. Thank you so much for your time!

    Read the article

  • ASP.NET Memory Usage in IIS is FAR greater than in DevEnv. Is this normal?

    - by Tom
    Greetings! I have an ASP.NET app that scrapes data from a handful of external pages, parses the relevant bits and displays them in a table. Total data retrieved is 3-4MB and the resulting page is about 1MB. I am using synchronous WebRequest GetResponse for the retrieval, but the same problem existed using an asynchronous BeginGetResponse/EndGetResponse process. There is no database access, no session storage, no caching, but an in-memory list of about 100 objects (total 1MB of data), plus a good amount of AJAX (AjaxControlToolkit). This issue appears on the very first run of the app, even if I have restarted IIS. The issue: When I run the app on my dev computer, the maximum commit charge is about 1.5GB. The biggest user, measured by Task Manager's VM Size, is WebDev.WebServer.exe (600MB). The app runs perfectly. When I run it on my rent-a-server (IIS 7.5, 1GB RAM), the maximum commit charge is over 3.8GB. The biggest user is w3wp.exe at 2.7GB. IIS grinds to a halt and spits out a timed-out error page. Given my limited server budget and the hope of having multiple simultaneous users, I'm kind of in a panic. Is this normal? If I bump the server RAM up to 4GB, will that be enough? Will multiple users require even more memory? Could the culprit be AJAX or the list of objects? Thanks for any insight you can provide.

    Read the article

  • Amazon access key showing in URL for Carrierwave and Fog

    - by kcurtin
    I just switched from storing my images uploaded via Carrierwave locally to using Amazon S3 via the fog gem in my Rails 3.1 app. While images are being added, when I click on an image in my application, the URL is providing my access key and a signature. Here is a sample URL (XXX replaces the real strings):

        https://s3.amazonaws.com/bucketname/uploads/photo/image/2/IMG_4842.jpg?AWSAccessKeyId=XXX&Signature=XXX%3D&Expires=1332093418

    This is happening in development (localhost:3000) and when I am using heroku for production. Here is my uploader:

        class ImageUploader < CarrierWave::Uploader::Base
          include CarrierWave::RMagick

          storage :fog

          def store_dir
            "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
          end

          process :convert => :jpg
          process :resize_to_limit => [640, 640]

          version :thumb do
            process :convert => :jpg
            process :resize_to_fill => [280, 205]
          end

          version :avatar do
            process :convert => :jpg
            process :resize_to_fill => [120, 120]
          end
        end

    And my config/initializers/fog.rb:

        CarrierWave.configure do |config|
          config.fog_credentials = {
            :provider              => 'AWS',
            :aws_access_key_id     => 'XXX',
            :aws_secret_access_key => 'XXX',
          }
          config.fog_directory = 'bucketname'
          config.fog_public = false
        end

    Anyone know how to make sure this information isn't available?

    Read the article

  • What is the best database structure for this scenario?

    - by Ricketts
    I have a database that is holding real estate MLS (Multiple Listing Service) data. Currently, I have a single table that holds all the listing attributes (price, address, sqft, etc.). There are several different property types (residential, commercial, rental, income, land, etc.) and each property type share a majority of the attributes, but there are a few that are unique to that property type. My question is the shared attributes are in excess of 250 fields and this seems like too many fields to have in a single table. My thought is I could break them out into an EAV (Entity-Attribute-Value) format, but I've read many bad things about that and it would make running queries a real pain as any of the 250 fields could be searched on. If I were to go that route, I'd literally have to pull all the data out of the EAV table, grouped by listing id, merge it on the application side, then run my query against the in memory object collection. This also does not seem very efficient. I am looking for some ideas or recommendations on which way to proceed. Perhaps the 250+ field table is the only way to proceed. Just as a note, I'm using SQL Server 2012, .NET 4.5 w/ Entity Framework 5, C# and data is passed to asp.net web application via WCF service. Thanks in advance.

    Read the article

  • How would you go about parsing markdown?

    - by John Leidegren
    You can find the syntax here. The thing is, the source that comes with the download is written in Perl, which I have no intention of honoring. It is riddled with regexes and it relies on MD5 hashes to escape certain characters. Something is just wrong about that! I'm about to hard-code a parser for markdown and I'm wondering if someone has experience with this?

    Edit: If you don't have anything meaningful to say about the actual parsing of markdown, spare me the time. (This might sound harsh, but yes, I'm looking for insight, not a solution, i.e. a third-party library.) To help a bit with the answers: regexes are meant to identify patterns, NOT to parse an entire grammar. That people consider doing so is foobar. If you think about markdown, it's fundamentally based around the concept of paragraphs. As such, a reasonable approach might be to split the input into paragraphs. There are many kinds of paragraphs, e.g. heading, text, list, blockquote, code. The challenge is thus to identify these paragraphs and in what context they occur. I'll be back with a solution, once I find it's worthy to be shared.
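
    Following that paragraph-first idea, a first pass can split the input on blank lines and classify each block by its leading characters. A deliberately incomplete C++ sketch of that pass (the block categories and rules are illustrative, not a full markdown grammar):

        #include <sstream>
        #include <string>
        #include <utility>
        #include <vector>

        enum class Block { Heading, Blockquote, ListItem, Code, Paragraph };

        // Classify a block by its leading characters (rfind(s, 0) == 0 is "starts with").
        Block classify(const std::string& block) {
            if (block.rfind("#", 0) == 0)    return Block::Heading;
            if (block.rfind(">", 0) == 0)    return Block::Blockquote;
            if (block.rfind("* ", 0) == 0 ||
                block.rfind("- ", 0) == 0)   return Block::ListItem;
            if (block.rfind("    ", 0) == 0) return Block::Code;
            return Block::Paragraph;
        }

        // Split the document into blocks separated by blank lines, then classify each.
        std::vector<std::pair<Block, std::string>> splitBlocks(const std::string& text) {
            std::vector<std::pair<Block, std::string>> blocks;
            std::istringstream in(text);
            std::string line, current;
            while (std::getline(in, line)) {
                if (line.empty()) {
                    if (!current.empty()) blocks.emplace_back(classify(current), current);
                    current.clear();
                } else {
                    if (!current.empty()) current += '\n';
                    current += line;
                }
            }
            if (!current.empty()) blocks.emplace_back(classify(current), current);
            return blocks;
        }

    Inline spans (emphasis, links, code spans) would then be handled in a second pass over each block, which keeps the context-dependent parts contained.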

    Read the article

  • Can I perform a search on a mail server in Java?

    - by twofivesevenzero
    I am trying to perform a search of my gmail using Java. With JavaMail I can do a message-by-message search like so:

        Properties props = System.getProperties();
        props.setProperty("mail.store.protocol", "imaps");
        Session session = Session.getDefaultInstance(props, null);
        Store store = session.getStore("imaps");
        store.connect("imap.gmail.com", "myUsername", "myPassword");
        Folder inbox = store.getFolder("Inbox");
        inbox.open(Folder.READ_ONLY);

        SearchTerm term = new SearchTerm() {
            @Override
            public boolean match(Message mess) {
                try {
                    return mess.getContent().toString().toLowerCase().indexOf("boston") != -1;
                } catch (IOException ex) {
                    Logger.getLogger(JavaMailTest.class.getName()).log(Level.SEVERE, null, ex);
                } catch (MessagingException ex) {
                    Logger.getLogger(JavaMailTest.class.getName()).log(Level.SEVERE, null, ex);
                }
                return false;
            }
        };

        Message[] searchResults = inbox.search(term);
        for (Message m : searchResults)
            System.out.println("MATCHED: " + m.getFrom()[0]);

    But this requires downloading each message. Of course I can cache all the results, but this becomes a storage concern with large gmail boxes and also would be very slow (I can only imagine how long it would take to search through gigabytes of text...). So my question is, is there a way of searching through mail on the server, a la gmail's search field? Maybe through Microsoft Exchange? Hours of Googling has turned up nothing.

    Read the article

  • How to multi-thread this?

    - by WilliamKF
    I wish to have two threads. The first, thread1, occasionally calls the following pseudo function:

        void waitForThread2()
        {
            if (thread2 is not idle) {
                return;
            }
            notifyThread2IamReady();
            while (thread2IsExclusive) {
            }
        }

    The second, thread2, is forever in the following pseudo loop:

        for (;;) {
            Notify thread1 I am idle.
            while (!thread1IsReady()) {
            }
            Notify thread1 I am exclusive.
            Do some work while thread1 is blocked.
            Notify thread1 I am busy.
            Do some work in parallel with thread1.
        }

    What is the best way to write this such that both thread1 and thread2 are kept as busy as possible on a machine with multiple cores? I would like to avoid long delays between notification in one thread and detection by the other. I tried using pthread condition variables but found the delay between thread2 doing 'Notify thread1 I am busy' and the loop in waitForThread2() on thread2IsExclusive() can be up to almost one second. I then tried using a volatile sig_atomic_t shared variable to control the same, but something is going wrong, so I must not be doing it correctly.
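
    For reference, one way this handshake can be expressed with a single shared phase variable plus a condition variable, checking the predicate under the mutex and notifying after every phase change (long wake-up delays usually come from signalling before the state change is visible, or from spinning on a plain variable). A minimal C++11 sketch - the phase names are invented:

        #include <condition_variable>
        #include <mutex>

        enum class Phase { Thread2Idle, Thread1Ready, Thread2Exclusive, Parallel };

        std::mutex m;
        std::condition_variable cv;
        Phase phase = Phase::Thread2Idle;

        void setPhase(Phase p) {
            { std::lock_guard<std::mutex> lk(m); phase = p; }
            cv.notify_all();                      // wake whichever thread is waiting
        }

        void waitForPhase(Phase p) {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [p] { return phase == p; });
        }

        // Thread 1 (called occasionally):
        void waitForThread2() {
            {
                std::lock_guard<std::mutex> lk(m);
                if (phase != Phase::Thread2Idle) return;   // thread 2 busy, skip
            }
            setPhase(Phase::Thread1Ready);
            waitForPhase(Phase::Parallel);        // blocks only while thread 2 is exclusive
        }

        // Thread 2 main loop:
        void thread2Loop() {
            for (;;) {
                setPhase(Phase::Thread2Idle);
                waitForPhase(Phase::Thread1Ready);
                setPhase(Phase::Thread2Exclusive);
                // ... work that needs thread 1 blocked ...
                setPhase(Phase::Parallel);
                // ... work that can overlap with thread 1 ...
            }
        }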

    Read the article

  • Attachment_fu file saving problem

    - by Anand
    The attachment_fu plugin is kind of old, but I have to modify an old app and I can't use another plugin like paperclip etc. So here's the code without further ado.

    Submissions table structure:

        | content_type | varchar(255) | YES | | NULL
        | filename     | varchar(255) | YES | | NULL

    app/models/submission.rb:

        has_attachment :storage => :file_system,
                       :path_prefix => 'public/submissions',
                       :max_size => 2.megabytes,
                       :content_type => ['application/pdf', 'application/msword', 'text/plain']

    app/models/user.rb:

        has_one :submission, :dependent => :destroy

    app/views/user/some_action.html.erb:

        <% form_for :user, :url => { :action => "some_action" }, :html => {:multipart => true} do |f| %>
          ....
          <%= file_field_tag "submission[uploaded_data]" %>
        <% end %>

    app/controllers/user_controller.rb:

        @user = User.find_user(session[:user_id])
        @submission = @user.submission
        if request.post?
          @submission.uploaded_data = params[:submission][:uploaded_data]
        end

    When the form is submitted, the database fields "content_type" and "filename" get updated and display the correct values, but the file does not appear in the public/submissions/ directory. I have checked the permissions on the submissions directory. What am I missing? Many thanks.

    Read the article
