Search Results

Search found 1433 results on 58 pages for 'consistent'.


  • Emails sometimes get scrambled

    - by Alex
    Folks, I have a PHP-based site (using the QCubed framework); as part of the site, I have a daemon that sends out several thousand emails a day (no, I'm not a spammer; everything is opt-in :)). Emails are sent through a custom framework component that serves as an SMTP client. I'm using a paid SMTP gateway from DNSExit.com to get the emails actually delivered. The emails are simple HTML messages; they really contain just plain links. My issue is that these links sometimes (not consistently!) get scrambled in transit: tags get mixed up, and some links end up non-functional in the delivered email. It happens on a small percentage of all sent emails, and it is not deterministic (i.e. the exact same source HTML may or may not come out scrambled). Have any of you seen this? Any thoughts on how to troubleshoot it?

    Read the article

  • Can shared memory be read and validated without mutexes?

    - by Bribles
    On Linux I'm using shmget and shmat to set up a shared memory segment that one process will write to and one or more processes will read from. The data being shared is a few megabytes in size and, when updated, is completely rewritten; it's never partially updated. I have my shared memory segment laid out as follows:

        -------------------------
        | t0 | actual data | t1 |
        -------------------------

    where t0 and t1 are copies of the time when the writer began its update (with enough precision that successive updates are guaranteed to have differing times). The writer first writes to t1, then copies in the data, then writes to t0. The reader, on the other hand, reads t0, then the data, then t1. If the reader gets the same value for t0 and t1, it considers the data consistent and valid; if not, it tries again. Does this procedure ensure that if the reader thinks the data is valid then it actually is? Do I need to worry about out-of-order execution (OOE)? If so, would the reader using memcpy to grab the entire shared memory segment overcome the OOE issues on the reader side? (This assumes that memcpy performs its copy linearly, ascending through the address space. Is that assumption valid?)
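
    What is described here is essentially a sequence lock (seqlock). Purely as an illustration, a minimal C++ sketch of that t0/data/t1 protocol using C++11 atomics and fences for the ordering question raised above; the struct layout and sizes are invented, and note that, formally, the concurrent plain memcpy of the payload is a data race in the C++ memory model even though this is how seqlock-style readers are typically written.

        #include <atomic>
        #include <cstdint>
        #include <cstring>

        // Hypothetical layout of the shared segment: | t0 | payload | t1 |
        struct Segment {
            std::atomic<std::uint64_t> t0;
            char data[4 * 1024 * 1024];   // "a few megabytes" of payload
            std::atomic<std::uint64_t> t1;
        };

        // Writer: stamp t1, copy the payload, then stamp t0 with the same value.
        void write_segment(Segment& seg, const char* src, std::size_t len, std::uint64_t stamp) {
            seg.t1.store(stamp, std::memory_order_relaxed);
            std::atomic_thread_fence(std::memory_order_release);  // t1 becomes visible before the payload
            std::memcpy(seg.data, src, len);
            std::atomic_thread_fence(std::memory_order_release);  // payload becomes visible before t0
            seg.t0.store(stamp, std::memory_order_relaxed);
        }

        // Reader: read t0, copy the payload, read t1; the copy is a consistent
        // snapshot only if both stamps match, otherwise retry.
        bool read_segment(const Segment& seg, char* dst, std::size_t len) {
            for (int attempt = 0; attempt < 1000; ++attempt) {
                std::uint64_t before = seg.t0.load(std::memory_order_acquire);
                std::memcpy(dst, seg.data, len);
                std::atomic_thread_fence(std::memory_order_acquire); // payload is read before t1
                std::uint64_t after = seg.t1.load(std::memory_order_relaxed);
                if (before == after)
                    return true;    // no writer touched the segment during the copy
            }
            return false;           // gave up: a writer kept interfering
        }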

    Read the article

  • Should a Trim method generally be in the Data Access Layer or within the Domain Layer?

    - by jpierson
    I'm dealing with a database that contains data with inconsistencies such as leading and trailing white space. In general I see a lot of developers practice defensive coding by trimming almost all strings that come from the database and that may have been entered by a user at some point. In my opinion it is better to do such formatting before the data is persisted, so that it is done only once and the data stays in a consistent and reliable state. Unfortunately that is not the case here, which leads me to the next best solution: a Trim method. If I trim all data as part of my data access layer, then I don't have to concern myself with defensive trimming within the business objects of my domain layer. If I instead put the trimming responsibility in my business objects, such as in the set accessors of my C# properties, I should get the same net result; however, the trim will then operate on every value assigned to my business objects' properties, not just the ones that come from the inconsistent database. As a somewhat philosophical question that may determine the answer, I could ask: "Should the domain layer be responsible for defensive/coercive formatting of data?" Would it make sense to have the set accessor for a PhoneNumber property on a business object accept an unformatted or formatted string and then attempt to format it as required, or should this responsibility be pushed to the presentation and data access layers, leaving the domain layer stricter about the data it will accept? I think this may be the more fundamental question. Update: Below are a few links that I thought I should share on the topic of data cleansing:
    Information service patterns, Part 3: Data cleansing pattern
    LINQ to SQL - Format a string before saving?
    How to trim values using Linq to Sql?
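
    Purely for illustration, a minimal sketch (in C++, with invented names) of the "coerce it in the domain object" option discussed above: the set accessor owns the normalization rule, so the same trim is applied no matter whether the value arrives from the database, the UI, or anywhere else.

        #include <string>

        // Hypothetical domain object; the set accessor owns the normalization rule.
        class Customer {
        public:
            void set_phone_number(const std::string& value) {
                phone_number_ = trim(value);   // applied to every assignment,
            }                                  // not just values read from the DB
            const std::string& phone_number() const { return phone_number_; }

        private:
            static std::string trim(const std::string& s) {
                const char* ws = " \t\r\n";
                const auto first = s.find_first_not_of(ws);
                if (first == std::string::npos) return "";
                const auto last = s.find_last_not_of(ws);
                return s.substr(first, last - first + 1);
            }

            std::string phone_number_;
        };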

    Read the article

  • Analyzing Web Application Speed

    - by Amy
    I'm a bit confused, because the logical/programmer brain in me says that if all things are constant, the speed of a function must be constant. I am working on a PHP web application with jqGrid as a front end for showing the data. I am testing on my personal computer, so network traffic does not apply. I make an HTTP request to a PHP function, it returns the data, and then jqGrid renders it. What has me befuddled is that Firebug sometimes reports this taking 300-600 milliseconds and sometimes 3.68 seconds. I can run the request over and over again with radically different response times. The query is the same. The number of users on the system is the same. No network latency. Same code. I'm not running other applications on the computer while testing. I could understand query caching improving performance on subsequent requests, but the speed is just fluctuating wildly with no rhyme or reason. So, my question is: what else can cause such variability in the response time? How can I determine what's doing it? More importantly, is there any way to make things more consistent?

    Read the article

  • Heterogeneous queries require the ANSI_NULLS and ANSI_WARNINGS options

    - by Dezigo
    I wrote a trigger:

        USE [TEST]
        GO
        /****** Object: Trigger [dbo].[TR_POSTGRESQL_UPDATE_YC] Script Date: 05/26/2010 08:54:03 ******/
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        ALTER TRIGGER [dbo].[TR_POSTGRESQL_UPDATE_YC]
        ON [dbo].[BCT_CNTR_EVENTS]
        FOR INSERT
        AS
        BEGIN
            DECLARE @MOVE_TIME varchar(14);
            DECLARE @MOVE_TIME_FORMATED varchar(20);
            DECLARE @RELEASE_NOTE varchar(32);
            DECLARE @CMR_NUMBER varchar(15);
            DECLARE @MOVE_TYPE varchar(2);

            SELECT @MOVE_TIME = inserted.move_time
                  ,@MOVE_TYPE = inserted.move_type
                  ,@RELEASE_NOTE = inserted.release_note
                  ,@CMR_NUMBER = inserted.cmr_number
            FROM inserted

            IF (@MOVE_TYPE = 'YC')
            BEGIN
                SET @MOVE_TIME_FORMATED = SUBSTRING(@MOVE_TIME,1,4) + '-' + SUBSTRING(@MOVE_TIME,5,2) + '-' + SUBSTRING(@MOVE_TIME,7,2) + ' 00:00:00'
                --UPDATE OpenQuery(POSTGRESQL_SERV,'SELECT visit_cmr,visit_timestamp,visit_pin FROM VISIT')
                --   SET visit_cmr = @RELEASE_NOTE
                -- WHERE visit_timestamp = @MOVE_TIME_FORMATED
                --   AND visit_pin = right(@CMR_NUMBER,5)
                --   AND visit_cmr IS NULL
            END

            SET NOCOUNT ON;
        END

    When I insert a row, I get the error: "Heterogeneous queries require the ANSI_NULLS and ANSI_WARNINGS options to be set for the connection. This ensures consistent query semantics. Enable these options and then reissue your query." I did then, of course, SET ANSI_WARNINGS ON, but it does not work for me. (The trigger goes against a linked PostgreSQL server.) I have restarted the server; it still does not work.

    Read the article

  • Visual Studio crashes consistently on web-related projects

    - by Traveling Tech Guy
    Hi, I have a brand new VS2010 installed on a Win2008R2 machine. I started getting this error when debugging a WCF service project: "Attempted to read or write protected memory. This is often an indication that other memory is corrupt." When I started developing a web site a week later, this became consistent: I can't debug it. The stack dump reads:

        at Microsoft.VisualStudio.WebHost.Host.ProcessRequest(Connection conn)
        at Microsoft.VisualStudio.WebHost.Server.OnSocketAccept(Object acceptedSocket)
        at System.Threading.QueueUserWorkItemCallback.WaitCallback_Context(Object state)
        at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean ignoreSyncCtx)
        at System.Threading.QueueUserWorkItemCallback.System.Threading.IThreadPoolWorkItem.ExecuteWorkItem()
        at System.Threading.ThreadPoolWorkQueue.Dispatch()
        at System.Threading._ThreadPoolWaitCallback.PerformWaitCallback()

    I tried searching online, and some recommend turning off "Suppress JIT Optimizations" in the Debugging options; this does not seem to make a difference. Clearly the problem is with the built-in web server. But am I doing something wrong? Is there something I can do? Or is this a known bug? Thanks for your time, Guy. Update 12/31: Today I tried using CassiniDev as a replacement for the original VS2010 web server, with the exact same result. My suspicion is that there's some internal conflict between VS2010, Windows Server 2008 R2, and maybe the fact that it's a 64-bit OS. I switched to using IIS as my debug server, and that seems to work, with some annoying side effects. My conclusion: do not use a 64-bit server system as your dev machine; develop on 32-bit, deploy to 64-bit. Side conclusion: there are some scenarios Microsoft's QA doesn't test.

    Read the article

  • RESTful copy/move operations?

    - by ladenedge
    I am trying to design a RESTful filesystem-like service, and copy/move operations are causing me some trouble. First of all, uploading a new file is done using a PUT to the file's ultimate URL:

        PUT /folders/42/contents/<name>

    The question is, what if the new file already resides on the system under a different URL?

    Copy/move Idea 1: PUTs with custom headers. This is similar to S3's copy. A PUT that looks the same as the upload, but with a custom header:

        PUT /folders/42/contents/<name>
        X-custom-source: /files/5

    This is nice because it's easy to change the file's name at copy/move time. However, S3 doesn't offer a move operation, perhaps because a move using this scheme won't be idempotent.

    Copy/move Idea 2: POST to parent folder. This is similar to the Google Docs copy. A POST to the destination folder with XML content describing the source file:

        POST /folders/42/contents
        ...
        <source>/files/5</source>
        <newName>foo</newName>

    I might be able to POST to the file's new URL to change its name..? Otherwise I'm stuck with specifying a new name in the XML content, which amplifies the RPCness of this idea. It's also not as consistent with the upload operation as idea 1. Ultimately I'm looking for something that's easy to use and understand, so in addition to criticism of the above, new ideas are certainly welcome!

    Read the article

  • WinForms - DateTimePicker default month selection behavior for Server 2003 vs Server 2008?

    - by Mike Loux
    Good Afternoon! Has anybody else noticed a change in the default behavior of the "next" and "previous" month arrows in the standard WinForms DateTimePicker control? I have users running on both Windows Server 2003 and Windows Server 2008 R2, and they are reporting that on 2008 (and Vista/Win7), clicking the right or left arrows on the drop-down Calendar now selects the first day of the month rather than retaining the same day like 2003 (and XP) does. I have checked this out (I have a Win7 machine) and I have confirmed this behavior. I would prefer that the behavior remain consistent whenever possible. Does anybody know what causes this and if there is a way to get around this? Is there a way to trap the arrow-click event and force the resulting date to retain the original day rather than be reset to the first of the month? I thought about seeing if there was a way to hit-test the control on a MouseUp event and determine if the arrow buttons were clicked, and then override the month value being set, but I'm not sure if that is even possible. Can anybody provide some wisdom or insight? Thanks!

    Read the article

  • How can I have sub-elements of a complex/mixed type with unrestricted order and count?

    - by mbmcavoy
    I am working with XML where some elements will contain text with additional markup. This is similar to this example at W3Schools. However, I need the markup tags to be able to appear in any order and possibly more than once. To modify their example for illustration:

        <letter>
          Dear Mr.<name>John Smith</name>.
          Your order <orderid>1032</orderid>
          will be shipped on <shipdate>2001-07-13</shipdate>.
          Thank you,
          <name>Bob Adams</name>
        </letter>

    None of the options presented by W3Schools (on the page following the linked example) allow this XML due to the second <name> element. Their explanation of the "indicators" and my testing are consistent:
    <xs:sequence> - violates the element order.
    <xs:choice> - more than one kind of element is used.
    <xs:all> - maxOccurs is restricted to "1".
    This seems like it should be basic; after all, XHTML allows such things. How do I define my schema to allow this?

    Read the article

  • Django repeating vars/cache issue?

    - by Mark
    I'm trying to build a better/more powerful form class for Django. It's working well, except for these sub-forms. Actually, it works perfectly right after I restart Apache, but after I refresh the page a few times, my HTML output starts to look like this:

        <input class="text" type="text" id="pickup_addr-pickup_addr-pickup_addr-id-pickup_addr-venue" value="" name="pickup_addr-pickup_addr-pickup_addr-pickup_addr-venue" />

    The pickup_addr- part starts repeating many times. I was looking for loops around the prefix code that might have caused this, but the output isn't even consistent when I refresh the page, so I think something is getting cached somewhere, though I can't imagine how that's possible. The prefix var should be reset when the class is initialized, no? Unless it's somehow not initializing something?

        class Form(object):
            count = 0

            def __init__(self, data={}, prefix='', action='', id=None, multiple=False):
                self.fields = {}
                self.subforms = {}
                self.data = {}
                self.action = action
                self.id = fnn(id, 'form%d' % Form.count)
                self.errors = []
                self.valid = True
                if not empty(prefix) and prefix[-1:] not in ('-', '_'):
                    prefix += '-'
                for name, field in inspect.getmembers(self, lambda m: isinstance(m, Field)):
                    if name[:2] == '__':
                        continue
                    field_name = fnn(field.name, name)
                    field.label = fnn(field.label, humanize(field_name))
                    field.name = field.widget.name = prefix + field_name + ife(multiple, '[]')
                    field.id = field.auto_id = field.widget.id = ife(field.id==None, 'id-') + prefix + fnn(field.id, field_name) + ife(multiple, Form.count)
                    field.errors = []
                    val = fnn(field.widget.get_value(data), field.default)
                    if isinstance(val, basestring):
                        try:
                            val = field.coerce(field.format(val))
                        except Exception, err:
                            self.valid = False
                            field.errors.append(escape_html(err))
                    field.val = self.data[name] = field.widget.val = val
                    for rule in field.rules:
                        rule.fields = self.fields
                        rule.val = field.val
                        rule.name = field.name
                    self.fields[name] = field
                for name, form in inspect.getmembers(self, lambda m: ispropersubclass(m, Form)):
                    if name[:2] == '__':
                        continue
                    self.subforms[name] = self.__dict__[name] = form(data=data, prefix='%s%s-' % (prefix, name))
                Form.count += 1

    Let me know if you need more code... I know it's a lot, but I just can't figure out what's causing this!

    Read the article

  • Can I specify a default value?

    - by atch
    Why is it that for user-defined types, when creating an array of objects, every element of the array is initialized with the default ctor, but when I create an array of a built-in type this isn't the case? And a second question: is it possible to specify a default value to be used during initialization? Something like this (not valid):

        char* p = new char[size]('\0');

    And another question on this topic while I'm on arrays: given that every element of an array of a user-defined type will be initialized with the default ctor, firstly, why? If arrays of built-in types do not initialize their elements with their defaults, why do they do it for UDTs? And secondly: is there a way to switch it off/avoid/circumvent it somehow? It seems like a bit of a waste if I, for example, create an array of size 10000, the default ctor is invoked 10000 times, and I will (later on) overwrite those values anyway. I think the behaviour should be consistent: either every type of array gets initialized or none does. And I think the behaviour for built-in arrays is the more appropriate one.
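
    For reference, a small sketch of what standard C++ does allow here: an empty pair of parentheses on the new-expression value-initializes the elements (zero for built-in types), while omitting it leaves built-in elements uninitialized; a per-element default other than zero needs an explicit fill or std::vector.

        #include <algorithm>
        #include <cstddef>
        #include <vector>

        int main() {
            const std::size_t size = 10000;

            char* a = new char[size];        // elements are left uninitialized (indeterminate)
            char* b = new char[size]();      // value-initialized: every element is '\0'

            // A default other than zero: fill explicitly, or let std::vector do it.
            char* c = new char[size];
            std::fill(c, c + size, 'x');

            std::vector<char> d(size, 'x');  // size elements, each initialized to 'x'

            delete[] a;
            delete[] b;
            delete[] c;
            return 0;
        }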

    Read the article

  • Empty Postbacks on ASP.NET pages

    - by AaronLS
    We are having a problem that seems to occur only when accessing our websites from internal intranet machines. When logged into the domain and accessing our websites, postbacks are not working: the page behaves as if it were refreshed and nothing was changed. When logging the GETs and POSTs with an HTTP analyzer, the POST is completely empty and the ContentLength is 0. It is also very sporadic, but seems to be happening fairly often. In the failing case, we could see an extra item in the header of the POST: "Authorization", whose value was the word "Negotiate" followed by a space and then a bunch of characters with two equals signs at the end, which looked like some kind of base64-encoded value. In a case where it succeeded, this Authorization item was not in the header, but I have not logged enough successful cases to know whether that is consistent. We have seen this occur only with IE8 so far, and when it occurs it is sporadic: I can close and reopen the browser and it will begin working sometimes, and other times it is still broken. What might be causing the postback to be empty? It means the viewstate is not sent to the server, which makes the page basically broken. It seems to certainly be a client-side issue, but I'm not sure whether it's aggravated by some server settings. Thanks in advance.

    Read the article

  • MATLAB setting matrix values in an array

    - by user324994
    I'm trying to write some code to calculate a cumulative distribution function in MATLAB. When I try to actually put my results into an array it yells at me.

        tempnum = ordered1(1);
        k = 2;
        while (k < 538)
            count = 1;
            while (ordered1(k) == tempnum)
                count = count + 1;
                k = k + 1;
            end
            if (ordered1(k) ~= tempnum)
                output = [output; [(count/537), tempnum]];
                k = k + 1;
                tempnum = ordered1(k);
            end
        end

    The errors I'm getting look like this:

        ??? Error using ==> vertcat
        CAT arguments dimensions are not consistent.

        Error in ==> lab8 at 1164
        output = [output;[(count/537),tempnum]];

    The line to add to the output matrix was given to me by my TA. He didn't teach us much syntax throughout the year, so I'm not really sure what I'm doing wrong. Any help is greatly appreciated.

    Read the article

  • What's the recommended implementation for hashing OLE Variants?

    - by Barry Kelly
    OLE Variants, as used by older versions of Visual Basic and pervasively in COM Automation, can store lots of different types: basic types like integers and floats, more complicated types like strings and arrays, and all the way up to IDispatch implementations and pointers in the form of ByRef variants. Variants are also weakly typed: they convert the value to another type without warning, depending on which operator you apply and what the current types of the values passed to the operator are. For example, comparing two variants, one containing the integer 1 and another containing the string "1", for equality will return True. So assuming that I'm working with variants at the underlying data level (e.g. VARIANT in C++ or TVarData in Delphi, i.e. the big union of different possible values), how should I hash variants consistently so that they obey the right rules? Rules:
    - Variants that hash unequally should compare as unequal, both in sorting and direct equality.
    - Variants that compare as equal for both sorting and direct equality should hash as equal.
    - It's OK if I have to use different sorting and direct comparison rules in order to make the hashing fit.
    The way I'm currently working is that I normalize the variants to strings (if they fit) and treat them as strings; otherwise I work with the variant data as if it were an opaque blob, hashing and comparing its raw bytes. That has some limitations, of course: numbers 1..10 sort as [1, 10, 2, ... 9] etc. This is mildly annoying, but it is consistent and it is very little work. However, I do wonder if there is an accepted practice for this problem.
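
    To illustrate the "hash must agree with the weak-typed comparison" requirement, here is a sketch that uses a simplified tagged value instead of a real VARIANT/TVarData (the type set and the coercion rule are deliberately reduced): values are first normalized to a canonical form that mirrors the comparison rules, and only that canonical form is hashed, so 1 and "1" hash identically.

        #include <cstddef>
        #include <cstdint>
        #include <functional>
        #include <string>
        #include <variant>

        // Simplified stand-in for a VARIANT: just three of the many possible types.
        using Value = std::variant<std::int64_t, double, std::string>;

        // Canonical form used for both comparison and hashing. Real code would
        // mirror whatever coercion rules the equality/sorting operators apply.
        struct Canonical {
            bool is_number;
            double number;
            std::string text;
        };

        static Canonical canonicalize(const Value& v) {
            if (const auto* i = std::get_if<std::int64_t>(&v))
                return {true, static_cast<double>(*i), {}};
            if (const auto* d = std::get_if<double>(&v))
                return {true, *d, {}};
            const std::string& s = std::get<std::string>(v);
            try {                                    // numeric strings coerce to numbers,
                std::size_t pos = 0;                 // so "1" ends up equal to 1
                double parsed = std::stod(s, &pos);
                if (pos == s.size()) return {true, parsed, {}};
            } catch (...) {}
            return {false, 0.0, s};
        }

        // Equality and hashing both go through the canonical form, so values that
        // compare equal are guaranteed to hash equally.
        static bool variant_equal(const Value& a, const Value& b) {
            const Canonical ca = canonicalize(a), cb = canonicalize(b);
            if (ca.is_number != cb.is_number) return false;
            return ca.is_number ? ca.number == cb.number : ca.text == cb.text;
        }

        static std::size_t variant_hash(const Value& v) {
            const Canonical c = canonicalize(v);
            return c.is_number ? std::hash<double>{}(c.number)
                               : std::hash<std::string>{}(c.text);
        }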

    Read the article

  • Inconsistent table width when hiding/showing a set of columns

    - by Salman A. Kagzi
    I have an HTML table of around 40+ columns. To make this table fit on the screen and present the data in a readable format, we have sections in this table: some columns are always visible, and the remainder are made visible when a specific radio button (describing a section) is selected. Each radio button is associated with a different number of columns. We show/hide a column by setting/removing the "display:none" style on all the cells under that column. This all works just fine. Now the real problem is with the width of the columns in this table. I can't use fixed pixel widths. I have tried percentage settings, giving 50% to the always-visible part, with the remaining 50% divided among the columns of a section. But I am unable to get consistent behavior, i.e. the same table column sizes across IE and FF: some columns are just right while some are really huge. How can I get the table to give consistent column widths across browsers?

    Read the article

  • What function does .NET NPV() use? Doesn't match manual calculations

    - by Matthew PK
    I am using the NPV() function in VB.NET to get the NPV for a set of cash flows. However, the result of NPV() is not consistent with my results when performing the calculation manually (nor with the Investopedia NPV calculator, which matches my manual results). My manual results and the NPV() results are close, within 5%, but not the same. Manually, I use the NPV formula:

        NPV = C0 + C1/(1+r)^1 + C2/(1+r)^2 + C3/(1+r)^3 + .... + Cn/(1+r)^n

    The manual result is stored in RunningTotal, with rate r = 0.04 and period n = 10. Here is my relevant code (EDIT: do I have an off-by-one bug (OBOB) somewhere?):

        YearCashOutFlow = CDbl(TxtAnnualCashOut.Text)
        YearCashInFlow = CDbl(TxtTotalCostSave.Text)
        YearCount = 1
        PAmount = -1 * (CDbl(TxtPartsCost.Text) + CDbl(TxtInstallCost.Text))
        RunningTotal = PAmount
        YearNPValue = PAmount
        AnnualRateIncrease = CDbl(TxtUtilRateInc.Text)

        While AnnualRateIncrease > 1
            AnnualRateIncrease = AnnualRateIncrease / 100
        End While
        AnnualRateIncrease = 1 + AnnualRateIncrease

        ' ZERO YEAR ENTRIES
        ListBoxNPV.Items.Add(Format(PAmount, "currency"))
        ListBoxCostSave.Items.Add("$0.00")
        ListBoxIRR.Items.Add("-100")
        ListBoxNPVCum.Items.Add(Format(PAmount, "currency"))
        CashFlows(0) = PAmount
        ''''
        Do While YearCount <= CInt(TxtLifeOfProject.Text)
            ReDim Preserve CashFlows(YearCount)
            CashFlows(YearCount) = Math.Round(YearCashInFlow - YearCashOutFlow, 2)
            If CashFlows(YearCount) > 0 Then OnePos = True
            YearNPValue = CashFlows(YearCount) / (1 + DiscountRate) ^ YearCount
            RunningTotal = RunningTotal + YearNPValue
            ListBoxNPVCum.Items.Add(Format(Math.Round(RunningTotal, 2), "currency"))
            ListBoxCostSave.Items.Add(Format(YearCashInFlow, "currency"))
            If OnePos Then
                ListBoxIRR.Items.Add((IRR(CashFlows, 0.1)).ToString)
                ListBoxNPV.Items.Add(Format(NPV(DiscountRate, CashFlows), "currency"))
            Else
                ListBoxIRR.Items.Add("-100")
                ListBoxNPV.Items.Add(Format(RunningTotal, "currency"))
            End If
            YearCount = YearCount + 1
            YearCashInFlow = AnnualRateIncrease * YearCashInFlow
        Loop
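
    As a point of comparison, a small sketch (in C++ rather than VB, purely for illustration, with made-up cash flows) of the manual formula quoted above next to the end-of-period convention used by spreadsheet-style NPV functions (Excel's NPV and, as far as I recall, the VB Financial.NPV as well), which discount every element of the array, including the first, by at least one period; passing the time-zero outlay in the array under that convention shifts the whole result by a factor of 1/(1+r), roughly 4% at r = 0.04, which is in the ballpark of the discrepancy described.

        #include <cmath>
        #include <cstddef>
        #include <cstdio>
        #include <vector>

        // Manual formula from the question: NPV = C0 + C1/(1+r)^1 + ... + Cn/(1+r)^n
        double npv_manual(double r, const std::vector<double>& c) {
            double total = 0.0;
            for (std::size_t t = 0; t < c.size(); ++t)
                total += c[t] / std::pow(1.0 + r, static_cast<double>(t));
            return total;
        }

        // Spreadsheet-style convention: the first array element is treated as falling
        // at the END of period 1, so every element is discounted at least once.
        double npv_end_of_period(double r, const std::vector<double>& c) {
            double total = 0.0;
            for (std::size_t t = 0; t < c.size(); ++t)
                total += c[t] / std::pow(1.0 + r, static_cast<double>(t + 1));
            return total;
        }

        int main() {
            const double r = 0.04;
            std::vector<double> c{-10000.0};            // hypothetical time-zero outlay
            for (int year = 1; year <= 10; ++year)
                c.push_back(1500.0);                    // hypothetical annual inflows

            std::printf("manual:        %.2f\n", npv_manual(r, c));
            std::printf("end-of-period: %.2f\n", npv_end_of_period(r, c));
            std::printf("ratio:         %.4f\n",        // exactly 1/(1+r) = 0.9615...
                        npv_end_of_period(r, c) / npv_manual(r, c));
            return 0;
        }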

    Read the article

  • Problems with overriding OnPaint and grabbing mouse events in C# UserControl containing other controls

    - by MoreThanChaos
    I've made a control which contains a few other controls, like a PictureBox, a Label, and a TextBox. But I'm having two problems.
    1. I tried to paint some things on top of my control, but when I override OnPaint, the things I try to draw end up underneath the controls my control contains. The areas I would like to draw on intersect with the inner controls in ways that are not easy to predict; the drawing needs to cover both the inner controls and the base of the control. Is there a simple way to draw something on top of all the contents of my control? Setting ForeColor to Transparent isn't a solution that would help me much.
    2. I have a problem grabbing mouse click events when I place my control on a form and add click event handling. It only works when I click on an area not occupied by the inner controls. I would like the whole control to react to clicks and other actions as if it were one consistent control. How can I redirect/handle these clicks to make them work the way I want? Thanks in advance for any tips.

    Read the article

  • IE 7 ActiveX object (or XMLHttpRequest?) open method using POST takes 20-30 seconds to return

    - by Toddeman
    I have a problem that only shows itself in IE7. It's a simple AJAX call. I got my object (accounting for the browser), so in IE7 I SHOULD have an ActiveXObject. When I call open with POST, it takes 20-30 seconds to return. I am using a TON of GET calls to populate information and all of these work (finally, after some bug fixing), but I am NOT a web developer, so, much like the other bugs I had to fix, I figured I was just missing another IE anomaly. This is not a consistent bug either, which makes it harder for me to find. Most times the POST behaves like it does in Firefox or Chrome, but maybe 1 out of 4 or 5 times it will take 20-30 seconds to return. It DOES return correctly when it returns; it just takes a long time. Am I missing something simple? Or is there a smarter way for me to find out exactly what is going on (like an equivalent of the Firebug 'Net' tab for IE on Windows)?

    Read the article

  • Is there a generic way of dealing with varying connection strings in C#?

    - by James Wiseman
    I have an application that needs to connect to a SQL database and execute a SQL Agent job. The connection string I am trying to use is stored in the registry, and is easily enough pulled out. This application is to be run on multiple computers, and I cannot guarantee the format of this connection string being consistent across them. Two that I have pulled out, for example, are:

        Data Source=Server1;Initial Catalog=DB1;Integrated Security=SSPI;
        Data Source=Server2;Initial Catalog=DB1;Provider=SQLNCLI.1;Integrated Security=SSPI;Auto Translate=False;

    I can use an object of type System.Data.SqlClient.SqlConnection to connect to the database with the first connection string; however, I get the following error when I pass the second to it: "keyword not supported: 'provider'". Similarly, I can use an object of type System.Data.OleDb.OleDbConnection to connect to the database with the second connection string; however, I get the following error when I pass the first to it: "An OLEDB Provider was not specified in the ConnectionString". I can solve this by scanning the string for 'Provider' and connecting conditionally, but I can't help feeling that there is a better way of doing this and handling the connection strings in a more generic fashion. Does anyone have any suggestions?

    Read the article

  • Django: How can I identify the calling view from a template?

    - by bryan
    Short version: Is there a simple, built-in way to identify the calling view in a Django template, without passing extra context variables? Long (original) version: One of my Django apps has several different views, each with its own named URL pattern, that all render the same template. There's a very small amount of template code that needs to change depending on the called view, too small to be worth the overhead of setting up separate templates for each view, so ideally I need to find a way to identify the calling view in the template. I've tried setting up the views to pass in extra context variables (e.g. "view_name") to identify the calling view, and I've also tried using {% ifequal request.path "/some/path/" %} comparisons, but neither of these solutions seems particularly elegant. Is there a better way to identify the calling view from the template? Is there a way to access to the view's name, or the name of the URL pattern? Update 1: Regarding the comment that this is simply a case of me misunderstanding MVC, I understand MVC, but Django's not really an MVC framework. I believe the way my app is set up is consistent with Django's take on MVC: the views describe which data is presented, and the templates describe how the data is presented. It just happens that I have a number of views that prepare different data, but that all use the same template because the data is presented the same way for all the views. I'm just looking for a simple way to identify the calling view from the template, if this exists. Update 2: Thanks for all the answers. I think the question is being overthought -- as mentioned in my original question, I've already considered and tried all of the suggested solutions -- so I've distilled it down to a "short version" now at the top of the question. And right now it seems that if someone were to simply post "No", it'd be the most correct answer :) Update 3: Carl Meyer posted "No" :) Thanks again, everyone.

    Read the article

  • What can I use to journal writes to the file system

    - by Dmitry
    Hello all, I need to track all writes to files in order to keep a synchronized version of the files in another place (a server or just another directory; it doesn't matter which). Assume that:
    - all files are located in the same directory
    - it's fine to create some extra system files (e.g. SomeFileName.Ext~temp-data)
    - no one has concurrent access to the synced directory; nobody touches our meta-files or changes the real files before we apply the postponed writes (like commits)
    - there is no need to recover "local" changes after a crash; the system can simply be rolled back to the "server" state by copying from it
    - it is important that it be transparent to use (the programmer just calls the ordinary fopen(), read(), write())
    It must be guaranteed that the copy of the files the "server" holds is consistent, that is, the complete set of files as it existed at some moment in time. They may be considerably outdated, but they must be a fair snapshot of all files at some point. As I understand it, I should overload the writing logic to collect data in order to send changes to the "server", for example by writing to a temporary File~tmp (as sketched below), and I also have to overload reads so the program can read the actual data of the file. It would be great if you could suggest an existing library (Java or C++, it doesn't matter) or solution (VCS customization?), or give hints on how I should write it myself. Edit: after some reading I have more precise requirements: I need a COW (copy-on-write) wrapper for fopen(), fwrite(), ..., or an interceptor (hook) for WriteFile() and other FS API calls. A log-structured file system in userspace would be an alternative too.
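
    A minimal C++ sketch of the "write to File~tmp, commit later" idea mentioned above, with invented names and no error handling: writes are redirected to a temp file next to the original, reads prefer the pending temp copy when one exists, and a commit step atomically renames the temp file over the real one (which would also be the natural point to queue the change for the "server" copy).

        #include <cstdio>
        #include <fstream>
        #include <string>

        namespace cow {

        // Writes are redirected to "<path>~tmp"; the real file is untouched until commit().
        class Writer {
        public:
            explicit Writer(const std::string& path)
                : path_(path), tmp_path_(path + "~tmp"), out_(tmp_path_, std::ios::binary) {}

            void write(const char* data, std::streamsize len) { out_.write(data, len); }

            // Publish the new version: close the temp file, then atomically replace
            // the original. This is also where a real implementation would queue the
            // file for transfer to the "server" copy.
            bool commit() {
                out_.close();
                return std::rename(tmp_path_.c_str(), path_.c_str()) == 0;
            }

        private:
            std::string path_, tmp_path_;
            std::ofstream out_;
        };

        // Readers see pending local changes by preferring the temp copy when present.
        inline std::ifstream open_for_read(const std::string& path) {
            std::ifstream pending(path + "~tmp", std::ios::binary);
            if (pending.is_open()) return pending;
            return std::ifstream(path, std::ios::binary);
        }

        }  // namespace cow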

    Read the article

  • Random-access archive for Unix use

    - by tylerl
    I'm looking for a good format for archiving entire file systems of old Linux computers.

    TAR.GZ: The tar.gz format is great for archiving files with UNIX-style attributes, but since the compression is applied across the entire archive, the design precludes random access. If you want to access a file at the end of the archive, you have to start at the beginning and decompress the whole file (which could be several hundred GB) up to the point where you find the entry you're looking for.

    ZIP: Conversely, one selling point of the ZIP format is that it stores an index of the archive: filenames are stored separately, with pointers to the location within the archive where the data is found. If I want to extract a file at the end, I look up the position of that file by name, seek to the location, and extract the data. However, it doesn't store file attributes such as ownership, permissions, symbolic links, etc.

    Other options? I've tried using squashfs, but it's not really designed for this purpose. The file format is not consistent between versions, and building the archive takes a lot of time and space. What other options might suit this purpose better?

    Read the article

  • Problems using (building?) native gem extensions on OS X

    - by goodmike
    I am having trouble with some of my rubygems, in particular those that use native extensions. I am on a MacBook Pro with Snow Leopard. I have Xcode 3.2.1 installed, with gcc 4.2.1, and Ruby 1.8.6 (because I'm lazy and a scaredy cat and don't want to upgrade yet). Ruby is running in 32-bit mode; I built this Ruby from scratch when my MBP ran OS X 10.4. When I require one of the affected gems in irb, I get a LoadError for the gem extension's bundle file. For example, here's Nokogiri dissing me:

        > require 'rubygems'
        => true
        > require 'nokogiri'
        LoadError: Failed to load /usr/local/lib/ruby/gems/1.8/gems/nokogiri-1.4.1/lib/nokogiri/nokogiri.bundle

    This is also happening with the Postgres pg and MongoDB mongo gems. My first thought was that the extensions must not be building right, but gem install wasn't throwing any errors, so I reinstalled with the verbose flag, hoping to see some helpful warnings. I've put the output in a Pastie, and the only warning I see is a consistent one about "passing argument n of 'foo' with different width due to prototype." I suspect that this might be an issue from upgrading to Snow Leopard, but I'm a little surprised to experience it now, since I've updated my Xcode. Could it stem from running Ruby 1.8.6? I'm embarrassed that I don't know quite enough about my Mac and OS X to know where to look next, so any guidance, even just a pointer to some document I couldn't find via Google, would be most welcome. Michael

    Read the article

  • PHP script stops running arbitrarily with no errors.

    - by RobHardgood
    I was working on a fairly long script that seemed to stop running after about 20 minutes. After days of trying to figure out why, I decided to make a very simple script to see how long it would run without any complex code to confuse me. I found that the same thing was happening with a simple infinite loop: at some point between 15 and 25 minutes of running, it simply stopped. There is no error output at all; the bottom of the browser simply says "Done" and nothing else is processed. I've been over every possible thing I could think of: set_time_limit, session.gc_maxlifetime in the php.ini, as well as memory_limit and max_execution_time. The point at which the script stops is never consistent. Sometimes it will stop at 15 minutes, sometimes 22 minutes, sometimes 17... but it's always long enough to be a PITA to test. Please, any help would be greatly appreciated. It is hosted on a 1and1 server. I contacted them and they couldn't help me... but I have a feeling they just didn't know enough.

    Read the article

  • Database abstraction/adapters for ruby

    - by Stiivi
    What are the database abstractions/adapters you are using in Ruby? I am mainly interested in data oriented features, not in those with object mapping (like active record or data mapper). I am currently using Sequel. Are there any other options? I am mostly interested in: simple, clean and non-ambiguous API data selection (obviously), filtering and aggregation raw value selection without field mapping: SELECT col1, col2, col3 = [val1, val2, val3] not hash of { :col1 = val1 ...} API takes into account table schemas 'some_schema.some_table' in a consistent (and working) way; also reflection for this (get schema from table) database reflection: get list of table columns, their database storage types and perhaps adaptor's abstracted types table creation, deletion be able to work with other tables (insert, update) in a loop enumerating selection from another table without requiring to fetch all records from table being enumerated Purpose is to manipulate data with unknown structure at the time of writing code, which is the opposite to object mapping where structure or most of the structure is usually well known. I do not need the object mapping overhead. What are the options, including back-ends for object-mapping libraries?

    Read the article
