Search Results

Search found 2798 results on 112 pages for 'pull'.


  • Server Design Enquiry [closed]

    - by Brandon Gelfand
    I am really new to this and am having trouble with how to store information for a website. I am going to be making a site similar to Dropbox for my company, and there are going to be around 40 TB of data that we need to be able to access on the go, for instance pulling a file up on a phone or on our laptops over the internet. Obviously, using Amazon S3 is not the most cost-effective way of doing this, so how can I build my own server to hold all of the info and communicate with it from a website? Please, I need as much help as possible, like a blueprint or something. Thanks, and sorry for the noobish question, but I can't really find an exact answer to it.


  • Two-way replication

    - by Nidzaaaa
    I have a little problem. Here is my setup:
    - 2 server instances
    - 2 databases
    - 1 table (5 columns)
    From server 1, I created a publication to replicate all columns of the table in the first DB. From server 2, I created a subscription to pull all columns of that table from the server 1 DB. But now I need to publish one column of the same table from server 2 back to server 1, and it has to be in the same DB. I tried the same logic, creating a publication on server 2 and a subscription on server 1, but this error appears: "You have selected the Publisher as a Subscriber and entered a subscription database that is the same as the publishing database. Select another subscription database." I hope someone understands my problem and has an answer for me. Thanks in advance. P.S. Ask for more info if you need it.
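    For the column-filtering half of this, SQL Server supports vertical partitioning of an article. A hedged T-SQL sketch (the publication and object names here are made up, and note that this does not lift the restriction behind the error above - a publishing database cannot also be its own subscription database, so the subscription on server 1 must target a different database):

        -- On server 2: publish only chosen columns of the table (vertical partition).
        EXEC sp_addarticle
            @publication        = N'Pub_Server2',   -- hypothetical publication name
            @article            = N'MyTable',
            @source_object      = N'MyTable',
            @vertical_partition = N'true';          -- start from key columns only

        -- Add the single column you want replicated back (the PK is always included).
        EXEC sp_articlecolumn
            @publication = N'Pub_Server2',
            @article     = N'MyTable',
            @column      = N'Col1',                 -- hypothetical column name
            @operation   = N'add';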


  • Is there a good way to prevent a server from emailing a specific address (we control both servers/apps)?

    - by Bms85smb
    When testing a production app we occasionally need to pull from a live site and perform tests on a development server. There are quite a few email addresses stored in the database that we need to modify every time we restore to the development server. Occasionally someone on my team will miss one and accidentally send an email through the distribution list. The email looks legit because it is coming from a clone, and it can cause quite a situation. We have a protocol we follow every time we clone the live app, and it has helped a lot, but I would feel better if it were impossible for the two servers to communicate. Is there a good way to do this? Can firewall rules block email? Does Plesk have a blacklist?
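    Yes, a host firewall can block outbound mail entirely. A minimal sketch using iptables on a Linux dev box (an assumption on my part - it presumes mail leaves over the standard SMTP ports; adjust if the app relays through something else):

        # Reject outbound SMTP/submission traffic from the dev server.
        iptables -A OUTPUT -p tcp --dport 25  -j REJECT
        iptables -A OUTPUT -p tcp --dport 465 -j REJECT
        iptables -A OUTPUT -p tcp --dport 587 -j REJECT

    This blocks all outgoing mail rather than one address; blocking a single recipient is more reliably done in the MTA or application config.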


  • Playbook is starting to get mail all of a sudden [closed]

    - by 1.21 gigawatts
    After six months, my PlayBook just started pulling in emails from my Gmail account this morning. Why? I didn't change anything on my end. The only difference I can think of (besides something changing on RIM's service end, if that has any part in this) may be that I've been on a secure university connection for a few hours. I've been on this network before, but the PlayBook doesn't seem to stay connected to it very long. Maybe it's because I have a lot of emails in my account? Has anyone else noticed this?


  • Python UnicodeDecodeError with Suds

    - by PylonsN00b
    OK so I have # -*- coding: utf-8 -*- at the top of my script and it worked for being able to pull data from the database that had funny chars(Ñ ,Õ,é,—,–,’,…) in it and store that data into variables...but I have run into other problems, see I pull my data, organize it, and then dump it into a variables like so: title = product[1] Where product[1] is from my database result set Then I load it up for Suds like so: array_of_inventory_item_submit = ca_client_inventory.factory.create('ArrayOfInventoryItemSubmit') for product in products: inventory_item_submit = ca_client_inventory.factory.create('InventoryItemSubmit') inventory_item_list = get_item_list(product) inventory_item_submit = [inventory_item_list] array_of_inventory_item_submit.InventoryItemSubmit.append(inventory_item_submit) #Call that service baby! ca_client_inventory.service.SynchInventoryItemList(accountID, array_of_inventory_item_submit) Where get_item_list sets product[1] to title and (including a whole bunch of other nodes): inventory_item_submit.Title = title So everything runs fine until I call ca_client_inventory.service.SynchInventoryItemList that contains array_of_inventory_item_submit which contains the title w/ the funky char...here is the error: Traceback (most recent call last): File "upload_all_inventory_ebay.py", line 421, in <module> ca_client_inventory.service.SynchInventoryItemList(accountID, array_of_inventory_item_submit) File "build/bdist.macosx-10.6-i386/egg/suds/client.py", line 539, in __call__ File "build/bdist.macosx-10.6-i386/egg/suds/client.py", line 592, in invoke File "build/bdist.macosx-10.6-i386/egg/suds/bindings/binding.py", line 118, in get_message File "build/bdist.macosx-10.6-i386/egg/suds/bindings/document.py", line 63, in bodycontent File "build/bdist.macosx-10.6-i386/egg/suds/bindings/document.py", line 105, in mkparam File "build/bdist.macosx-10.6-i386/egg/suds/bindings/binding.py", line 260, in mkparam File "build/bdist.macosx-10.6-i386/egg/suds/mx/core.py", line 62, in process File "build/bdist.macosx-10.6-i386/egg/suds/mx/core.py", line 75, in append File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 102, in append File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 243, in append File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 182, in append File "build/bdist.macosx-10.6-i386/egg/suds/mx/core.py", line 75, in append File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 102, in append File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 298, in append File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 182, in append File "build/bdist.macosx-10.6-i386/egg/suds/mx/core.py", line 75, in append File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 102, in append File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 298, in append File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 182, in append File "build/bdist.macosx-10.6-i386/egg/suds/mx/core.py", line 75, in append File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 102, in append File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 243, in append File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 182, in append File "build/bdist.macosx-10.6-i386/egg/suds/mx/core.py", line 75, in append File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 102, in append File "build/bdist.macosx-10.6-i386/egg/suds/mx/appender.py", line 198, in append File 
"build/bdist.macosx-10.6-i386/egg/suds/sax/element.py", line 251, in setText File "build/bdist.macosx-10.6-i386/egg/suds/sax/text.py", line 43, in __new__ UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 116: ordinal not in range(128) Now what? My guess is my script can take in these funky chars because I have # -*- coding: utf-8 -*- at the top but Suds does NOT have that at the top of its files. Do I really want to go and change the Suds files...we all know this is the least desired last possible solution...what can I do?

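    The usual fix for this class of Suds error is to decode the byte strings coming out of the database into unicode objects before handing them to Suds, rather than editing the Suds sources. A minimal Python 2 sketch (hedged: it assumes the rows arrive as UTF-8 encoded byte strings, and the helper name is made up):

        # -*- coding: utf-8 -*-

        def to_unicode(value, encoding='utf-8'):
            # Hypothetical helper: Python 2 str (bytes) -> unicode; pass others through.
            if isinstance(value, str):
                return value.decode(encoding)
            return value

        title = to_unicode(product[1])        # e.g. 'Se\xc3\xb1or' -> u'Se\xf1or'
        inventory_item_submit.Title = title

    With real unicode objects in the object graph, the Suds sax layer no longer has to guess an encoding, which is exactly where the 'ascii' codec error above is raised.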

  • Git for beginners: The definitive practical guide

    - by Adam Davis
    Ok, after seeing this post by PJ Hyett, I have decided to skip to the end and go with git. So what I need is a beginner's practical guide to git. "Beginner" being defined as someone who knows how to handle their compiler, understands to some level what a makefile is, and has touched source control without understanding it very well. "Practical" being defined as this person doesn't want to get into great detail regarding what git is doing in the background, and doesn't even care (or know) that it's distributed. Your answers might hint at the possibilities, but try to aim for the beginner that wants to keep a 'main' repository on a 'server' which is backed up and secure, and to treat their local repository as merely a 'client' resource.
    Procedural note: PLEASE pick one and only one of the below topics and answer it clearly and concisely in any given answer. Don't try to jam a bunch of information into one answer. Don't just link to other resources - cut and paste with attribution if copyright allows; otherwise learn it and explain it in your own words (i.e., don't make people leave this page to learn a task). Please comment on, or edit, an already existing answer unless your explanation is very different and you think the community is better served with a different explanation rather than altering the existing one.
    So:
    Installation/Setup
    - How to install git
    - How do you set up git? Try to cover Linux, Windows, Mac; think 'client/server' mindset.
    - Setup GIT Server with Msysgit on Windows
    - How do you create a new project/repository?
    - How do you configure it to ignore files (.obj, .user, etc.) that are not really part of the codebase?
    Working with the code (see the command-line sketch after this list for a taste)
    - How do you get the latest code?
    - How do you check out code?
    - How do you commit changes?
    - How do you see what's uncommitted, or the status of your current codebase?
    - How do you destroy unwanted commits?
    - How do you compare two revisions of a file, or your current file and a previous revision?
    - How do you see the history of revisions to a file?
    - How do you handle binary files (Visio docs, for instance, or compiler environments)?
    - How do you merge files changed at the "same time"?
    - How do you undo (revert or reset) a commit?
    Tagging, branching, releases, baselines
    - How do you 'mark', 'tag' or 'release' a particular set of revisions for a particular set of files so you can always pull that one later?
    - How do you pull a particular 'release'?
    - How do you branch?
    - How do you merge branches?
    - How do you resolve conflicts and complete the merge?
    - How do you merge parts of one branch into another branch?
    - What is rebasing?
    - How do I track remote branches?
    - How can I create a branch on a remote repository?
    Other
    - Describe and link to a good GUI, IDE plugin, etc. that makes git a non-command-line resource, but please list its limitations as well as its strengths.
      - msysgit - cross-platform, included with git
      - gitk - cross-platform history viewer, included with git
      - gitnub - OS X
      - gitx - OS X history viewer
      - smartgit - cross-platform, commercial, beta
      - tig - console GUI for Linux
      - qgit - GUI for Windows, Linux
    - Any other common tasks a beginner should know? (git status tells you what you just did, what branch you have, and other useful information.)
    - How do I work effectively with a subversion repository set as my source control source?
    Other git beginner's references
    - git guide
    - git book
    - git magic
    - gitcasts
    - github guides
    - git tutorial
    - Progit - book by Scott Chacon
    - Git - SVN Crash Course
    - Delving into git
    - Understanding git conceptually
    I will go through the entries from time to time and 'tidy' them up so they have a consistent look/feel and it's easy to scan the list - feel free to follow a simple "header - brief explanation - list of instructions - gotchas and extra info" template. I'll also link to the entries from the bullet list above so it's easy to find them later.
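    As a taste of the "working with the code" answers the question asks for, here is a minimal command-line sketch of the everyday cycle (the repository URL and file names are hypothetical; the flags are standard git):

        git clone git://example.com/project.git   # get the code the first time
        cd project
        git pull                             # get the latest code afterwards
        git status                           # what's uncommitted, current branch
        git add changed_file.c               # stage a change
        git commit -m "Fix typo"             # commit locally
        git log -- changed_file.c            # history of one file
        git diff HEAD~1 -- changed_file.c    # compare with the previous revision
        git tag -a v1.0 -m "Release 1.0"     # mark a release
        git checkout -b topic                # branch
        git checkout master && git merge topic   # merge (from the receiving branch)
        git push origin master               # publish to the 'main' repository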


  • What git branching models actually work - the final question

    - by UncleCJ
    In our company we have successfully deployed git, and we are currently using a simple trunk/release/hotfixes branching model. However, this has its problems, and I have some key points of confusion which it would be awesome to have the community answer here. Maybe my hopes for an Alexander stroke are too great; quite possibly I'll decompose this question into more manageable issues, but here's my first shot.
    Workflows / branching models - below are the three main descriptions of this I have seen, but they partially contradict each other or don't go far enough to sort out the subsequent issues we've run into (as described below). Thus our team so far defaults to not-so-great solutions. Are you doing something better?
    - gitworkflows(7) Manual Page
    - (nvie) A successful Git branching model
    - (reinh) A Git Workflow for Agile Teams
    Merging vs rebasing (tangled vs sequential history) - the advice on this is as confusing as it gets. Should one pull --rebase, or wait with merging back to the mainline until your task is finished? Personally I lean towards merging, since this preserves a visual illustration of on which base a task was started and finished, and I even prefer merge --no-ff for this purpose. It has other drawbacks, however. Also, many haven't realized the useful property of merging: that it isn't commutative (merging a topic branch into master does not mean merging master into the topic branch). (See the sketch after the list of related topics below for the two styles side by side.)
    I am looking for a natural workflow - sometimes mistakes happen because our procedures don't capture a specific situation with simple rules. For example, a fix needed for earlier releases should of course be based sufficiently downstream to be possible to merge upstream into all necessary branches (is the usage of these terms clear enough?). However, it happens that a fix makes it into master before the developer realizes it should have been placed further downstream, and if that is already pushed (even worse, merged, or something based on it), then the option remaining is cherry-picking, with its associated perils... What simple rules of this kind do you use? Also included here is the awkwardness of one topic branch necessarily excluding other topic branches (assuming they are branched from a common baseline). Developers don't want to finish a feature in order to start another one, feeling like the code they just wrote is not there anymore.
    How to avoid creating merge conflicts (due to cherry-pick)? What seems like a sure way to create a merge conflict is to cherry-pick between branches; can they then never be merged again? Would applying the same commit in revert (how to do this?) in either branch possibly solve this situation? This is one reason I do not dare to push for a largely merge-based workflow.
    How to decompose into topical branches? We realize that it would be awesome to assemble a finished integration from topic branches, but often the work of our developers is not clearly defined (sometimes as simple as "poking around"), and if some code has already gone into a "misc" topic, it cannot be taken out of there again, per the question above?
    How do you work with defining/approving/graduating/releasing your topic branches? Proper procedures like code review and graduating would of course be lovely, but we simply cannot keep things untangled enough to manage this - any suggestions? Integration branches - an illustration, please?
    Vote and comment as much as you'd like; I'll try to keep the issue page clear and informative enough. Thanks!
    Below is a list of related topics on Stack Overflow I have checked out:
    - What are some good strategies to allow deployed applications to be hotfixable?
    - Workflow description for git usage for in-house development
    - Git workflow for corporate Linux kernel development
    - How do you maintain development code and production code? (thanks for this PDF!)
    - git releases management
    - Git Cherry-pick vs Merge Workflow
    - How to cherry-pick multiple commits
    - How do you merge selective files with git-merge?
    - How to cherry pick a range of commits and merge into another branch
    - ReinH Git Workflow
    - git workflow for making modifications you’ll never push back to origin
    - Cherry-pick a merge
    - Proper Git workflow for combined OS and Private code?
    - Maintaining Project with Git
    - Why can't Git merge file changes with a modified parent/master?
    - Git branching / rebasing good practices
    - When will "git pull --rebase" get me in to trouble?
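    The sketch referred to above - the two history styles side by side (standard git commands; the branch names are hypothetical):

        # Merge-based: keep the topic's context, record an explicit merge commit
        git checkout master
        git merge --no-ff topic    # history shows where the topic started and ended

        # Rebase-based: replay topic commits onto the tip of master, linear history
        git checkout topic
        git rebase master
        git checkout master
        git merge topic            # now a fast-forward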


  • Is SHA-1 secure for password storage?

    - by Tgr
    Some people throw around remarks like "SHA-1 is broken" a lot, so I'm trying to understand what exactly that means. Let's assume I have a database of SHA-1 password hashes, and an attacker with a state-of-the-art SHA-1-breaking algorithm and a botnet with 100,000 machines gets access to it. (Having control over 100k home computers would mean they can do about 10^15 operations per second.) How much time would they need to
    - find out the password of any one user?
    - find out the password of a given user?
    - find out the password of all users?
    - find a way to log in as one of the users?
    - find a way to log in as a specific user?
    How does that change if the passwords are salted? Does the method of salting (prefix, postfix, both, or something more complicated like xor-ing) matter? Here is my current understanding, after some googling. Please correct me in the answers if I misunderstood something.
    - If there is no salt, a rainbow attack will immediately find all passwords (except extremely long ones).
    - If there is a sufficiently long random salt, the most effective way to find out the passwords is a brute force or dictionary attack. Neither collision nor preimage attacks are any help in finding out the actual password, so cryptographic attacks against SHA-1 are no help here. It doesn't even matter much what algorithm is used - one could even use MD5 or MD4 and the passwords would be just as safe (there is a slight difference because computing a SHA-1 hash is slower).
    - To evaluate how safe "just as safe" is, let's assume that a single SHA-1 run takes 1000 operations and passwords contain uppercase, lowercase and digits (that is, 60 characters). That means the attacker can test 10^15 * 60 * 60 * 24 / 1000 ~= 10^17 potential passwords a day. For a brute force attack, that would mean testing all passwords up to 9 characters in 3 hours, up to 10 characters in a week, up to 11 characters in a year. (It takes 60 times as much for every additional character.) A dictionary attack is much, much faster (even an attacker with a single computer could pull it off in hours), but only finds weak passwords.
    - To log in as a user, the attacker does not need to find out the exact password; it is enough to find a string that results in the same hash. This is called a first preimage attack. As far as I could find, there are no preimage attacks against SHA-1. (A brute-force attack would take 2^160 operations, which means our theoretical attacker would need 10^30 years to pull it off. Limits of theoretical possibility are around 2^60 operations, at which the attack would take a few years.) There are preimage attacks against reduced versions of SHA-1 with negligible effect (for the reduced SHA-1 which uses 44 steps instead of 80, attack time is down from 2^160 operations to 2^157). There are collision attacks against SHA-1 which are well within theoretical possibility (the best I found brings the time down from 2^80 to 2^52), but those are useless against password hashes, even without salting.
    In short, storing passwords with SHA-1 seems perfectly safe. Did I miss something?
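    For reference, a minimal sketch of the prefix salting discussed above, in Python (illustrative only - hashlib and os are standard library; by later standards a deliberately slow KDF such as bcrypt or PBKDF2 is preferred over a bare hash):

        import hashlib
        import os

        def hash_password(password, salt=None):
            # 16 random bytes of salt, stored alongside the digest (prefix salting).
            salt = salt or os.urandom(16)
            digest = hashlib.sha1(salt + password.encode('utf-8')).hexdigest()
            return salt, digest

        def verify(password, salt, expected_digest):
            return hash_password(password, salt)[1] == expected_digest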


  • Rx IObservable buffering to smooth out bursts of events

    - by Dan
    I have an Observable sequence that produces events in rapid bursts (i.e., five events one right after another, then a long delay, then another quick burst of events, etc.). I want to smooth out these bursts by inserting a short delay between events. Imagine the following diagram as an example:
    Raw:      --oooo--------------ooooo-----oo----------------ooo|
    Buffered: --o--o--o--o--------o--o--o--o--o--o--o---------o--o--o|
    My current approach is to generate a metronome-like timer via Observable.Interval() that signals when it's OK to pull another event from the raw stream. The problem is that I can't figure out how to then combine that timer with my raw unbuffered observable sequence. IObservable.Zip() is close to doing what I want, but it only works so long as the raw stream is producing events faster than the timer. As soon as there is a significant lull in the raw stream, the timer builds up a series of unwanted events that then immediately pair up with the next burst of events from the raw stream. Ideally, I want an IObservable extension method with the following function signature that produces the behavior I've outlined above. Now, come to my rescue, StackOverflow :)
    public static IObservable<T> Buffered(this IObservable<T> src, TimeSpan minDelay)
    P.S. I'm brand new to Rx, so my apologies if this is a trivially simple question...
    1. Simple yet flawed approach
    Here's my initial naive and simplistic solution that has quite a few problems:
    public static IObservable<T> Buffered<T>(this IObservable<T> source, TimeSpan minDelay)
    {
        Queue<T> q = new Queue<T>();
        source.Subscribe(x => q.Enqueue(x));
        return Observable.Interval(minDelay).Where(_ => q.Count > 0).Select(_ => q.Dequeue());
    }
    The first obvious problem with this is that the IDisposable returned by the inner subscription to the raw source is lost, and therefore the subscription can't be terminated. Calling Dispose on the IDisposable returned by this method kills the timer, but not the underlying raw event feed that is now needlessly filling the queue with nobody left to pull events from the queue. The second problem is that there's no way for exceptions or end-of-stream notifications to be propagated through from the raw event stream to the buffered stream - they are simply ignored when subscribing to the raw source. And last but not least, now I've got code that wakes up periodically regardless of whether there is actually any work to do, which I'd prefer to avoid in this wonderful new reactive world.
    2. Way overly complex approach
    To solve the problems encountered in my initial simplistic approach, I wrote a much more complicated function that behaves much like IObservable.Delay() (I used .NET Reflector to read that code and used it as the basis of my function). Unfortunately, a lot of the boilerplate logic such as AnonymousObservable is not publicly accessible outside the System.Reactive code, so I had to copy and paste a lot of code. This solution appears to work, but given its complexity, I'm less confident that it's bug-free. I just can't believe that there isn't a way to accomplish this using some combination of the standard Reactive extensions. I hate feeling like I'm needlessly reinventing the wheel, and the pattern I'm trying to build seems like a fairly standard one.
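    For what it's worth, one compact way to get this pacing with stock operators - offered as a sketch, assuming a System.Reactive version where these operators live in System.Reactive.Linq, and assuming "a minimum gap after each element" is the desired semantics. Unlike the queue approach above, it preserves errors, completion, and disposal:

        using System;
        using System.Reactive.Linq;

        public static class BufferedExtensions
        {
            public static IObservable<T> Buffered<T>(this IObservable<T> source, TimeSpan minDelay)
            {
                // Wrap each element in a tiny observable that emits the element
                // immediately and completes minDelay later; Concat then guarantees
                // at least minDelay between consecutive elements.
                return source
                    .Select(x => Observable.Empty<T>().Delay(minDelay).StartWith(x))
                    .Concat();
            }
        }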


  • php/dos : How do you parse a regedit export file?

    - by phill
    My objective is to look for Company key-value in the registry hive and then pull the corresponding Guid and other keys and values following it. So I figured i would run the regedit export command and then parse the file with php for the keys I need. So after running the dos batch command >regedit /E "output.txt" "HKLM\System....\Company1" The output textfile seems to be in some kind of UNICODE format which isn't regex friendly. I'm using php to parse the file and pull the keys. Here is the php code i'm using to parse the file <?php $regfile = "output.txt"; $handle = fopen ("c:\\\\" . $regfile,"r"); //echo "handle: " . $file . "<br>"; $row = 1; while ((($data = fgets($handle, 1024)) !== FALSE) ) { $num = count($data); echo "$num fields in line $row: \n"; $reg_section = $data; //$reg_section = "[HKEY_LOCAL_MACHINE\SOFTWARE\TECHNOLOGIES\MEDIUS\CONFIG MANAGER\SYSTEM\COMPANIES\RECORD11]"; $pattern = "/^(\[HKEY_LOCAL_MACHINE\\\SOFTWARE\\\TECHNOLOGIES\\\MEDIUS\\\CONFIG MANAGER\\\SYSTEM\\\COMPANIES\\\RECORD(\d+)\])$/"; if ( preg_match($pattern, $reg_section )) { echo "<font color=red>Found</font><br>"; } else { echo "not found<br>"; echo $data . "<br>"; } $row++; } //end while fclose($handle); ?> and the output looks like this.... 1 fields in line 1: not found ÿþW?i?n?d?o?w?s? ?R?e?g?i?s?t?r?y? ?E?d?i?t?o?r? ?V?e?r?s?i?o?n? ?5?.?0?0? ? 1 fields in line 2: not found 1 fields in line 3: not found [?H?K?E?Y??L?O?C?A?L??M?A?C?H?I?N?E?\?S?O?F?T?W?A?R?E?\?I?N?T?E?R?S?T?A?R? ?T?E?C?H?N?O?L?O?G?I?E?S?\?X?M?E?D?I?U?S?\?C?O?N?F?I?G? ?M?A?N?A?G?E?R?\?S?Y?S?T?E?M?\?C?O?M?P?A?N?I?E?S?]? ? 1 fields in line 4: not found "?N?e?x?t? ?R?e?c?o?r?d? ?I?D?"?=?"?4?1?"? ? 1 fields in line 5: not found Any ideas how to approach this? thanks in advance
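    The ÿþ at the start of the dump is a byte-order mark: regedit /E writes UTF-16LE, which is why every character in the output appears with a trailing ?. A hedged PHP sketch of one common fix - transcode the whole file to UTF-8 before running the regexes (it assumes the export really is UTF-16LE, the usual case):

        <?php
        $raw  = file_get_contents('c:\\output.txt');
        $text = iconv('UTF-16LE', 'UTF-8', $raw);   // now regex-friendly
        foreach (preg_split('/\r?\n/', $text) as $row => $line) {
            if (preg_match('/^\[HKEY_LOCAL_MACHINE\\\\.*\\\\RECORD(\d+)\]$/', $line, $m)) {
                echo "Found record {$m[1]} on line {$row}\n";
            }
        }
        ?>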


  • How to design a high-level application protocol for metadata syncing between devices and server?

    - by Jaanus
    I am looking for guidance on how to best think about designing a high-level application protocol to sync metadata between end-user devices and a server.
    My goal: the user can interact with the application data on any device, or on the web. The purpose of this protocol is to communicate changes made on one endpoint to other endpoints through the server, and to ensure all devices maintain a consistent picture of the application data. If the user makes changes on one device or on the web, the protocol will push data to the central repository, from where other devices can pull it.
    Some other design thoughts:
    - I call it "metadata syncing" because the payloads will be quite small, in the form of object IDs and small metadata about those IDs. When client endpoints retrieve new metadata over this protocol, they will fetch actual object data from an external source based on this metadata. Fetching the "real" object data is out of scope; I'm only talking about metadata syncing here.
    - Using HTTP for transport and JSON for the payload container. The question is basically about how to best design the JSON payload schema.
    - I want this to be easy to implement and maintain on the web and across desktop and mobile devices. The best approach feels to be simple timer- or event-based HTTP request/response without any persistent channels. Also, you should not need a PhD to read it, and I want my spec to fit on 2 pages, not 200.
    - Authentication and security are out of scope for this question: assume that the requests are secure and authenticated.
    - The goal is eventual consistency of data on devices; it is not entirely realtime. For example, a user can make changes on one device while being offline. When going online again, the user would perform a "sync" operation to push local changes and retrieve remote changes.
    Having said that, the protocol should support both of these modes of operation:
    - Starting from scratch on a device, it should be able to pull the whole metadata picture.
    - "Sync as you go": when looking at the data on two devices side by side and making changes, it should be easy to push those changes as short individual messages which the other device can receive near-realtime (subject to when it decides to contact the server for sync).
    As a concrete example, you can think of Dropbox (it is not what I'm working on, but it helps to understand the model): on a range of devices, the user can manage files and folders: move them around, create new ones, remove old ones, etc. In my context the "metadata" would be the file and folder structure, but not the actual file contents. Metadata fields would be something like file/folder name and time of modification (all devices should see the same time of modification). Another example is IMAP. I have not read the protocol, but my goals (minus actual message bodies) are the same.
    It feels like there are two grand approaches to how this is done:
    - Transactional messages. Each change in the system is expressed as a delta, and endpoints communicate with those deltas. Example: DVCS changesets.
    - REST: communicating the object graph as a whole or in part, without worrying so much about the individual atomic changes.
    What I would like in the answers:
    - Is there anything important I left out above? Constraints, goals?
    - What is some good background reading on this? (I realize this is what many computer science courses talk about at great length and detail... I am hoping to short-circuit it by looking at some crash course or nuggets.)
    - What are some good examples of such protocols that I could model after, or even use out of the box? (I mention Dropbox and IMAP above... I should probably read the IMAP RFC.)
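    To make the delta style concrete, here is a purely illustrative JSON payload; every field name is an invention for this sketch, not part of any existing spec:

        {
          "cursor": "2014-03-02T10:15:00Z",
          "changes": [
            { "op": "upsert",
              "id": "f81d4fae-7dec-11d0-a765-00a0c91e6bf6",
              "name": "Q3 report.pdf",
              "parent": "folder-17",
              "modified": "2014-03-02T09:58:12Z" },
            { "op": "delete",
              "id": "c56a4180-65aa-42ec-a945-5fd21dec0538" }
          ]
        }

    A client would POST its local changes in this shape and GET remote changes newer than its stored cursor, which covers both the full initial pull (empty cursor) and the incremental "sync as you go" mode.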


  • Trouble with Code First DatabaseGenerated Composite Primary Key

    - by Nick Fleetwood
    This is a tad complicated, and please, I know all the arguments against natural PK's, so we don't need to have that discussion. using VS2012/MVC4/C#/CodeFirst So, the PK is based on the date and a corresponding digit together. So, a few rows created today would be like this: 20131019 1 20131019 2 And one created tomorrow: 20131020 1 This has to be automatically generated using C# or as a trigger or whatever. The user wouldn't input this. I did come up with a solution, but I'm having problems with it, and I'm a little stuck, hence the question. So, I have a model: public class MainOne { //[Key] //public int ID { get; set; } [Key][Column(Order=1)] [DatabaseGenerated(DatabaseGeneratedOption.None)] public string DocketDate { get; set; } [Key][Column(Order=2)] [DatabaseGenerated(DatabaseGeneratedOption.None)] public string DocketNumber { get; set; } [StringLength(3, ErrorMessage = "Corp Code must be three letters")] public string CorpCode { get; set; } [StringLength(4, ErrorMessage = "Corp Code must be four letters")] public string DocketStatus { get; set; } } After I finish the model, I create a new controller and views using VS2012 scaffolding. Then, what I'm doing is debugging to create the database, then adding the following instead of trigger after Code First creates the DB [I don't know if this is correct procedure]: CREATE TRIGGER AutoIncrement_Trigger ON [dbo].[MainOnes] instead OF INSERT AS BEGIN DECLARE @number INT SELECT @number=COUNT(*) FROM [dbo].[MainOnes] WHERE [DocketDate] = CONVERT(DATE, GETDATE()) INSERT INTO [dbo].[MainOnes] (DocketDate,DocketNumber,CorpCode,DocketStatus) SELECT (CONVERT(DATE, GETDATE ())),(@number+1),inserted.CorpCode,inserted.DocketStatus FROM inserted END And when I try to create a record, this is the error I'm getting: The changes to the database were committed successfully, but an error occurred while updating the object context. The ObjectContext might be in an inconsistent state. Inner exception message: The object state cannot be changed. This exception may result from one or more of the primary key properties being set to null. Non-Added objects cannot have null primary key values. See inner exception for details. Now, what's interesting to me, is that after I stop debugging and I start again, everything is perfect. The trigger fired perfectly, so the composite PK is unique and perfect, and the data in other columns is intact. My guess is that EF is confused by the fact that there is seemingly no value for the PK until AFTER an insert command is given. Also, appearing to back this theory, is that when I try to edit on of the rows, in debug, I get the following error: The number of primary key values passed must match number of primary key values defined on the entity. Same error occurs if I try to pull the 'Details' or 'Delete' function. Any solution or ideas on how to pull this off? I'm pretty open to anything, even creating a hidden int PK. But it would seem redundant. EDIT 21OCT13 [HttpPost] public ActionResult Create(MainOne mainone) { if (ModelState.IsValid) { var countId = db.MainOnes.Count(d => d.DocketDate == mainone.DocketNumber); //assuming that the date field already has a value mainone.DocketNumber = countId + 1; //Cannot implicitly convert type int to string db.MainOnes.Add(mainone); db.SaveChanges(); return RedirectToAction("Index"); } return View(mainone); } EDIT 21OCT2013 FINAL CODE SOLUTION For anyone like me, who is constantly searching for clear and complete solutions. 
    if (ModelState.IsValid)
    {
        String udate = DateTime.UtcNow.ToString("yyyy-MM-dd");
        mainone.DocketDate = udate;
        // Count existing rows for today's date; the next docket number is count + 1.
        var ddate = db.MainOnes.Count(d => d.DocketDate == mainone.DocketDate);
        mainone.DocketNumber = (ddate + 1).ToString();   // DocketNumber is a string in the model
        db.MainOnes.Add(mainone);
        db.SaveChanges();
        return RedirectToAction("Index");
    }


  • No data when attempting to get JSONP data from cross domain PHP script

    - by Alex
    I am trying to pull latitude and longitude values from another server on a different domain using a singe id string. I am not very familiar with JQuery, but it seems to be the easiest way to go about getting around the same origin problem. I am unable to use iframes and I cannot install PHP on the server running this javascript, which is forcing my hand on this. My queries appear to be going through properly, but I am not getting any results back. I was hoping someone here might have an idea that could help, seeing as I probably wouldn't recognize most obvious errors here. My javascript function is: var surl = "http://...omitted.../pull.php"; var idnum = 5a; //in practice this is defined above alert("BEFORE"); $.ajax({ url: surl, data: {id: idnum}, dataType: "jsonp", jsonp : "callback", jsonp: "jsonpcallback", success: function (rdata) { alert(rdata.lat + ", " + rdata.lon); } }); alert("BETWEEN"); function jsonpcallback(rtndata) { alert("CALLED"); alert(rtndata.lat + ", " + rtndata.lon); } alert("AFTER"); When my javascript is run, the BEFORE, BETWEEN and AFTER alerts are displayed. The CALLED and other jsonpcallback alerts are not shown. Is there another way to tell if the jsoncallback function has been called? Below is the PHP code I have running on the second server. I added the count table to my database just so that I can tell when this script is run. Every time I call the javascript, count has had an extra item inserted and the id number is correct. <?php header("content-type: application/json"); if (isset($_GET['id']) || isset($_POST['id'])){ $db_handle = mysql_connect($server, $username, $password); if (!$db_handle) { die('Could not connect: ' . mysql_error()); } $db_found = mysql_select_db($database, $db_handle); if ($db_found) { if (isset($_POST['id'])){ $SQL = sprintf("SELECT * FROM %s WHERE loc_id='%s'", $loctable, mysql_real_escape_string($_POST['id'])); } if (isset($_GET['id'])){ $SQL = sprintf("SELECT * FROM %s WHERE loc_id='%s'", $loctable, mysql_real_escape_string($_GET['id'])); } $result = mysql_query($SQL, $db_handle); $db_field = mysql_fetch_assoc($result); $rtnjsonobj -> lat = $db_field["lat"]; $rtnjsonobj -> lon = $db_field["lon"]; if (isset($_POST['id'])){ echo $_POST['jsonpcallback']. '('. json_encode($rtnjsonobj) . ')'; } if (isset($_GET['id'])){ echo $_GET['jsonpcallback']. '('. json_encode($rtnjsonobj) . ')'; } $SQL = sprintf("INSERT INTO count (bullshit) VALUES ('%s')", $_GET['id']); $result = mysql_query($SQL, $db_handle); $db_field = mysql_fetch_assoc($result); } mysql_close($db_handle); } else { $rtnjsonobj -> lat = 404; $rtnjsonobj -> lon = 404; echo $_GET['jsonpcallback']. '('. json_encode($rtnjsonobj) . ')'; }?> I am not entirely sure if the jsonp returned by this PHP is correct. When I go directly to the PHP script without including any parameters, I do get the following. ({"lat":404,"lon":404}) The callback function is not included, but that much can be expected when it isn't included in the original call. Does anyone have any idea what might be going wrong here? Thanks in advance!
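    One likely culprit, offered as a hedged guess: the options object passes jsonp twice (the second silently overwrites the first), idnum = 5a is not a valid JavaScript literal, and in jQuery the jsonp option names the query-string parameter while jsonpCallback names the function. A cleaned-up sketch, assuming the PHP side keeps reading $_GET['jsonpcallback'] (the URL below is a placeholder):

        var surl = "http://example.com/pull.php";  // placeholder URL
        var idnum = "5a";                          // quoted: the id is a string
        $.ajax({
            url: surl,
            data: { id: idnum },
            dataType: "jsonp",
            jsonp: "jsonpcallback",   // jQuery sends &jsonpcallback=<generated name>
            success: function (rdata) {
                alert(rdata.lat + ", " + rdata.lon);
            }
        });

    With this shape, jQuery generates and wires up the callback itself, so the hand-written jsonpcallback function (and its alerts) is no longer needed.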


  • PHP - Nested Looping Trouble

    - by Jeremy A
    I have an HTML table that I need to populate with the values grabbed from a select statement. The table cols are populated by an array (0,1,2,3). Each of the results from the query will contain a row 'GATE' with a value of (0-3), but there will not be any predictability to those results. One query could pull 4 rows with 'GATE' values of 0,1,2,3, the next query could pull two rows with values of 1 & 2, or 1 & 3. I need to be able to populate this HTML table with values that correspond. So HTML COL 0 would have the TTL_NET_SALES of the db row which also has the GATE value of 0. <?php $gate = array(0,1,2,3); $gate_n = count($gate); /* Database = my_table.ID my_table.TT_NET_SALES my_table.GATE my_table.LOCKED */ $locked = "SELECT * FROM my_table WHERE locked = true"; $locked_n = count($locked); /* EXAMPLE RETURN Row 1: my_table['ID'] = 1 my_table['TTL_NET_SALES'] = 1000 my_table['GATE'] = 1; Row 2: my_table['ID'] = 2 my_table['TTL_NET_SALES'] = 1500 my_table['GATE'] = 3; */ print "<table border='1'>"; print "<tr><td>0</td><td>1</td><td>2</td><td>3</td>"; print "<tr>"; for ($i=0; $i<$locked_n; $i++) { for ($g=0; $g<$gate_n; $g++) { if (!is_null($locked['TTL_NET_SALES'][$i]) && $locked['GATE'][$i] == $gate[$g]) { print "<td>$".$locked['TTL_NET_SALES'][$i]."</td>"; } else { print "<td>-</td>"; } } } print "</tr>"; print "</table>"; /* What I want to see: <table border='1'> <tr> <td>0</td> <td>1</td> <td>2</td> <td>3</td> </tr> <tr> <td>-</td> <td>1000</td> <td>-</td> <td>1500</td> </tr> </table> */ ?>
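    A hedged sketch of one way to line the rows up with the gate columns, sticking with the mysql_* API used in the post and assuming $result holds the executed SELECT: index the fetched rows by GATE first, then emit exactly one cell per gate:

        <?php
        // Hypothetical: $result = mysql_query("SELECT * FROM my_table WHERE locked = true");
        $sales = array();
        while ($row = mysql_fetch_assoc($result)) {
            $sales[(int)$row['GATE']] = $row['TTL_NET_SALES'];
        }

        print "<tr>";
        foreach (array(0, 1, 2, 3) as $g) {
            print isset($sales[$g]) ? "<td>$" . $sales[$g] . "</td>" : "<td>-</td>";
        }
        print "</tr>";
        ?>

    This produces one <td> per gate column, with '-' wherever no returned row carried that GATE value.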


  • SVN: Working with branches using the same working copy

    - by uXuf
    We've just moved to SVN from CVS. We have a small team, everyone checks in code on the trunk, and we have never used branches for development. We each have directories on a remote dev server with the codebase checked out. Each developer works in their own sandbox, with an associated URL to pull up the app in a browser (something like the setup here: Trade-offs of local vs remote development workflows for a web development team). I've decided that for my current project I'll use a branch, because it would span multiple releases. I've already cut a branch, but I am using the same directory as the one originally checked out (i.e., for the trunk). Since it's the same directory (or working copy) for both the branch and the trunk, if, for example, a bug pops up in the app, I switch to the trunk, commit the change there, and then switch back to my branch for my project development. My questions are:
    - Is this a sane way to work with branches?
    - Are there any pitfalls that I need to be aware of?
    - What would be the optimal way to work with branches if separate working copies are out of the question?
    I haven't had issues yet, as I have just started doing it this way, but all the tutorials/books/blog posts I have seen about branching with SVN imply working with different working copies (or perhaps I haven't come across an explanation of mixed working copies in plain English). I just don't want to be sorry three months down the road when it's time to integrate the branch back into the trunk.
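    For reference, the single-working-copy workflow described above leans on svn switch. A minimal sketch, assuming a standard trunk/branches layout and an SVN 1.6+ client for the ^/ repository-relative shorthand (the branch name is made up):

        svn copy ^/trunk ^/branches/my-feature -m "Branch for multi-release work"
        svn switch ^/branches/my-feature   # working copy now tracks the branch
        # ...a trunk bug shows up...
        svn switch ^/trunk
        svn commit -m "Fix bug on trunk"
        svn switch ^/branches/my-feature   # back to feature work

    Running svn status before each switch helps avoid carrying uncommitted changes across; switching with local modifications is where mixed working copies usually bite.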


  • Excel-based Performance Reviews transformed into Web Application for Performance Management

    - by Webgui
    HR TMS provides enterprise talent management solutions for healthcare, retail and corporate customers, focusing on performance management, compensation management and succession planning. As the competency of nurses and other healthcare workers is critical, the government, via the Joint Commission (JCAHO), tightly monitors their performance. On a regular basis, accredited healthcare organizations are required to review employee performance using a complex set of position-dependent job descriptions and competencies. Middlesex Hospital managed their performance reviews for 2500 employees manually with Excel spreadsheets. This was a labor-intensive process that proved to be error prone and difficult to manage. Reviews were not always where they belonged, and the job descriptions and competencies for healthcare workers were difficult to keep accurate and up to date. As a result, when the Joint Commission visited and requested to see specific review documentation, there was intense stress. Middlesex Hospital needed to automate their review process, pull in the position information from those spreadsheets, and be able to deliver reviews online. Users needed online access to those reviews from a standard browser. Although the manual system had its issues, it did have the advantage of being very comprehensive and familiar to users. The decision was made to provide a web-based solution that leveraged the look and feel of those spreadsheets in order to ensure user acceptance of the system and minimize the training needed. Read the full article here >


  • Parsing flat files using SSIS : SSIS Nugget

    - by jamiet
    Often when using SQL Server Integration Services (SSIS) you will find there is more than one way of accomplishing a task, and that the most obvious method might not be the optimal one. In the video below I demonstrate this by way of an experiment using SSIS's Flat File Source component; I show different ways that you can pull data from a flat file into the SSIS dataflow, and also how the nature of the data itself can influence your choice as to how this task should be accomplished. If you are having trouble viewing the video in your blog reader then head to http://sqlblog.com/blogs/jamie_thomson/archive/2010/03/25/parsing-flat-files-using-ssis-ssis-nugget.aspx to see it as it is hosted on my blog! The main point I want to get across in this video is that a little bit of creative thinking when building your dataflows can sometimes be very beneficial for performance; quite often building a solution that isn't the most obvious might actually turn out to be the best one. You'll notice, if you have watched the video, that my editing skills weren't quite up to snuff and I cut off the final few words; however, all I was saying was that if you have any feedback on this video then I would love to hear it, either via email or preferably the comments section below. I hope this turns out to be useful to some of you. @Jamiet P.S. Incidentally, the parsing that we do using SSIS expressions in the video would be much easier if we had a TOKENISE function in SSIS's expression language, and I have asked for the introduction of such a function on Connect at [SSIS] TOKEN(string, tokeniser_string, occurence) function. Feel free to go and vote that up if you think this feature would be useful!


  • Learn Lean Software Development and Kanban Systems

    - by Ben Griswold
    I did an in-house presentation on Lean Software Development (LSD) and Kanban Systems this week. Beyond what I had previously learned from various podcasts, I knew little about either topic prior to compiling my slide deck. In the process of building my presentation, I learned a ton. I found the concepts weren't very difficult to grok; however, I found little detailed information was available online. Hence this post, which is merely a list of valuable resources:
    - Principles of Lean Thinking, Mary Poppendieck
    - Lean Software Development, Mary Poppendieck
    - Lean Programming, Mary Poppendieck
    - Lean Software Development, Wikipedia
    - Implementing Lean Software Development: From Concept to Cash, Poppendieck
    - Lean Software Development Overview, Darrell Norton
    - Lean Thinking: Banish Waste and Create Wealth in Your Corporation
    - The Goal: A Process of Ongoing Improvement
    - The Toyota Way
    - Extreme Toyota: Radical Contradictions That Drive Success at the World's Best Manufacturer
    - Elegant Code Cast 17 – David Laribee on Lean / Kanban
    - Herding Code Episode 42: Scott Bellware on BDD and Lean Development
    - Seven Principles of Lean Software Development, Przemysław Bielicki
    - Kanban Boards for Agile Project Management with Zen Author Nate Kohari
    - Herding Code 55: Nate Kohari brings Your Moment of Zen
    - James Shore on Kanban Systems
    - Agile Zen Product Site
    - A Leaner Form of Agile, David Laribee
    - Kanban as Alternative Agile Implementation, Mark Levison
    - Lean Software Development, Dr. Christoph Steindl
    - Glossary of Lean Manufacturing Terms
    - Why Pull? Why Kanban?, Corey Ladas


  • I'm Not Bi-Polar, I'm Bi-Winning

    - by David Dorf
    On March 1st, Charlie Sheen joined Twitter and was able to amass 1M followers in 25 hours and 17 minutes, setting an official world record. So why does it take your brand so long to collect followers? Easy: your brand isn't a train wreck. Wouldn't it be great if your customers were chatting about your products as much as they're talking about Charlie #winning? There are a couple of things retailers can do. First, you can offer check-ins to your customers, which can occasionally prompt an "ooh, what are you buying there?" in the social network. Another method is to allow customers to "like" particular products on your Web site. Companies like Wet Seal excel at that. We've been experimenting with automatic posting from the POS, assuming a customer has opted in. When you buy something in a store, the POS can automatically post "Dave just bought something at Wet Seal" to Facebook, Twitter, and Foursquare simultaneously. We stopped short of mentioning the specific product so we don't pull a Beacon. The idea is the same: get the conversation started. Give customers a virtual water cooler where they can discuss products and influence buying decisions. The guys over at ShopSocially have done something very similar. On the Facebook page for Cafe Press, customers can claim purchases, effectively bragging on their walls. Each posting goes through the Facebook newsfeed and gets friends interested. They are seeing over 1,000 purchases being shared daily, and that's generating over 300,000 brand impressions. Sounds like a winning idea.


  • Aggregate SharePoint Event/Items with Exchange appointments into your Calendar view using Calendar O

    - by eJugnoo
    In continuation of my previous post about using Calendar Overlay in the new SharePoint 2010 when you have another calendar view in any other SharePoint list: the other overlay option we have is Exchange. You can overlay the current (logged-in) user's personal calendar from Exchange onto an existing SharePoint calendar, in any list, by using the new Overlay feature. Here is an example:
    Yes, you have to point to your OWA and Exchange web service URLs. It can also go and find your web service URL when you click Find. In my case, it converted the machine name into the FQDN. That was smart... I had an initial configuration issue: my test user (Administrator!) didn't have a corresponding Exchange e-mail in the SharePoint profile. So you have to ensure that your profiles are in sync with AD/Exchange for e-mail. It picks up the current user's e-mail from the profile to pull data from the Exchange calendar.
    My calendar in OWA... The same calendar in Outlook 2010...
    I think the new Calendar Overlay feature fills a great void. Users can now view SharePoint information within the context of their personal calendar. Which is simply great! Enjoy the new SharePoint 2010. --Sharad


  • Verbosity Isn’t Always a Bad Thing

    - by PSteele
    There was a message posted to the Rhino.Mocks forums yesterday about verifying a single parameter of a method that accepted 5 parameters. The code looked like this:
    [TestMethod]
    public void ShouldCallTheAvanceServiceWithTheAValidGuid()
    {
        _sut.Send(_sampleInput);
        _avanceInterface.AssertWasCalled(x => x.SendData(
            Arg<Guid>.Is.Equal(Guid.Empty),
            Arg<string>.Is.Anything,
            Arg<string>.Is.Anything,
            Arg<string>.Is.Anything,
            Arg<string>.Is.Anything));
    }
    Not the prettiest code, but it does work. I was going to reply that he could use the "GetArgumentsForCallsMadeOn" method to pull out an array that would contain all of the arguments. A quick check of "args[0]" would be all that he needed. But then Tim Barcz replied with the following:
    Just to help allay your fears a bit...this verbosity isn't always a bad thing. When I read the code, based on the syntax you have used I know that for this particular test no parameters matter except the first...extremely useful in my opinion.
    An excellent point! We need to make sure our unit tests are as clear as our code.
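    For completeness, a sketch of the GetArgumentsForCallsMadeOn alternative mentioned above (hedged: written from memory of the Rhino.Mocks extension method, with the stub names reused from the post):

        IList<object[]> calls = _avanceInterface.GetArgumentsForCallsMadeOn(
            x => x.SendData(Arg<Guid>.Is.Anything, Arg<string>.Is.Anything,
                            Arg<string>.Is.Anything, Arg<string>.Is.Anything,
                            Arg<string>.Is.Anything));

        // First recorded call, first argument: the Guid under test.
        Assert.AreEqual(Guid.Empty, (Guid)calls[0][0]);

    It works, but as Tim's comment suggests, it also hides which parameters the test actually cares about.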


  • Apple’s Sep 10th event confirmed. iPhone 5S and low cost iPhone 5C launch is expected

    - by Gopinath
    The much-rumored Apple event on September 10th is confirmed. Apple sent official event invitations to media houses and popular bloggers across the globe with the title "This should brighten your day". For the past couple of months there has been a lot of speculation about the next-generation iPhone. Media and bloggers are dubbing it the iPhone 5S, rumored to have a fingerprint sensor for biometric authentication, a 12- or 13-megapixel camera with dual-LED flash, and a gold-colored variant. Another speculated surprise Apple may pull out is a low-cost variant of the iPhone, called the iPhone 5C. In order to fight Android penetration, Apple is speculated to announce a plastic iPhone in multiple bold colors, similar to Nokia's phones. The new iPhones will be running iOS 7, a new flat UI that is drastically different from previous versions. iOS 7 has been in beta for several months, and it heavily borrowed user interface cues from Microsoft's Windows 8 operating system. Whatever Apple introduces on September 10th, gadget freaks and investors are eagerly waiting to see if Apple can continue innovating after Steve Jobs. This is its biggest launch since 2011.


  • Formating Columns in Excel created by af:exportCollectionActionListener

    - by Duncan Mills
    The af:exportCollectionActionListener behavior in ADF Faces Rich Client provides a very simple way of quickly dumping out the contents, or the selected rows, of a table or treeTable to Excel. However, that simplicity comes at a price, as it is pretty much left up to Excel how to format the data. A common use case where you have a problem is that of ID columns, which are often long numerics. You probably want to represent this data as a string; Excel, however, will probably have other ideas and render it as an exponent - not what you intended. In earlier releases of the framework you could sort of work around this by taking advantage of a bug which would allow you to surround the outputText in question with invisible outputText components that provided formatting hints to Excel. Something like this:
    <af:column headerText="Some wide label">
      <af:panelGroupLayout layout="horizontal">
        <af:outputText value="=TEXT(" visible="false"/>
        <af:outputText value="#{row.bigNumberValue}" rendered="true"/>
        <af:outputText value=",0)" visible="false"/>
      </af:panelGroupLayout>
    </af:column>
    However, this bug was fixed, and so it can no longer be used as a trick. So, if you really need control over the formatting, there are several alternatives. First, the more powerful ADF Desktop Integration (ADFdi) package, which allows you to build fully transactional spreadsheets that "pull" the data and can update it. This gives you all the control you might need over formatting, but it does need specific Excel add-ins on the client to work. For more information about ADFdi have a look at this tutorial on OTN. Or you can of course look at BI Publisher or Apache POI if you're happy with output-only spreadsheets.


  • Should we enforce code style in our large codebase?

    - by eighttrackmind
    By "code style" I mean two things:
    1. Style, eg.:
       // bad
       if(foo){ ... }
       // good
       if (foo) { ... }
    2. Conventions and idiomaticity, where two ways of writing the same thing are functionally equivalent, but one is more idiomatic, eg.:
       // bad
       if (fooLib.equals(a, b)) { ... }
       // good
       if (a == b) { ... }
    I think it makes sense to use an auto-formatter to enforce #1 automatically. So my question is specifically about #2. I like to break things down into pros and cons; here's what I've come up with so far:
    Pros:
    - Used by many large codebases (eg. Google, jQuery)
    - Helps make it a bit easier to work on new areas of the codebase
    - Helps make code more portable (this is not necessarily true)
    - Code style is automatic once you get used to it
    - Makes it easier to fast-decline pull requests
    Cons:
    - Takes engineers' and code reviewers' time away from more important things (like developing features)
    - Code should ideally be rewritten every 2-3 years anyway, so it's more important to focus on getting the architecture right, and achieving high test coverage
    - Adds strain to code reviews (eg. "don't do it this way, I like this other way better")
    - Even if I've been using a code style for a while, I still sometimes have to pause and think about how to write a line better
    - Having an enforced, uniform code style makes it hard to experiment with potentially better styles
    - Maintaining a style guide takes a lot of incremental effort
    - Engineers rarely read through the style guide; more often, it's cited in code reviews
    And as a secondary question: we also have many smaller repositories - should the same code style be enforced there?


  • July, the 31 Days of SQL Server DMO’s - Intro

    - by Tamarick Hill
    DMOs burst onto the SQL Server scene in 2005, and when they did, they unlocked a wealth of information. I've become a major fan of DMOs, as they tend to simplify my troubleshooting as well as provide me with valuable information about what is going on within the SQL Server engine. I would recommend that those of you who are not familiar with DMOs take the time to really learn more about them. For those of you who may not be familiar with DMOs: for the month of July, I will be writing about one DMO per day. Don't get me wrong, I'm no DMO expert or anything like that, but I've worked with them enough to feel that I can give you some good information about DMOs to help you get started with using them. During these blog sessions, I will not be providing you with any complicated queries to solve all of the SQL Server problems you may or may not have. I will simply be introducing you to various DMOs and illustrating what type of information they provide. After you learn more about these individually, you will be able to join whatever DMOs you need to pull back the information you are seeking. I hope that you all benefit in some form or fashion from my next 31 DMO postings!!! Enjoy!
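    As a taste of the flavor of queries to come, here is a small example joining two of the execution-related DMVs (standard objects in SQL Server 2005 and later; the query itself is just an illustration, not taken from the series):

        -- Who is connected right now, and what are they waiting on?
        SELECT s.session_id, s.login_name, r.status, r.wait_type
        FROM sys.dm_exec_sessions AS s
        LEFT JOIN sys.dm_exec_requests AS r
            ON r.session_id = s.session_id
        WHERE s.is_user_process = 1;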

