Search Results

Search found 22569 results on 903 pages for 'win32 process'.

  • How should bug tracking and help tickets integrate?

    - by Max Schmeling
    I have a little experience with bug tracking systems such as FogBugz, where help tickets and issues are (or can be) bugs, and I have some experience using a bug tracking system internally, completely separate from a help center system.

    My question is: in a company with an existing (home-grown) help center system where replacing it is not an option, how should a bug tracking system (probably Mantis) be integrated into the process?

    Right now help tickets get put in for issues, questions, etc., and they get assigned to the appropriate person (PC Tech, Help Desk staff, or, if it's an application issue they can't solve in the help desk, a developer). A user can put a request for small modifications or fixes to an application in a help ticket, and the developer it gets assigned to will make the change at some point, apply their time to that ticket, and then close the ticket when it goes to production.

    We don't currently have a bug tracking system, so I'm looking into the best way to integrate one. Should we just copy a help ticket into the bug tracking system if it's a bug (or issue or feature request) and then close the ticket if it's not an emergency fix? We probably don't want to expose the bug tracking system to anyone else, as they wouldn't know what to put in the help center system and what to put in the bug tracker... right?

    Any thoughts? Suggestions? Tips? Advice? To-dos? Not to-dos? etc...

    Read the article

  • Media Kind in iTunes COM for Windows SDK

    - by Joel Verhagen
    I recently found out about the awesomeness of the iTunes COM for Windows SDK. I am using Python with win32com to talk to my iTunes library. Needless to say, my head is in the process of exploding. This API rocks.

    I have one issue though: how do I access the Media Kind attribute of a track? I looked through the help file provided in the SDK and saw no sign of it. If you go into iTunes, you can modify a track's media kind. That way, if you have an audiobook that is showing up in your music library, you can set its Media Kind to Audiobook and it will appear in the Books section of iTunes. Pretty nifty. The reason I ask is because I have a whole crap load of audiobooks that are showing up in my LibraryPlaylist. Here is my code thus far:

        import win32com.client

        iTunes = win32com.client.gencache.EnsureDispatch('iTunes.Application')
        track = win32com.client.CastTo(iTunes.LibraryPlaylist.Tracks.Item(1), 'IITFileOrCDTrack')
        print track.Artist, '-', track.Name
        print
        print 'Is this track an audiobook?'
        print 'How the hell should I know?'

    Thanks in advance.
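
    A workaround sketch (not from the original post): the COM type library does not obviously expose a Media Kind property, so this falls back on the track's Genre string as a heuristic; the 'Audiobook' genre value is an assumption about how the library happens to be tagged.

        # Hedged sketch: flag likely audiobooks by Genre, since Media Kind
        # isn't visible through IITFileOrCDTrack. The 'audiobook' genre
        # string is an assumption about the library's tagging.
        import win32com.client

        iTunes = win32com.client.gencache.EnsureDispatch('iTunes.Application')
        tracks = iTunes.LibraryPlaylist.Tracks
        for i in range(1, tracks.Count + 1):
            track = win32com.client.CastTo(tracks.Item(i), 'IITFileOrCDTrack')
            if (track.Genre or '').lower() == 'audiobook':
                print track.Artist, '-', track.Name, '(probably an audiobook)'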

    Read the article

  • jQuery does not return a JSON object to Firebug when JSON is generated by PHP

    - by Keyslinger
    The contents of test.json are:

        {"foo": "The quick brown fox jumps over the lazy dog.","bar": "ABCDEFG","baz": [52, 97]}

    When I use the following jQuery.ajax() call to process the static JSON inside test.json,

        $.ajax({
            url: 'test.json',
            dataType: 'json',
            data: '',
            success: function(data) {
                $('.result').html('<p>' + data.foo + '</p>' + '<p>' + data.baz[1] + '</p>');
            }
        });

    I get a JSON object that I can browse in Firebug. However, when I point the same ajax call at this PHP script instead:

        <?php
        $arrCoords = array('foo'=>'The quick brown fox jumps over the lazy dog.','bar'=>'ABCDEFG','baz'=>array(52,97));
        echo json_encode($arrCoords);
        ?>

    which prints this identical JSON object:

        {"foo":"The quick brown fox jumps over the lazy dog.","bar":"ABCDEFG","baz":[52,97]}

    I get the proper output in the browser, but Firebug reveals only HTML. There is no JSON tab present when I expand the GET request in Firebug.
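
    A likely explanation (a sketch of the usual fix, not confirmed by the post): Firebug decides whether to show a JSON tab from the response's Content-Type header, and PHP defaults to text/html. Declaring JSON explicitly before emitting the body should restore the tab:

        <?php
        // Send the JSON content type before any output so Firebug
        // treats the response as JSON rather than HTML.
        header('Content-Type: application/json');
        $arrCoords = array('foo'=>'The quick brown fox jumps over the lazy dog.','bar'=>'ABCDEFG','baz'=>array(52,97));
        echo json_encode($arrCoords);
        ?>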

    Read the article

  • python-social-auth AuthCanceled exception

    - by vero4ka
    I'm using python-social-auth in my Django application for authentication via Facebook. But when a user tries to log in and then clicks the "Cancel" button after being redirected to the Facebook app page, the following exception appears:

        ERROR 2014-01-03 15:32:15,308 base :: Internal Server Error: /complete/facebook/
        Traceback (most recent call last):
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 114, in get_response
            response = wrapped_callback(request, *callback_args, **callback_kwargs)
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/django/views/decorators/csrf.py", line 57, in wrapped_view
            return view_func(*args, **kwargs)
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/apps/django_app/utils.py", line 45, in wrapper
            return func(request, backend, *args, **kwargs)
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/apps/django_app/views.py", line 21, in complete
            redirect_name=REDIRECT_FIELD_NAME, *args, **kwargs)
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/actions.py", line 54, in do_complete
            *args, **kwargs)
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/strategies/base.py", line 62, in complete
            return self.backend.auth_complete(*args, **kwargs)
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/backends/facebook.py", line 63, in auth_complete
            self.process_error(self.data)
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/backends/facebook.py", line 56, in process_error
            super(FacebookOAuth2, self).process_error(data)
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/backends/oauth.py", line 312, in process_error
            raise AuthCanceled(self, data.get('error_description', ''))
        AuthCanceled: Authentication process canceled

    Is there any way to catch it in Django?
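
    A common approach (a sketch; the class name and redirect target are placeholders): catch the exception in a custom middleware's process_exception hook and redirect instead of letting it bubble up as a 500. The library also ships a SocialAuthExceptionMiddleware worth checking before rolling your own.

        # middleware.py -- hedged sketch: intercept the cancel and redirect
        from django.shortcuts import redirect
        from social.exceptions import AuthCanceled

        class HandleAuthCanceledMiddleware(object):
            def process_exception(self, request, exception):
                if isinstance(exception, AuthCanceled):
                    # The user pressed "Cancel" on the provider's page.
                    return redirect('/login/')
                return None  # let other exceptions propagate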

    Read the article

  • Why does my Delphi program's memory continue to grow?

    - by lkessler
    I am using Delphi 2009, which has the FastMM4 memory manager built into it. My program reads in and processes a large dataset. All memory is freed correctly whenever I clear the dataset or exit the program. It has no memory leaks at all.

    Using the CurrentMemoryUsage routine given in spenwarr's answer to http://stackoverflow.com/questions/437683/how-to-get-the-memory-used-by-a-delphi-program, I have displayed the memory used by FastMM4 during processing. What seems to be happening is that memory use grows after every load/release cycle, e.g.:

        1,456 KB used after starting my program with no dataset.
        218,455 KB used after loading a large dataset.
        71,994 KB after clearing the dataset completely. If I exit at this point (or at any point in this example), no memory leaks are reported.
        271,905 KB used after loading the same dataset again.
        125,443 KB after clearing the dataset completely.
        325,519 KB used after loading the same dataset again.
        179,059 KB after clearing the dataset completely.
        378,752 KB used after loading the same dataset again.

    It seems that my program's memory use is growing by about 53,400 KB upon each load/clear cycle. Task Manager confirms that this is actually happening.

    I have heard that FastMM4 does not always release all of the program's memory back to the operating system when objects are freed, so that it can keep some memory around for when it needs more. But this continual growth bothers me. Since no memory leaks are reported, I can't identify a problem. Does anyone know why this is happening, whether it is bad, and whether there is anything I can or should do about it?

    Read the article

  • PayPal sandbox account in .NET: IPN response INVALID

    - by Sam
    Hi, I am integrating PayPal into my site. I use a sandbox account (one buyer account and one seller account) and downloaded the code below from the PayPal site:

        string strSandbox = "https://www.sandbox.paypal.com/cgi-bin/webscr";
        HttpWebRequest req = (HttpWebRequest)WebRequest.Create(strSandbox);

        // Set values for the request back
        req.Method = "POST";
        req.ContentType = "application/x-www-form-urlencoded";
        byte[] param = Request.BinaryRead(HttpContext.Current.Request.ContentLength);
        string strRequest = Encoding.ASCII.GetString(param);
        strRequest += "&cmd=_notify-validate";
        req.ContentLength = strRequest.Length;

        // for proxy
        // WebProxy proxy = new WebProxy(new Uri("http://url:port#"));
        // req.Proxy = proxy;

        // Send the request to PayPal and get the response
        StreamWriter streamOut = new StreamWriter(req.GetRequestStream(), System.Text.Encoding.ASCII);
        streamOut.Write(strRequest);
        streamOut.Close();
        StreamReader streamIn = new StreamReader(req.GetResponse().GetResponseStream());
        string strResponse = streamIn.ReadToEnd();
        streamIn.Close();

        if (strResponse == "VERIFIED")
        {
            // check the payment_status is Completed
            // check that txn_id has not been previously processed
            // check that receiver_email is your Primary PayPal email
            // check that payment_amount/payment_currency are correct
            // process payment
        }
        else if (strResponse == "INVALID")
        {
            // log for manual investigation
        }
        else
        {
            // log response/IPN data for manual investigation
        }

    When I add this snippet to the Page_Load event of the success page, I get the IPN response INVALID, even though the amount is paid successfully. Any help? The PayPal docs are not clear. Thanks in advance.
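
    One likely cause (an inference; the post doesn't say where PayPal is configured to send IPNs): this validation code lives in the Page_Load of the success/return page, but PayPal POSTs IPN messages to the separately configured notify URL. What arrives at the return page is the buyer's redirect, not an IPN message, so echoing it back with cmd=_notify-validate is answered with INVALID. A minimal sketch of a dedicated listener page (the page name is a placeholder):

        // ipn.aspx.cs -- hedged sketch; register this page's URL as the IPN
        // notification URL (or pass it as notify_url on the payment button).
        protected void Page_Load(object sender, EventArgs e)
        {
            // PayPal's server POSTs the IPN here; no user ever sees this page.
            byte[] param = Request.BinaryRead(Request.ContentLength);
            string strRequest = Encoding.ASCII.GetString(param) + "&cmd=_notify-validate";
            // ...post strRequest back to the sandbox URL exactly as in the
            // code above, then branch on VERIFIED / INVALID...
        }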

    Read the article

  • Nested URLs and Rewrite rules in Apache2

    - by Radha Krishna. S.
    Hi, I need some help with rewrite rules and nested URLs. I am using TikiWiki for my website and am in the process of setting up SE-friendly URLs for my projects. Specifically, I have the following rewrite rule for www.example.com/Projects, pointing to a page that lists all the projects hosted on the site:

        RewriteRule ^Projects$ articles?type=Project [L]

    This works fine. Now I would like www.example.com/Projects/Project1 to point to a specific project. I have this rewrite rule:

        RewriteRule ^(Projects/Project1)$ tiki-read_article.php?articleId=6

    This works, but only partially. The content is all rendered, but the theme (images, CSS, etc.) goes for a toss: the page is rendered as plain text. I understand that this happens because the relative paths in the theme/CSS/images all resolve against Projects as the base folder instead of the root of the website. I don't want to touch the CMS portion (changing the theme/CSS/image paths in the files), mostly for reasons of upgradability. Can someone help me understand and write a rule so that the above nested URL works? Regards, Radha
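
    One possible direction (a sketch; the asset directory names are assumptions about the TikiWiki layout): when the browser asks for a theme asset relative to the virtual /Projects/... URL, strip the virtual prefix and serve the real file from the site root if it exists there.

        # Hedged sketch: map asset requests made relative to /Projects/...
        # back to the real files at the document root.
        RewriteCond %{DOCUMENT_ROOT}/$1 -f
        RewriteRule ^Projects/(?:.+?/)?((?:styles|img|lib|css)/.+)$ /$1 [L]

    Another option, if the templates allow it, is emitting a <base href="/"> tag so relative asset URLs resolve against the root regardless of the page's virtual path.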

    Read the article

  • can't write to physical drive in win 7??

    - by matt
    I wrote a disk utility that allowed you to erase whole physical drives. It uses the Windows file API, calling:

        destFile = CreateFile("\\.\PhysicalDrive1", GENERIC_WRITE,
                              FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                              OPEN_EXISTING, createflags, NULL);

    and then just calling WriteFile, making sure to write in multiples of sectors, i.e. 512 bytes. This worked fine in the past, on XP, and even on the Win7 RC; all you have to do is make sure you are running it as an administrator. But now that I have retail Win7 Professional, it doesn't work anymore! The drives still open fine for writing, but calling WriteFile on the successfully opened drive now fails!

    Does anyone know why this might be? Could it have something to do with opening it with shared flags? This is what I have always done before, and it worked. Could it be that something is now sharing the drive and blocking the writes? Is there some way to properly "unmount" a drive, or at least the partitions on it, so that I would have exclusive access to it?

    Some other tools that used to work don't anymore either, but some do, like the WD Diagnostic's erase functionality. And after it has erased the drive, my tool works on it too! This leads me to believe there is some "unmount" step I need to perform on the drive first, to free up permission to write to it. Any ideas?
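
    A plausible fix (a sketch based on the general Windows behavior that newer versions block raw writes to sectors covered by a mounted volume): lock and dismount each volume on the target disk before writing. Error handling is minimal and the volume path is a placeholder.

        #include <windows.h>
        #include <winioctl.h>

        /* Lock and dismount one volume on the target disk so raw writes to
           the underlying physical drive are no longer blocked. Keep the
           returned handle open (and locked) for the duration of the erase. */
        BOOL PrepareVolumeForRawWrite(LPCSTR volumePath /* e.g. "\\\\.\\E:" */,
                                      HANDLE *outVolume)
        {
            DWORD bytesReturned = 0;
            HANDLE hVol = CreateFileA(volumePath, GENERIC_READ | GENERIC_WRITE,
                                      FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                                      OPEN_EXISTING, 0, NULL);
            if (hVol == INVALID_HANDLE_VALUE) return FALSE;

            if (!DeviceIoControl(hVol, FSCTL_LOCK_VOLUME, NULL, 0, NULL, 0,
                                 &bytesReturned, NULL) ||
                !DeviceIoControl(hVol, FSCTL_DISMOUNT_VOLUME, NULL, 0, NULL, 0,
                                 &bytesReturned, NULL))
            {
                CloseHandle(hVol);
                return FALSE;
            }
            *outVolume = hVol;
            return TRUE;
        }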

    Read the article

  • Writing to a log4net FileAppender with multiple threads performance problems

    - by Wayne
    TickZoom is a very high performance app which uses its own parallelization library and multiple O/S threads for smooth utilization of multi-core computers. The app hits a bottleneck where it needs to write information to a log4net appender from separate O/S threads.

    The FileAppender uses the MinimalLock feature so that each thread can lock and write to the file and then release it for the next thread to write. If MinimalLock gets disabled, log4net reports errors about the file already being locked by another process (thread).

    A better way for log4net to do this would be to have a single thread that takes care of writing to the FileAppender, with any other threads simply adding their messages to a queue. That way, MinimalLock could be disabled to greatly improve logging performance. Additionally, the application does a lot of CPU-intensive work, so it would also improve performance to use a separate thread for writing to the file so the CPU never waits on the I/O to complete.

    So the question is: does log4net already offer this feature? If so, how do you enable threaded writing to a file? Is there another, more advanced appender, perhaps? If not, then since log4net is already wrapped in the platform, it would be possible to implement a separate thread and queue for this purpose in the TickZoom code. Sincerely, Wayne
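
    If nothing built-in fits, a sketch of the queue-and-single-writer idea described above (class and member names are illustrative; this is application-side code, not a log4net feature):

        // Hedged sketch: funnel all log writes through one dedicated thread
        // so the FileAppender is only ever touched by a single thread.
        using System.Collections.Concurrent;
        using System.Threading;
        using log4net;

        public sealed class AsyncLogWriter
        {
            private static readonly ILog Log = LogManager.GetLogger(typeof(AsyncLogWriter));
            private readonly BlockingCollection<string> queue = new BlockingCollection<string>();
            private readonly Thread writer;

            public AsyncLogWriter()
            {
                writer = new Thread(Drain) { IsBackground = true };
                writer.Start();
            }

            // Producers (any O/S thread) pay only the cost of an enqueue.
            public void Write(string message) { queue.Add(message); }

            private void Drain()
            {
                foreach (string message in queue.GetConsumingEnumerable())
                    Log.Info(message);  // only this thread ever hits the appender
            }
        }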

    Read the article

  • Is SQLDataReader slower than using the command line utility sqlcmd?

    - by Andrew
    I was recently advocating to a colleague that we replace some C# code that uses the sqlcmd command line utility with a SqlDataReader. The old code uses:

        System.Diagnostics.ProcessStartInfo procStartInfo =
            new System.Diagnostics.ProcessStartInfo("cmd", "/c " + sqlCmd);

    where sqlCmd is something like:

        "sqlcmd -S " + serverName + " -y 0 -h-1 -Q " + "\"" + "USE [" + database + "];" + txtQuery.Text + "\"";

    The results are then parsed using regular expressions. I argued that using a SqlDataReader would be more in line with industry practices, easier to debug and maintain, and probably faster. However, the SqlDataReader approach is at best the same speed and quite possibly slower. I believe I'm doing everything correctly with the SqlDataReader. The code is:

        using (SqlConnection connection = new SqlConnection())
        {
            try
            {
                SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder(connectionString);
                connection.ConnectionString = builder.ToString();
                SqlCommand command = new SqlCommand(queryString, connection);
                connection.Open();
                SqlDataReader reader = command.ExecuteReader();
                // do stuff w/ reader
                reader.Close();
            }
            catch (Exception ex)
            {
                outputMessage += (ex.Message);
            }
        }

    I've used System.Diagnostics.Stopwatch to time both approaches, and the command line utility (called from C# code) does seem faster (20-40%?). The SqlDataReader has the neat feature that when the same code is called again it's lightning fast, but for this application we don't anticipate that.

    I have already done some research on this problem. I note that the sqlcmd utility uses OLE DB to hit the database. Is that faster than ADO.NET? I'm really surprised, especially since the command line approach involves starting up a process. I really thought it would be slower. Any thoughts? Thanks, Dave
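
    One measurement caveat worth checking (a sketch, not from the original post; RunQueryWithDataReader is a hypothetical wrapper around the code above): the first SqlConnection.Open() pays for creating the connection pool, so a single timed run penalizes ADO.NET. Separating a warm-up call from the steady state makes the comparison fairer:

        // Warm up once: opens the pool and compiles the query plan.
        RunQueryWithDataReader(connectionString, queryString);

        var sw = System.Diagnostics.Stopwatch.StartNew();
        for (int i = 0; i < 10; i++)
            RunQueryWithDataReader(connectionString, queryString);  // steady state
        sw.Stop();
        Console.WriteLine("avg ms: {0}", sw.ElapsedMilliseconds / 10.0);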

    Read the article

  • Make a local-only daemon listen on a different interface (using iptables port forwarding)?

    - by UniIsland
    I have a daemon program which listens on 127.0.0.1:8000. I need to access it when I connect to my box over VPN, so I want it to listen on the ppp0 interface too.

    I've tried the "ssh -L" method. It works, but I don't think it's the right way to do it, having an extra ssh process running in the background. I tried the netcat method, but it exits when the connection is closed, so that's not a valid way of "listening". I also tried several iptables rules; none of them worked. I'm not listing all the rules I've used here, but the ruleset below doesn't work even with net.ipv4.ip_forward set to 1:

        iptables -A FORWARD -j ACCEPT
        iptables -t nat -A PREROUTING -i ppp+ -p tcp --dport 8000 -j DNAT --to-destination 127.0.0.1:8000

    Does anyone know how to redirect traffic from a ppp interface to lo? That is, listen on 192.168.45.1:8000 (ppp0) as well as 127.0.0.1:8000 (lo). There's no need to alter the port. Thanks.
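
    One possibility (a sketch; it assumes a kernel recent enough to have the per-interface route_localnet knob): by default the kernel refuses to route packets arriving on an external interface to 127.0.0.0/8, so the DNAT rule above never delivers them. Allowing "local" routing on ppp0 first may make the DNAT work:

        # Allow packets arriving on ppp0 to be routed to 127.0.0.1
        # (assumption: kernel >= 3.6, where route_localnet exists).
        sysctl -w net.ipv4.conf.ppp0.route_localnet=1
        iptables -t nat -A PREROUTING -i ppp0 -p tcp --dport 8000 \
                 -j DNAT --to-destination 127.0.0.1:8000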

    Read the article

  • How to continuously scroll content within a DIV using jQuery?

    - by Camsoft
    Aim

    The aim is to have a container DIV with a fixed height and width, and have the HTML content within that DIV automatically scroll vertically, continuously.

    Question

    Basically I've created the code below using jQuery to scroll (move) the child DIV vertically upwards until it is outside the bounding parent box, at which point the animation completes and triggers an event handler that resets the position of the child DIV and starts the process again. This works fine: the content scrolls up leaving a blank space, then starts from the bottom again and scrolls up. The problem I have is that the requirements call for the content to appear as if it were continuously repeating (a diagram in the original post illustrated this). Is there a way to do this? (I don't want to use 3rd-party plugins or libraries other than jQuery.)

    What I have so far

    The HTML:

        <div id="scrollingContainer">
            <div class="scroller">
                <h1>This is a title</h1>
                <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Suspendisse at orci mi, id gravida tellus. Integer malesuada ante sit amet enim pulvinar congue. Donec pulvinar dolor et arcu posuere feugiat id et felis.</p>
                <p>More content....</p>
            </div>
        </div>

    The CSS:

        #scrollingContainer {
            height: 300px;
            width: 300px;
            overflow: hidden;
        }
        #scrollingContainer DIV.scroller {
            position: relative;
        }

    The JavaScript:

        /**
         * Scrolls the content DIV
         */
        function scroll() {
            if ($('DIV.scroller').height() > $('#scrollingContainer').height()) {
                var t = $('DIV.scroller').position().top + $('DIV.scroller').height();
                /* Animate */
                $('DIV.scroller').animate({ top: '-=' + t + 'px' }, 4000, 'linear', animationComplete);
            }
        }

        function animationComplete() {
            $(this).css('top', $('#scrollingContainer').height());
            scroll();
        }
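
    One common technique (a sketch, not from the original post): duplicate the content once inside the scroller, animate the container's scrollTop across the height of the original copy, then snap back to 0. Because the second copy is identical, the reset is invisible and the scroll appears endless. The 8000ms duration is arbitrary.

        function continuousScroll() {
            var $container = $('#scrollingContainer');
            var $scroller = $container.find('DIV.scroller');
            $scroller.append($scroller.children().clone()); // second, identical copy
            var loopHeight = $scroller.height() / 2;        // height of one copy

            (function loop() {
                $container.scrollTop(0)
                          .animate({ scrollTop: loopHeight }, 8000, 'linear', loop);
            })();
        }

        $(document).ready(continuousScroll);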

    Read the article

  • Keys and terminology

    - by nabbed
    I have a predicament related to terminology. Our system processes events. Events are dispatched to a node based on the value of some field (or set of fields). We call this set of fields the key. We call the value of that set of fields the key value. What adds confusion is that each event is essentially a bag of key-value pairs (i.e., a hash map). So the word key is used for two different purposes: 1) to describe the set of fields on which the event is dispatched, and 2) as a field name. So if you had a collection of key-value pairs, and a set of those key-value pairs made up a database-style key, what terminology would you use to distinguish those two? (One further complication is that the key on which the event is dispatched is not always unique. For instance, if we dispatch on userid, and that user performs multiple actions, we will process multiple events with the same userid value. So maybe key is the wrong word to describe the set of fields on which we dispatch an event).

    Read the article

  • Eclipse PDE - Plug-in, Feature, and Product Versioning

    - by Michael
    I am having much confusion over the process of upgrading version numbers in dependent plug-ins, features, and products in a fairly large Eclipse workspace. I have made API changes to Java code residing in an existing plug-in, which thus requires an increase of the major part of the version identifier. This plug-in serves as a dependency of a given feature, and the feature is in turn included in a product.

    From the documentation at http://wiki.eclipse.org/Version_Numbering, I understand (for the most part) when the proper number should be increased on the containing plug-in itself. However, how would this major version change on the plug-in affect dependent, "down-the-line" items (e.g., features, products)?

    For example, assume we have the typical "Hello World" setup as follows:

        Plug-in: com.example.helloworld, version 1.0.0
        Feature: com.example.helloworld.feature, version 1.0.0
        Product: com.example.helloworld.product, version 1.0.0

    If I were to make an API change in the plug-in, this would require a version update to 2.0.0. What would then be the version of the feature: 1.1.0? The same question applies at the product level (e.g., if the feature is 1.1.0 or 2.0.0, what is the product version number)?

    I'm sure this is quite the newbie question, so I apologize for wasting anyone's time and effort. I have searched for this type of content, but all I am finding are examples showing how to develop a plug-in, feature, product, and update site for the first time. The only other content related to my search has been about developing feature patches, which does not touch on the versioning aspect as much as I would prefer. I am coming into an Eclipse RCP / PDE environment for the first time and need to learn the proper way and/or best practices for making such versioning updates and how best to reflect them throughout other dependent projects in the workspace.

    Read the article

  • Publishing to live website

    - by Alienfluid
    Hey there, my friend and I are collaborating on an ASP.NET-powered website. To develop it locally, we use Visual Web Developer Express (good enough for our needs). Subversion (using TortoiseSVN) is our source control of choice, with the repository residing on Unfuddle.com.

    We run into problems when we need to update the live site, since there's no version control on it. Currently we use the "Copy to Website" feature in VWD, which copies the files using FTP. Here are some problems:

        VWD only keeps track of files uploaded by one user, so if the other user uploads a newer version of a file to the live site, VWD on my side cannot tell whether the live version of the file is newer or mine is.
        There's no way to tell whether all the latest changes are available on the live site.
        We have to be careful not to party all over the shared web.config file, since the other user's local DB settings are different from mine, and of course the live DB settings are a whole other story!

    What do you guys use to publish to a live site? Does anything out there tie into Subversion so that we can automate the process and always guarantee that the live site is synced to a change list number? Also, how do you manage the different web.config file settings? Thanks!
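
    One low-tech option (a sketch; the URL, paths, and revision number are placeholders): deploy from the repository rather than from either working copy, pinning the live site to a known revision so both of you can see exactly what is live.

        # Export a clean, versioned snapshot (no .svn folders) at a fixed revision.
        svn export -r 1234 http://yourproject.unfuddle.com/svn/trunk /tmp/site-r1234
        echo 1234 > /tmp/site-r1234/DEPLOYED_REVISION.txt   # record what is live
        # Then upload /tmp/site-r1234 over FTP, or rsync if the host allows:
        # rsync -av --delete /tmp/site-r1234/ user@host:/var/www/site/

    For the web.config issue, the configSource attribute on sections such as connectionStrings lets each environment point at its own settings file kept out of version control, so the shared web.config never needs to differ between machines.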

    Read the article

  • Can this method to convert a name to proper case be improved?

    - by Kelsey
    I am writing a basic function to convert millions of names (a one-time batch process) from their current form, which is all upper case, to proper mixed case. I came up with the following so far:

        public string ConvertToProperNameCase(string input)
        {
            TextInfo textInfo = new CultureInfo("en-US", false).TextInfo;
            char[] chars = textInfo.ToTitleCase(input.ToLower()).ToCharArray();
            for (int i = 0; i + 1 < chars.Length; i++)
            {
                if ((chars[i].Equals('\'')) || (chars[i].Equals('-')))
                {
                    chars[i + 1] = Char.ToUpper(chars[i + 1]);
                }
            }
            return new string(chars);
        }

    It works in most cases, such as:

        JOHN SMITH -> John Smith
        SMITH, JOHN T -> Smith, John T
        JOHN O'BRIAN -> John O'Brian
        JOHN DOE-SMITH -> John Doe-Smith

    There are some edge cases that do not work:

        JASON MCDONALD -> Jason Mcdonald (correct: Jason McDonald)
        OSCAR DE LA HOYA -> Oscar De La Hoya (correct: Oscar de la Hoya)
        MARIE DIFRANCO -> Marie Difranco (correct: Marie DiFranco)

    These are not captured, and I am not sure I can handle all these odd edge cases. Can anyone think of anything I could change or add to capture more of them? I am sure there are tons of edge cases I am not even thinking of. All casing should follow North American conventions, meaning that if a certain country expects a specific capitalization format that differs from the North American format, the North American format takes precedence.
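
    A sketch of one way to chip away at the edge cases (the word lists here are illustrative, not exhaustive; real name data will need curated tables, and names like DiFranco can only be handled with a per-name dictionary):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        static class NameCasing
        {
            // Illustrative, not exhaustive: particles conventionally left lowercase.
            static readonly HashSet<string> Particles =
                new HashSet<string> { "de", "la", "van", "von", "der", "den" };

            // Post-process one already-title-cased token.
            static string FixToken(string token, bool isFirstToken)
            {
                string lower = token.ToLowerInvariant();
                if (!isFirstToken && Particles.Contains(lower))
                    return lower;                                 // Oscar de la Hoya
                if (token.StartsWith("Mc") && token.Length > 2)
                    return "Mc" + char.ToUpper(token[2]) + token.Substring(3); // McDonald
                return token;   // DiFranco etc. still need a curated per-name table
            }

            public static string Apply(string titleCased)
            {
                string[] tokens = titleCased.Split(' ');
                return string.Join(" ", tokens.Select((t, i) => FixToken(t, i == 0)));
            }
        }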

    Read the article

  • Google Analytics API Authentication Speedup

    - by Paulo
    I'm using a Google Analytics API class in PHP, made by Doug Tan, to retrieve Analytics data from a specific profile. Check the URL here: http://code.google.com/intl/nl/apis/analytics/docs/gdata/gdataArticlesCode.html

    When you create a new instance of the class, you can add the profile id, your Google account + password, a date range, and whatever dimensions and metrics you want to pick up from Analytics. For example, I want to see how many people visited my website from different countries in 2009:

        // make a new instance of the class
        $ga = new GoogleAnalytics($email, $password);

        // website profile example id
        $ga->setProfile('ga:4329539');

        // date range
        $ga->setDateRange('2010-02-01', '2010-03-08');

        // array to receive data from metrics and dimensions
        $array = $ga->getReport(
            array('dimensions' => ('ga:country'),
                  'metrics'    => ('ga:visits'),
                  'sort'       => '-ga:visits'
            )
        );

    Now that you know how this API class works, I'd like to address my problem: speed. It takes a lot of time to retrieve multiple types of data from the Analytics database, especially if you're building different arrays with different metrics/dimensions. How can I speed up this process? Is it possible to store all the possible data in a cache so I am able to retrieve the data without loading it over and over again?
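
    A simple possibility (a sketch; the temp-directory location and one-hour TTL are arbitrary choices): wrap getReport() in a small file cache keyed on the query parameters, so repeated page loads reuse a recent answer instead of hitting the API every time.

        <?php
        // Hedged sketch: file-based cache around the report call.
        function cached_report($ga, array $query, $ttl = 3600) {
            $key  = md5(serialize($query));
            $file = sys_get_temp_dir() . '/ga_cache_' . $key;
            if (file_exists($file) && (time() - filemtime($file)) < $ttl) {
                return unserialize(file_get_contents($file));
            }
            $report = $ga->getReport($query);        // the slow network call
            file_put_contents($file, serialize($report));
            return $report;
        }
        ?>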

    Read the article

  • Regarding Toplink Fetching Policy

    - by Chandu
    Hi, I'm working on a Swing project using NetBeans with TopLink Essentials and MySQL. The problem I'm facing is that an entity object doesn't get updated after insertions take place, when calling the getter for the collection mapped through the foreign key.

    For example, I have two tables, Table1 and Table2, with an sno/id column that is the primary key of Table1 and a foreign key in Table2. Through the find method I get a particular sno object (existing in Table1), set some values, persist to Table2, and commit the transaction. When I select the same sno object through find and get its Table2 collection through the getTable2Collection() method of the bean (already created in the bean by TopLink Essentials), I am unable to see the newly added record, although all the other records are displayed. After I close the application and reopen it, the new record shows up when fetching the same sno through the above process.

    I understand this is a kind of lazy fetching, and that there should be some way of changing the fetch policy to make the entity object pick up the changes. Please help me in this regard. Regards, Chandu
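
    Two things that often fix this kind of staleness (a sketch; the entity and accessor names are made up, and an EntityManager em is assumed): keep both sides of the bidirectional relationship in sync in memory, or force a refresh of the cached parent instead of serving it from the session cache.

        // Option 1: maintain both sides of the relationship when inserting,
        // so the in-memory collection matches the database immediately.
        Table1 parent = em.find(Table1.class, sno);
        Table2 child = new Table2();
        child.setParent(parent);
        parent.getTable2Collection().add(child);
        em.persist(child);

        // Option 2: re-read the parent from the database, bypassing the
        // cached copy that was loaded before the insert.
        em.refresh(parent);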

    Read the article

  • Sync Vs. Async Sockets Performance in .NET

    - by Michael Covelli
    Everything that I read about sockets in .NET says that the asynchronous pattern gives better performance (especially with the new SocketAsyncEventArgs, which saves on allocation). I think this makes sense if we're talking about a server with many client connections, where it's not possible to allocate one thread per connection. Then I can see the advantage of using the ThreadPool threads and getting async callbacks on them.

    But in my app, I'm the client and I just need to listen to one server sending market tick data over one TCP connection. Right now, I create a single thread, set the priority to Highest, and call Socket.Receive() with it. My thread blocks on this call and wakes up once new data arrives. If I were to switch this to an async pattern so that I get a callback when there's new data, I see two issues:

        The threadpool threads will have default priority, so it seems they will be strictly worse than my own thread, which has Highest priority.
        I'll still have to send everything through a single thread at some point. Say I get N callbacks at almost the same time on N different threadpool threads, notifying me that there's new data. The N byte arrays that they deliver can't be processed on the threadpool threads, because there's no guarantee that they represent N unique market data messages, since TCP is stream-based. I'll have to lock, put the bytes into an array anyway, and signal some other thread that can process what's in the array. So I'm not sure what having N threadpool threads is buying me.

    Am I thinking about this wrong? Is there a reason to use the async pattern in my specific case of one client connected to one server?

    Read the article

  • Entity Framework Update Error in ASP.NET Mvc with related entity

    - by Barry
    I have run into a problem and have searched and tried everything I can to find a solution, but to no avail. I am using the same repository and context throughout the process. I have a Booking entity and a UserExtension entity.

    I get my form collection back from my page and create a new booking:

        public ActionResult Create(FormCollection collection)
        {
            Booking toBooking = new Booking();

    I then do some validation and property assignment, and find an associated BidInstance:

        toBooking.BidInstance = bid;

    I have checked, and the bid is not null. Finally, I get the user extension from the current IPrincipal user as below:

        UserExtension loggedInUser = m_BookingRepository.GetBookingCurrentUser(User);
        toBooking.UserExtension = loggedInUser;

    The code for GetBookingCurrentUser is:

        public UserExtension GetBookingCurrentUser(IPrincipal currentUser)
        {
            var user = (from u in Context.aspnet_Users.Include("UserExtension")
                        where u.UserName == currentUser.Identity.Name
                        select u).FirstOrDefault();
            if (user != null)
            {
                var userextension = (from u in Context.UserExtension.Include("aspnet_Users")
                                     where u.aspnet_Users.UserId == user.UserId
                                     select u).FirstOrDefault();
                return userextension;
            }
            else
            {
                return null;
            }
        }

    It returns the user extension fine and assigns it fine. I originally used the aspnet_Users entity directly, but encountered this problem, so I tried changing it to the UserExtension entity. As soon as I call:

        Context.AddToBooking(booking);
        Context.SaveChanges();

    I get the following exception, and I'm completely baffled by how to fix it:

        Entities in 'FutureFlyersEntityModel.Booking' participate in the 'FK_Booking_UserExtension' relationship. 0 related 'UserExtension' were found. 1 'UserExtension' is expected.

    Then the final error that comes to the front end is:

        Metadata information for the relationship 'FutureFlyersModel.FK_Booking_BidInstance' could not be retrieved. Make sure that the EdmRelationshipAttribute for the relationship has been defined in the assembly. Parameter name: relationshipName.

    Both related entities are set in the Booking entity passed through. PLEASE HELP, I'm at wits' end with this.

    Read the article

  • Automatically stashing

    - by Readonly
    The section "Last links in the chain: Stashing and the reflog" in http://ftp.newartisans.com/pub/git.from.bottom.up.pdf recommends stashing often to take snapshots of your work in progress. The author goes as far as recommending that you run a cron job to stash your work regularly, without having to do a stash manually:

        The beauty of stash is that it lets you apply unobtrusive version control
        to your working process itself: namely, the various stages of your working
        tree from day to day. You can even use stash on a regular basis if you
        like, with something like the following snapshot script:

            $ cat <<EOF > /usr/local/bin/git-snapshot
            #!/bin/sh
            git stash && git stash apply
            EOF
            $ chmod +x $_
            $ git snapshot

        There's no reason you couldn't run this from a cron job every hour, along
        with running the reflog expire command every week or month.

    The problems with this approach are:

        If there are no changes to your working copy, "git stash" is a no-op, but "git stash apply" will still apply your last stash over your working copy.
        There could be race conditions between when the cron job executes and the user working on the working copy. For example, "git stash" runs, then the user opens a file, then the script's "git stash apply" is executed.

    Does anybody have suggestions for making this automatic stashing work more reliably?
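
    One way around both problems (a sketch; the refs/snapshots naming is an arbitrary choice): "git stash create" builds the stash commit without ever touching the working tree and prints nothing when there is nothing to save, so there is no stash/apply window to race and no stale stash to re-apply.

        #!/bin/sh
        # git-snapshot sketch using `git stash create`: never modifies the
        # working tree, and silently does nothing when the tree is clean.
        snapshot=$(git stash create)
        if [ -n "$snapshot" ]; then
            # keep the snapshot alive under its own ref so gc won't prune it
            git update-ref -m "snapshot $(date)" refs/snapshots/$(date +%s) "$snapshot"
        fi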

    Read the article

  • Want to learn Objective-C but syntax is very confusing

    - by Sahat
    Coming from a Java background, I am guessing this is to be expected. I would really love to learn Objective-C and start developing Mac apps, but the syntax is just killing me. For example:

        -(void) setNumerator: (int) n {
            numerator = n;
        }

    What is that dash for, and why is it followed by void in parentheses? I've never seen void in parentheses in C/C++, Java, or C#. Why don't we have a semicolon after (int) n? But we do have one here:

        -(void) setNumerator: (int) n;

    And what's with this alloc, init, release process?

        myFraction = [Fraction alloc];
        myFraction = [myFraction init];
        [myFraction release];

    And why is it [myFraction release]; and not myFraction = [myFraction release];?

    And lastly, what are the @ signs about, and what is the Java equivalent of this implementation block?

        @implementation Fraction
        @end

    I am currently reading Programming in Objective-C 2.0, and it's just so frustrating learning this new syntax coming from a Java background.
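
    For orientation, an annotated version of the same code (this describes standard Objective-C semantics, not anything specific to the book's example):

        // '-' marks an instance method ('+' would mark a class method,
        // roughly Java's static). Every type appears in parentheses:
        // the return type first, then each parameter's type.
        - (void) setNumerator: (int) n {
            numerator = n;   // assign to the instance variable
        }

        // The semicolon form is just the declaration, as it appears in the
        // @interface block (think: a method signature in a Java interface).
        - (void) setNumerator: (int) n;

        // alloc reserves memory and init initializes it; both return the
        // object, which is why the idiomatic form chains them:
        Fraction *myFraction = [[Fraction alloc] init];
        // release doesn't return the object, so there's nothing to assign.
        [myFraction release];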

    Read the article

  • SVN commit using cruise control

    - by pratap
    Hi all, can anyone tell me how to tell SVN, through the command line, that certain files are to be deleted from the repository? I am using CruiseControl to automate the svn commit process, but the execution of the svn commit command restores the files which I deleted from my working copy. The way I am doing it is:

        1. Delete some files in my working copy (the number of files in my WC is now less than the number of files in the repository).
        2. Execute the svn command using CruiseControl:

            <exec executable="svn.exe">
                <buildArgs>ci -m "test msg" --no-auth-cache --non-interactive</buildArgs>
                <buildTimeoutSeconds>1000</buildTimeoutSeconds>
            </exec>

    Result: the deleted files are restored in my WC. Can someone help me figure out where I have gone wrong, or whether I have to make some changes/configurations? Thank you all. Regards, Uday
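
    The likely cause (an inference, not stated in the post): deleting files from the working copy at the OS level does not schedule them for deletion in Subversion, so the commit ignores them and a later update restores them. The missing files need an explicit "svn delete" before the commit, e.g.:

        # Sketch (POSIX shell; adapt to a Windows build step as needed):
        # schedule every "missing" file (svn status code '!') for deletion,
        # then commit as before. Note: this breaks on paths with spaces.
        svn status | awk '/^!/ { print $2 }' | xargs svn delete
        svn ci -m "test msg" --no-auth-cache --non-interactive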

    Read the article

  • XML document being parsed as single element instead of sequence of nodes

    - by Rob Carr
    Given xml that looks like this:

        <Store>
            <foo>
                <book>
                    <isbn>123456</isbn>
                </book>
                <title>XYZ</title>
                <checkout>no</checkout>
            </foo>
            <bar>
                <book>
                    <isbn>7890</isbn>
                </book>
                <title>XYZ2</title>
                <checkout>yes</checkout>
            </bar>
        </Store>

    I am getting this as my parsed xmldoc:

        >>> from xml.dom import minidom
        >>> xmldoc = minidom.parse('bar.xml')
        >>> xmldoc.toxml()
        u'<?xml version="1.0" ?><Store>\n<foo>\n<book>\n<isbn>123456</isbn>\n</book>\n<title>XYZ</title>\n<checkout>no</checkout>\n</foo>\n<bar>\n<book>\n<isbn>7890</isbn>\n</book>\n<title>XYZ2</title>\n<checkout>yes</checkout>\n</bar>\n</Store>'

    Is there an easy way to pre-process this document so that when it is parsed, it isn't parsed as a single xml element?
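
    It may help to note (an observation about minidom generally, not about the poster's data): toxml() re-serializes the whole tree back into one string, so its output looks like "a single element" even though the document was parsed into a full node tree. The individual nodes are already there to walk, no pre-processing required:

        # Sketch: navigating the already-parsed tree instead of re-serializing it.
        from xml.dom import minidom

        xmldoc = minidom.parse('bar.xml')
        for entry in xmldoc.documentElement.childNodes:
            if entry.nodeType != entry.ELEMENT_NODE:
                continue  # skip the whitespace text nodes between elements
            isbn = entry.getElementsByTagName('isbn')[0].firstChild.data
            title = entry.getElementsByTagName('title')[0].firstChild.data
            print entry.tagName, isbn, title   # e.g. "foo 123456 XYZ"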

    Read the article
