Search Results

Search found 15906 results on 637 pages for 'scott and the dev team'.

  • How do you filter a view of a DataTable in .NET 3.5 SP1 using WPF, C#, and XAML?

    - by Tony
    I found the MSDN example code for getting the default view of a collection and adding a filter to the view, but most of it is for .NET 4.0. I'm on a team that is not currently switching to 4.0, so I don't have that option. None of the examples I found used a DataTable as the source, so I had to adapt it a little. I'm using a DataTable because the data is coming from a DB and it's easy to populate. After trying to implement the MSDN examples, I get a NotSupportedException when I try to set the Filter. This is the C# code I have:

        protected DataTable _data = new DataTable();
        protected BindingListCollectionView _filteredDataView;
        ...
        private void On_Loaded(Object sender, RoutedEventArgs e)
        {
            _filteredDataView = (BindingListCollectionView)CollectionViewSource.GetDefaultView(_data);
            _filteredDataView.Filter = new Predicate<object>(MatchesCurrentSelections); // throws NotSupportedException
        }
        ...
        public bool MatchesCurrentSelections(object o) {...}

    It seems that either BindingListCollectionView does not support filtering in .NET 3.5, or it just doesn't work for a DataTable. I looked at setting it up in XAML instead of the C# code, but the XAML examples use collections in resources instead of a collection that is a member of the class, so I have no idea how to set that up. Does anyone know how to filter a view of a DataTable?
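
    As I understand it, BindingListCollectionView does not support the Filter predicate at all (CanFilter is false), which is why setting Filter throws; the string-based CustomFilter property, or the DataTable's own DefaultView.RowFilter, is the usual route in .NET 3.5. A minimal sketch, with a hypothetical column name:

        // Sketch only: filter via a string expression instead of the Filter predicate.
        private void On_Loaded(Object sender, RoutedEventArgs e)
        {
            _filteredDataView = (BindingListCollectionView)CollectionViewSource.GetDefaultView(_data);

            // Option 1: string filter on the view (delegates to the underlying DataView).
            _filteredDataView.CustomFilter = "Status = 'Active'";   // hypothetical column/value

            // Option 2: bypass the collection view and filter the DataTable directly.
            _data.DefaultView.RowFilter = "Status = 'Active'";      // hypothetical column/value
        }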

  • OpenGL Calls Lock/Freeze

    - by Necrolis
    I am using some Dell workstations (running WinXP Pro SP2 & DeepFreeze) for development, but something was recently loaded onto these machines that prevents any OpenGL call from completing (the call locks). I know the code works, as I have tested it on 'clean' machines; I also tested with simple OpenGL apps generated by Dev-C++, which also lock on the Dell machines. I have tried to debug my own apps to see where exactly the GL calls freeze, but there is a global system hook on ZwQueryInformationProcess that messes up calls to ZwQueryInformationThread (used by ExitThread), preventing me from debugging at all (it causes the debugger, OllyDbg, to go into an access-violation reporting loop, or the program to crash if the exception is passed along). The hook:

        ntdll.ZwQueryInformationProcess
        7C90D7E0   B8 9A000000      MOV EAX,9A
        7C90D7E5   BA 0003FE7F      MOV EDX,7FFE0300
        7C90D7EA   FF12             CALL DWORD PTR DS:[EDX]
        7C90D7EC - E9 0F28448D      JMP 09D50000
        7C90D7F1   9B               WAIT
        7C90D7F2   0000             ADD BYTE PTR DS:[EAX],AL
        7C90D7F4   00BA 0003FE7F    ADD BYTE PTR DS:[EDX+7FFE0300],BH
        7C90D7FA   FF12             CALL DWORD PTR DS:[EDX]
        7C90D7FC   C2 1400          RETN 14
        7C90D7FF   90               NOP
        ntdll.ZwQueryInformationToken
        7C90D800   B8 9C000000      MOV EAX,9C

    The messed-up function + call:

        ntdll.ZwQueryInformationThread
        7C90D7F0   8D9B 000000BA    LEA EBX,DWORD PTR DS:[EBX+BA000000]
        7C90D7F6   0003             ADD BYTE PTR DS:[EBX],AL
        7C90D7F8   FE               ???             ; Unknown command
        7C90D7F9   7F FF            JG SHORT ntdll.7C90D7FA
        7C90D7FB   12C2             ADC AL,DL
        7C90D7FD   14 00            ADC AL,0
        7C90D7FF   90               NOP
        ntdll.ZwQueryInformationToken
        7C90D800   B8 9C000000      MOV EAX,9C

    So firstly, does anyone know what, if anything, would lead to OpenGL calls hitting an infinite lock, and whether there are any ways around it? And what would be creating such a hook in kernel memory? Update: After some more fiddling, I have discovered a few more kernel hooks; a lot of them are used to nullify data returned by system-information calls (such as the remote debugging port). I also managed to find out that whatever is doing this uses madchook.dll (by madshi), and this DLL is injected into every running process (it seems to be anti-debugging code). Also, on the graphics side, DirectX seems fine/unaffected (I ran one of the DX 9 demos without problems), so could one of these kernel hooks somehow affect OpenGL?

  • Project Management and Scheduling Techniques

    - by Alec Smart
    Hello, I know this is probably the nth project management question, but I'm trying to move my team onto a more robust project management technique. I'm wondering what the best technique to use is. I know that probably no technique is best, but which are the most popular techniques? Planning poker? Evidence Based Scheduling? COCOMO? Agile? Scrum? XP? Which one should I use? Also, suppose I use EBS - wouldn't it be too time-consuming to break down every single activity into fine-grained tasks? E.g. "Design" is a goal; what kind of fine-grained tasks will I have under it? Is this a waste of time, i.e. dividing work into so many micro parts? Usually when I give my programmers a task, I follow up every week, and they complete quite a lot of the work assigned to them (the tasks are very broad, e.g. X module). Is EBS worth it? Are there any white papers on it so that I can implement it on my own (instead of using FogBugz)? Most of my projects are web-based projects. Thank you for your time.

  • JavaScript function to add a class to a list element based on the # in the URL

    - by Jason
    I am trying to create a JavaScript function to add and remove a class on a list element based on the # tag at the end of the URL on a page. The page has several different states, each with a different # in the URL. I am currently using this script to change the style of a given element based on the # in the URL when the user first loads the page; however, if the user navigates to a different section of the page, the style added on page load stays, and I would like it to change.

        <script type="text/javascript">
        var hash = location.hash.substring(1);
        if (hash == 'strategy'){
            document.getElementById('strategy_link').style.backgroundPosition = "-50px";
        }
        if (hash == 'branding'){
            document.getElementById('branding_link').style.backgroundPosition = "-50px";
        }
        if (hash == 'marketing'){
            document.getElementById('marketing_link').style.backgroundPosition = "-50px";
        }
        if (hash == 'media'){
            document.getElementById('media_link').style.backgroundPosition = "-50px";
        }
        if (hash == 'management'){
            document.getElementById('mangement_link').style.backgroundPosition = "-50px";
        }
        if (hash == ''){
            document.getElementById('shop1').style.display = "block";
        }
        </script>

    Additionally, I am using a function to change the class of the element onClick, but when a user comes to a specific # on the page directly from another page and then clicks to a different location, two elements appear active.

        <script type="text/javascript">
        function selectInList(obj) {
            $("#circularMenu").children("li").removeClass("highlight");
            $(obj).addClass("highlight");
        }
        </script>

    You can see this here: http://www.perksconsulting.com/dev/capabilities.php#branding Thanks.
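
    A hedged sketch of one way to keep the highlight in sync (the id convention simply mirrors the *_link ids above; everything else is an assumption): compute the active item from the hash in one function and re-run it whenever the hash changes, not just on page load.

        <script type="text/javascript">
        // Sketch: derive the active link from the current hash each time it changes.
        function highlightFromHash() {
            var hash = location.hash.substring(1);
            $("#circularMenu li").removeClass("highlight");
            if (hash) {
                // Assumes ids follow the 'strategy_link', 'branding_link', ... pattern.
                $("#" + hash + "_link").closest("li").addClass("highlight");
            }
        }
        $(document).ready(highlightFromHash);
        $(window).bind("hashchange", highlightFromHash); // older browsers may need a polling fallback
        </script>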

  • IIS site always returns 404 to WinMo emulator

    - by Derick Bailey
    I'm running Win7 x64 Ultimate with Visual Studio 2008. I have a website built in ASP.NET 3.5 and hosted via IIS on my box. I can run the website perfectly fine and I can hit all of the web services that I have built in the website, using a web browser. When I pull up my Windows Mobile 6 emulator and hit the site (using my IP address), it always returns a 404 error. I have the emulator cradled with Device Emulator Manager and I can interact with the emulated device normally. I am also able to get out to google.com and other websites with the emulated device. I have also verified that the emulator is hitting my box by stopping the IIS website and seeing that the WinMo emulator cannot get any response; when I start the site again, I get a 404 error. When I pull up my site on my local dev box via Firefox or IE using the IP address, it works perfectly fine. The worst part is this worked perfectly fine a few weeks ago, when I used it last. I don't know that I've changed anything since then - I'm just trying to use the emulator to hit my site again. Help?! Update: my HTTP requests coming from the WinMo emulator are not getting logged in the IIS log files, while my requests from Firefox on my local box are getting logged. Not sure if that helps in figuring out the problem... Update 2: I can use the Ruby WEBrick server on my local box and hit that server from my emulator just fine. Is IIS not allowing me to hit the site from the emulator? Update 3: I cradled an actual WinMo device to my box with its networking turned off and was able to hit the IIS site just fine. That makes me think it's something set up wrong in the emulator.

  • Optimize website for touch devices

    - by gregers
    On a touch device like iPhone/iPad/Android it can be difficult to hit a small button with your finger. There is no cross-browser way to detect touch devices with CSS media queries that I know of. So I check if the browser has support for JavaScript touch events. Until now, other browsers haven't supported them, but the latest Google Chrome on the dev channel enabled touch events (even for non-touch devices). And I suspect other browser makers will follow, since laptops with touch screens are coming. This is the test I use:

        function isTouchDevice() {
            try {
                document.createEvent("TouchEvent");
                return true;
            } catch (e) {
                return false;
            }
        }

    The problem is that this only tests whether the browser has support for touch events, not the device. Does anyone know of The Correct[tm] way of giving touch devices a better user experience? Other than sniffing the user agent. Mozilla has a media query for touch devices, but I haven't seen anything like it in any other browser: https://developer.mozilla.org/En/CSS/Media_queries#-moz-touch-enabled Update: I want to avoid using a separate page/site for mobile/touch devices. The solution has to detect touch devices with object detection or similar from JavaScript, or include a custom touch CSS without user agent sniffing! The main reason I asked was to make sure it's not possible today, before I contact the CSS3 working group.
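
    A hedged alternative sketch (not from the original post, and the class name is arbitrary): check for the touch event interface on window and tag the root element, so touch-friendly sizing lives in ordinary CSS rather than a separate site.

        // Sketch: flag touch support on the root element and style against it.
        function hasTouch() {
            return ('ontouchstart' in window) ||
                   (window.DocumentTouch && document instanceof DocumentTouch);
        }
        if (hasTouch()) {
            document.documentElement.className += " touch";
        }
        // CSS can then opt in, e.g.:  .touch button { padding: 12px; }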

  • Import-PSSession is not importing cmdlets when used in a custom module

    - by Douglas Plumley
    I have a PowerShell script/function that works great when I use it in my PowerShell profile or manually copy/paste the function in the PowerShell window. I'm trying to make the function accessible to other members of my team as a module. I want to have the module stored in a central place so we can all add it to our PSModulePath. Here is a copy of the basic function:

        Function Connect-O365{
            $o365cred = Get-Credential [email protected]
            $session365 = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/powershell/ -Credential $o365cred -Authentication Basic -AllowRedirection
            Import-PSSession $session365 -AllowClobber
        }

    If I save this function in my PowerShell profile it works fine. I can dot source a *.ps1 script with this function in it and it works as well. The issue is when I save the function as a *.psm1 PowerShell script module. The function runs fine but none of the exported commands from the Import-PSSession are available. I think this may have something to do with the module scope. I'm looking for suggestions on how to get around this. EDIT: When I create the following module and run Connect-O365, the imported cmdlets will not be available.

        $scriptblock = {
            Function Connect-O365{
                $o365cred = Get-Credential [email protected]
                $session365 = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri "https://ps.outlook.com/powershell/" -Credential $o365cred -Authentication Basic -AllowRedirection
                Import-PSSession $session365 -AllowClobber
            }
        }
        New-Module -Name "Office 365" -ScriptBlock $scriptblock

    When I import the next module without the Connect-O365 function, the imported cmdlets are available.

        $scriptblock = {
            $o365cred = Get-Credential [email protected]
            $session365 = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri "https://ps.outlook.com/powershell/" -Credential $o365cred -Authentication Basic -AllowRedirection
            Import-PSSession $session365 -AllowClobber
        }
        New-Module -Name "Office 365" -ScriptBlock $scriptblock

    This appears to be a scope issue of some sort, just not sure how to get around it.
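
    A hedged sketch of the usual workaround (the diagnosis is an assumption, not confirmed in the thread): Import-PSSession generates a temporary module, and when it runs inside your .psm1 that temporary module is imported into the module's scope rather than the caller's; re-importing its output with Import-Module -Global pushes the cmdlets into the global session state.

        Function Connect-O365 {
            $o365cred = Get-Credential [email protected]
            $session365 = New-PSSession -ConfigurationName Microsoft.Exchange `
                -ConnectionUri "https://ps.outlook.com/powershell/" `
                -Credential $o365cred -Authentication Basic -AllowRedirection
            # Import-PSSession returns the temporary implicit-remoting module;
            # -Global makes its cmdlets visible outside this module's own scope.
            Import-Module (Import-PSSession $session365 -AllowClobber) -Global
        }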

  • Internationalizing a Python 2.6 application via Babel

    - by Malcolm
    We're evaluating Babel 0.9.5 [1] under Windows for use with Python 2.6 and have the following questions that we've been unable to answer through reading the documentation or googling.

    1) I would like to use an _-like abbreviation for ungettext. Is there a consensus on whether one should use n_ or N_ for this? n_ does not appear to work: Babel does not extract the text. N_ appears to partially work: Babel extracts text like it does for gettext, but does not format for ngettext (missing plural argument and msgstr[n]).

    2) Is there a way to set the initial msgstr fields like the following when creating a POT file? I suspect there may be a way to do this via Babel cfg files, but I've been unable to find documentation on the Babel cfg file format.

        "Project-Id-Version: PROJECT VERSION\n"
        "Language-Team: en_US \n"

    3) Is there a way to preserve 'obsolete' msgid/msgstr's in our PO files? When I use the Babel update command, newly created obsolete strings are marked with #~ prefixes, but existing obsolete message strings get deleted.

    Thanks, Malcolm

    [1] http://babel.edgewall.org/
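
    For question 1, a hedged command-line sketch (file names and paths are placeholders): Babel's extract command accepts xgettext-style keyword specs, so a plural-aware alias can be declared by listing the msgid and plural argument positions.

        # Sketch: tell the extractor that n_ takes msgid at position 1 and the plural at 2.
        pybabel extract -F babel.cfg -k "n_:1,2" -o messages.pot ./src

        # Some Babel versions also accept header values on the command line, e.g.:
        # pybabel extract --project "PROJECT" --version "VERSION" -o messages.pot ./src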

  • Ditching Django's models for Ajax/Web Services

    - by Igor Ganapolsky
    Recently I came across a problem at work that made me rethink Django's models. The app I am developing resides on a Linux server. It's a simple model/view/controller app that involves user interaction and updating data in the database. The problem is that this data resides in an MS SQL database on a Windows machine. So in order to use Django's models, I would have to leverage an ODBC driver on Linux, and then use a Python add-on like pyodbc. Well, let me tell you, setting up a reliable and functional ODBC connection on Linux is no easy feat! So much so that I spent several hours maneuvering this on my CentOS box with no luck, and was left with frustration and lots of dumb system errors. In the meantime I have a deadline to meet, and suddenly the very agile and rapid Django application is a roadblock rather than a pleasure to work with. Someone on my team suggested writing this app in .NET, but there are a few problems with that: it won't be deployable on a Linux machine, and I won't be able to work on it since I don't know ASP.NET. Then a much better suggestion was made: keep the app in Django, but instead of using models, make straight-up Ajax/web service calls from the template. And then it dawned on me - what a great idea. Django's models seem like a nuisance and a hindrance in this case, and I can just have someone else write .NET services on their side that I can call from my template. As a result my app will be leaner and more compact. So, I was wondering if you guys ever came across a similar dilemma and what you decided to do about it.

  • Porting some PHP to ColdFusion

    - by Oscar Godson
    OK, I'm working on converting some very basic PHP to port to a dev server where the client only has CF. I've never worked with it, and I just need to know how to port a couple of things:

        <?php
        $pageTitle = 'The City That Works';
        $mainCSSURL = 'header_url=../images/banner-home.jpg&amp;second_color=484848&amp;primary_color=333&amp;link_color=09c&amp;sidebar_color=f2f2f2';
        require('includes/header-inc.php');
        ?>

    I know:

        <cfinclude template="includes/header-inc.cfm">

    but how do I get the vars to be passed to the include, and then how do I use them in the subsequently included file? Also in my CSS (main.php) I have (at the top):

        <?php
        header('Content-type: text/css');
        foreach($_GET as $css_property => $css_value) {
            define(strtoupper($css_property), $css_value);
        }
        ?>

    and I'm using those constants like this:

        #main-content a {color:#<?= LINK_COLOR ?>;}

    How can I get that to work with CF? Never thought I'd be working with CF :)
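
    A hedged CFML sketch of how this might map over (treat it as an assumption-laden illustration, not a verified port): variables set with cfset before a cfinclude are visible inside the included template, and query-string values arrive in the url scope; ## escapes a literal # inside cfoutput.

        <!--- Sketch: set page variables, then include; the include sees them directly. --->
        <cfset pageTitle = "The City That Works">
        <cfset mainCSSURL = "header_url=../images/banner-home.jpg&second_color=484848&primary_color=333&link_color=09c&sidebar_color=f2f2f2">
        <cfinclude template="includes/header-inc.cfm">

        <!--- main.cfm (the CSS template): serve as CSS and read query-string values. --->
        <cfcontent type="text/css">
        <cfoutput>
        #main-content a { color: ###url.link_color#; }
        </cfoutput>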

  • Using include() to load different page content acts differently locally vs hosted

    - by hookedonwinter
    I have a live site that includes different PHP files depending on what page the user is trying to access. The header and footer are the same, but if the user requests filename1.php vs filename2.php, a different PHP file is loaded into the content of the page. Basic CMS stuff. On the live site, it works fine. I just set up a local dev environment, and it doesn't work: the file that is supposed to load into the middle of the page is instead the only file loaded. I'm not saying this well. Here's an example. How it works live:

        <html>
        <head>
        Stuff
        </head>
        <body>
        More stuff
        <? include( 'some_file.php' ); ?>
        </body>
        </html>

    How it works locally:

        <? include( 'some_file.php' ); ?>

    Just that file loads, no other content. Any thoughts on why that one page is loading, but not the surrounding content? If I'm not explaining this well, please let me know.

  • Unable to change URL for .NET reference to dynamic web service

    - by Malvineous
    Hi all, I have a web reference added to a C# .NET project. The URL for the web reference needs to change depending on whether I'm building for a development, staging or production environment. I've set the web reference to be dynamic, which supposedly means it takes the URL from my app.config file. When I perform a build, it overwrites app.config with the required file containing the correct URL (a different file for each of dev/staging/production). I then go into the solution properties and make sure the Settings.settings file is updated with the app.config changes. However, when I view the properties for the web reference, it still shows the old URL, despite it being dynamic and supposedly reading from my settings file (even after closing and reopening the project/solution). The app.config and the settings file both have the new URL, but the web reference doesn't notice it has changed. If I do a build, it ignores the URL in the settings file and tries to connect to the last URL manually typed into the web reference's properties. Typing a URL into these properties correctly updates the app.config and .settings files, so the link is definitely there. I'm a bit new to .NET, but it seems to me the purpose of setting the reference to dynamic is so that you can change the URL elsewhere, yet when I do this it just gets ignored! Am I doing something wrong?
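
    A hedged fallback sketch (class and key names are hypothetical): web-reference proxies derive from SoapHttpClientProtocol, which exposes a Url property, so the endpoint can also be set explicitly at runtime from whatever config value the build drops in.

        // Sketch: inside whatever method constructs the proxy; names below are hypothetical.
        var service = new MyWebReference.MyService();
        service.Url = System.Configuration.ConfigurationManager.AppSettings["MyServiceUrl"];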

  • How can I tackle 'profoundly found elsewhere' syndrome (inverse of NIH)?

    - by Alistair Knock
    How can I encourage colleagues to embrace small-scale innovation within our team(s), in order to get things done quicker and to encourage skills development? (The term 'profoundly found elsewhere' comes from Wikipedia, although it is scarcely used anywhere else apart from a reference to Procter & Gamble.) I've worked in environments where there is strong opposition to software that hasn't been developed in-house (usually because there's a large community of developers), and more recently (with far fewer central developers) where off-the-shelf products are far more favoured for the usual reasons: maintenance, total cost over the product lifecycle, risk management and so on. I think the off-the-shelf argument works in the majority of cases for the majority of users, even though as a developer the product never quite does what I'd like it to do. However, in some cases there are clear gaps where the market isn't able to provide specifically what we would need, or at least isn't able to without charging astronomical consultancy rates for a bespoke solution. These can be small web applications which provide a short-term solution to a particular need in one specific department, or larger developments that have the potential to serve a wider audience, both across the organisation and into external markets. The problem is that while development of these applications would be incredibly cheap in terms of developer hours, and delivered very quickly without the need for glacial consultation, the proposal usually falls flat because of risk: 'Who'll maintain the project tracker that hasn't had any maintenance for the past 7 years while you're on holiday for 2 weeks?' 'What if one of our systems changes and the connector breaks?' 'How can you guarantee it's secure/better/faster/cheaper/holier than Company X's?' With one developer behind these little projects, the answers are invariably: 'Nobody, but...' 'It will break, just like any other application would...' 'I, uh...' How can I better answer these questions and encourage people to take a little risk in order to stimulate creativity and fast-paced, short-lifecycle development, instead of spending that 6 months consulting about what tender process we might use?

  • A standard set of questions to ask an interviewer?

    - by Rob Wells
    We have had many questions about what interviewers should ask interviewees, but none addressing information flow in the other direction, from interviewee to interviewer - just an indirect question about "deal breakers" and one about "finding dream jobs". What I'm after: when you're interviewing at a company, do you have a set of questions that you like to ask to help get a feel for the company and the work environment? I have a series of questions that I like to ask, ranging from the development environment to testing techniques to how the team gets on together. Anything else you'd like to ask? Edit: I moved my original list of interviewer questions to my answer below. I've also gone through the other answers and added the ones I thought were useful to that answer. The answer is community wiki, so feel free to add anything useful. N.B. This is my first cut of categories. Feel free to modify/add/etc. the categories, or to recategorise the questions themselves.

  • Django database design -- Is this a good strategy for overriding defaults?

    - by rh0dium
    Hi SO, I have a question on good database design practices and I would like to leverage you guys for pointers. The project started out simple: "Hey, we have a bunch of questions we want answered for every project" (no problem). Which turned into: "Hey, we have so many questions, can we group them into sections?" (yup, we can do that). Which led to: "Can we weight these questions? And I don't really want some of these questions for my project" (yes, but we are getting difficult). And then I'm thinking they will want each section to have its own weight. Requirements. So there are the requirements, for n number of projects:

    - Allow an admin member to select the questions for a project
    - Allow the admin member to re-weight or use the default weights for the questions
    - Allow the admin member to re-weight the sections
    - Allow team members to answer the questions

    So here is what I came up with. Please feel free to comment and provide better examples. models.py:

        from django.db import models
        from django.contrib.sites.models import Site
        from django.conf import settings

        class Section(models.Model):
            """ This describes the various sections for a checklist. """
            name = models.CharField(max_length=64)
            description = models.TextField()

        class Question(models.Model):
            """ This simply provides a simple way to list out the questions. """
            question = models.CharField(max_length=255)
            answer_type = models.CharField(max_length=16)
            description = models.TextField()
            section = models.ForeignKey(Section)

        class ProjectQuestion(models.Model):
            """ These are the questions relevant to the project. """
            question = models.ForeignKey(Question)
            answer = models.CharField(max_length=255)
            required = models.BooleanField(default=True)
            weight = models.FloatField(default=XXX)

        class Project(models.Model):
            """ Here is where we want to gather our questions. """
            questions = models.ManyToManyField(ProjectQuestion)

    Immediate questions:

    - When I start a project, any ideas on how to "pre-populate" the questions (and ultimately the weights) for the project?
    - Is there a generally accepted method for doing this that I am missing? Basically the idea is that you refer to the questions, override your own default weight, and store the answer.
    - It appears that a good chunk of the work will be done in the views and that a lot of checking will need to occur there. Is that OK?

    Again, feel free to give me better strategies! Thanks
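
    On the pre-populate question, a hedged sketch built on the models above (the default_weight field and helper function are assumptions, not part of the original design): copy each chosen Question into a ProjectQuestion when the project is created, seeding the weight from a per-question default.

        # Sketch: assumes Question gains a default_weight field to seed ProjectQuestion.weight.
        def populate_project_questions(project, questions):
            """Create one ProjectQuestion per selected Question, using default weights."""
            for question in questions:
                pq = ProjectQuestion.objects.create(
                    question=question,
                    answer='',
                    required=True,
                    weight=question.default_weight,  # hypothetical field
                )
                project.questions.add(pq)

        # Usage: populate_project_questions(my_project, Question.objects.all())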

  • ASP.NET WebForms site using HttpCookie with 100-year timeout times out after 20 minutes

    - by Rob
    I have a site that is using Forms Auth. The client does not want the site session to expire at all for users. In the login page codebehind, the following code is used:

        // user passed validation
        FormsAuthentication.Initialize();
        // grab the user's roles out of the database
        String strRole = AssignRoles(UserName.Text);
        // create a forms auth ticket with an expiration date 100 years from now and make it persistent
        FormsAuthenticationTicket fat = new FormsAuthenticationTicket(1, UserName.Text, DateTime.Now,
            DateTime.Now.AddYears(100), true, strRole, FormsAuthentication.FormsCookiePath);
        // create a cookie and throw the ticket in there, set the expiration date to 100 years from now
        HttpCookie cookie = new HttpCookie(FormsAuthentication.FormsCookieName, FormsAuthentication.Encrypt(fat))
            { Expires = DateTime.Now.AddYears(100) };
        // add the cookie to the response queue
        Response.Cookies.Add(cookie);
        Response.Redirect(FormsAuthentication.GetRedirectUrl(UserName.Text, false));

    The web.config auth section looks like this:

        <authentication mode="Forms">
            <forms name="APLOnlineCompliance" loginUrl="~/Login.aspx" defaultUrl="~/Course/CourseViewer.aspx" />
        </authentication>

    When I log into the site I do see the cookie correctly being sent to the browser and passed back up. However, when I walk away for 20 minutes or so, come back and try to do anything on the site, the login window reappears. This solution was working for a while on our servers, but now the problem is back. It doesn't occur on my local dev box running Cassini in VS2008. Any ideas on how to fix this?
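
    A hedged config sketch of the usual suspects for this exact symptom (values are placeholders): the <forms> element's own timeout defaults to 20 minutes, and an auto-generated <machineKey> that changes across app-pool recycles or farm machines silently invalidates previously issued tickets; both are worth ruling out.

        <!-- Sketch only: placeholder keys, generate real ones for production. -->
        <system.web>
          <authentication mode="Forms">
            <forms name="APLOnlineCompliance"
                   loginUrl="~/Login.aspx"
                   defaultUrl="~/Course/CourseViewer.aspx"
                   timeout="525600"
                   slidingExpiration="true" />
          </authentication>
          <!-- A fixed machineKey keeps tickets decryptable across app-pool recycles
               and across machines in a web farm. -->
          <machineKey validationKey="...generate-a-key..."
                      decryptionKey="...generate-a-key..."
                      validation="SHA1"
                      decryption="AES" />
        </system.web>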

  • Cannot send email in ASP.NET through GoDaddy servers

    - by Jared
    I have an ASP.NET application hosted on GoDaddy that I want to send email from. When it runs, I get: "Mailbox name not allowed. The server response was: sorry, relaying denied from your location." The important parts of the code and Web.config are below:

        msg = new MailMessage("[email protected]", email);
        msg.Subject = "GreekTools Registration";
        msg.Body = "You have been invited by your organization to register for the GreekTools recruitment application.<br/><br/>" +
            url + "<br/><br/>" +
            "Sincerely,<br/>" +
            "The GreekTools Team";
        msg.IsBodyHtml = true;

        client = new SmtpClient();
        client.Host = "relay-hosting.secureserver.net";
        client.Send(msg);

        <system.net>
          <mailSettings>
            <smtp from="[email protected]">
              <network host="relay-hosting.secureserver.net" port="25" userName="********" password="*********" />
            </smtp>
          </mailSettings>
        </system.net>

  • m2eclipse: How to set Eclipse project settings when importing a maven project?

    - by Marius Andreiana
    Using the m2eclipse Eclipse plugin, everybody on the dev team should be able to check out source code, import the Maven project in Eclipse and be good to go. I saw m2eclipse is being merged into Eclipse 3.7, and maven-eclipse-plugin won't be maintained any longer, so I'm looking for an m2eclipse-based solution (without running "mvn eclipse:clean eclipse:eclipse" before project import, which is what maven-eclipse-plugin does). maven-eclipse-plugin allows this in pom.xml:

        <additionalConfig>
          <file>
            <name>.settings/com.google.gdt.eclipse.core.prefs</name>
            <content><![CDATA[
        eclipse.preferences.version=2
        jarsExcludedFromWebInfLib=
        warSrcDir=${project.build.directory}/${project.build.finalName}
        warSrcDirIsOutput=true
        ]]></content>
          </file>
        </additionalConfig>

    The more general question is: how would m2eclipse do something similar? For some cases, just saving the Eclipse .settings prefs file works (e.g. org.eclipse.jdt.ui.prefs), but in this case com.google.gdt.eclipse.core.prefs is always overwritten on m2eclipse project import. A specific question is asked here, with no reply. Thanks! UPDATE: Not possible now, see request.

  • Eclipse - Import existing multi-repo CVS project folder

    - by iQ
    Hey guys, wondering if anyone can help me out with Eclipse in terms of importing an existing CVS-managed project. I am currently trying to shift my work onto the Eclipse IDE. Some details about my project and environment: I'm working in Linux (Ubuntu), the project folder is located on a mounted shared network drive, and I have installed the "Eclipse CVS Client" plug-in for my version of Eclipse (Helios). I've tried many ways to get Eclipse to use my existing folder as a project and recognize the CVS data in the CVS folders. I have done the following:

    1) Created a new project, selected existing source, located my project folder and clicked OK to finish creating. In the end the CVS files weren't automatically read.
    2) Did the same as above, and after project creation I went to "Project menu - Team - Share project"; it asks me to choose a repository and doesn't automatically find the CVS information in the subfolders.

    If you're wondering, I have set up both repositories in my Eclipse and can browse them through the CVS browser. My project directory layout is like this:

        +-Project Folder (no CVS folder at this level)
        +---Repo A folder
        +-----CVS meta-info folder is INSIDE, along with all checked out files from Repo A
        +
        +---Repo B folder
        +-----CVS meta-info folder is INSIDE, along with all checked out files from Repo B
        +
        +-(couple of random files, not in CVS)

    Thanks for the help

  • Push TFS 2008 code to remote VSS over VPN?

    - by drovani
    We have a local Team Foundation Server 2008 where we keep our code under version control. However, we also have a paranoid client with their own Visual SourceSafe installation who wants us to keep a running copy of the code on their server as well. As such, I'm hoping there is a way I can just do a nightly push from our TFS repository to their VSS repository. I'm not concerned about keeping each changeset on TFS as a different changeset on the VSS side - just a once-nightly push that creates a new changeset on the VSS and uploads the latest changeset from TFS. I guess the first part is whether it is even possible for TFS to push an update to VSS. I've noticed that most replies to this question have been something to the tune of "don't do it", but I can't find anything that specifically states that it cannot be done. The second part would then be automating the process by having the TFS server connect to the client's VPN, then push the code changes. I have full control over the TFS server and I can customize the VSS install if there are settings that need changing, but I'm limited on what I can do about firewall settings or server-specific settings on the client's VSS server.

  • Delete ONE SPECIFIC table of a database - leave the rest intact

    - by Jayomat
    Hi, I have a database where I store two different kinds of data. One table is for favorite routes, the other stores the routes retrieved from a server. I can retrieve the routes just fine, but after retrieving the first route, pressing back or HOME, and then retrieving another route, the routes table is filled with all the old routes plus the new ones. So my question: how do I clear ONLY the routes table and not the whole database, because I don't want to delete the added favorites? I found the following method in the Android docs:

        public int delete (String table, String whereClause, String[] whereArgs)

    and I tried to implement it, but I must pass a SQLiteDatabase as an argument. But how? I implemented:

        public void deleteTableRoutes(SQLiteDatabase db) {
            db.delete("routes", null, null);
        }

    But I want to call this function from a different class where I have no reference to the database, so what do I have to pass as an argument? Or how do I get a reference to my database? I built my database upon the NotePad example from the dev docs. How do I solve this problem? Thanks
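
    A hedged sketch of one common approach (class, database name and version are assumptions modeled on the NotePad sample's open-helper pattern): keep a SQLiteOpenHelper reachable from any class and ask it for a writable database wherever the table needs clearing.

        import android.content.Context;
        import android.database.sqlite.SQLiteDatabase;
        import android.database.sqlite.SQLiteOpenHelper;

        // Sketch: any class can obtain the database through the open helper,
        // so nothing needs to pass a SQLiteDatabase instance around.
        public class RoutesDbHelper extends SQLiteOpenHelper {          // hypothetical helper
            public RoutesDbHelper(Context context) {
                super(context, "routes.db", null, 1);                   // assumed name/version
            }
            @Override public void onCreate(SQLiteDatabase db) { /* create tables here */ }
            @Override public void onUpgrade(SQLiteDatabase db, int oldV, int newV) { }

            public void clearRoutes() {
                SQLiteDatabase db = getWritableDatabase();
                db.delete("routes", null, null);   // removes all rows, leaves the favorites table alone
            }
        }

        // Usage from another class:
        // new RoutesDbHelper(context).clearRoutes();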

  • Getting broken link error while using App Engine service accounts

    - by jade
    I'm following this tutorial: https://developers.google.com/bigquery/docs/authorization#service-accounts-appengine Here is my main.py code:

        import httplib2

        from apiclient.discovery import build
        from google.appengine.ext import webapp
        from google.appengine.ext.webapp.util import run_wsgi_app
        from oauth2client.appengine import AppAssertionCredentials

        # BigQuery API Settings
        SCOPE = 'https://www.googleapis.com/auth/bigquery'
        PROJECT_NUMBER = 'XXXXXXXXXX'  # REPLACE WITH YOUR Project ID

        # Create a new API service for interacting with BigQuery
        credentials = AppAssertionCredentials(scope=SCOPE)
        http = credentials.authorize(httplib2.Http())
        bigquery_service = build('bigquery', 'v2', http=http)

        class ListDatasets(webapp.RequestHandler):
            def get(self):
                datasets = bigquery_service.datasets()
                listReply = datasets.list(projectId=PROJECT_NUMBER).execute()
                self.response.out.write('Dataset list:')
                self.response.out.write(listReply)

        application = webapp.WSGIApplication(
            [('/listdatasets(.*)', ListDatasets)],
            debug=True)

        def main():
            run_wsgi_app(application)

        if __name__ == "__main__":
            main()

    Here is my app.yaml file:

        application: bigquerymashup
        version: 1
        runtime: python
        api_version: 1

        handlers:
        - url: /favicon\.ico
          static_files: favicon.ico
          upload: favicon\.ico

        - url: .*
          script: main.py

    And yes, I have added the App Engine service account name in the Google API console Team tab with "can edit" permissions. When I upload the app and try to access the link, it says "Oops! This link appears to be broken." Earlier I ran this locally and tried to access it using localhost:8080. Then I thought maybe running locally might be causing the error, so I uploaded my code to http://bigquerymashup.appspot.com/ but it still gives the error.
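
    One hedged observation based only on the code above (the extra handler is an assumption): the WSGI application maps only /listdatasets, so the bare appspot.com root has nothing to serve; adding a root route makes that possibility easy to rule out.

        # Sketch: map '/' as well, so the root URL responds instead of 404ing.
        class MainPage(webapp.RequestHandler):
            def get(self):
                self.response.out.write('Try /listdatasets')

        application = webapp.WSGIApplication(
            [('/', MainPage),
             ('/listdatasets(.*)', ListDatasets)],
            debug=True)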

  • Python virtualenv questions

    - by orokusaki
    I'm using virtualenv on Windows XP, and I'm wondering if I have my brain wrapped around it correctly. I ran virtualenv ENV and it created C:\WINDOWS\system32\ENV. I then changed my PATH variable to include C:\WINDOWS\system32\ENV\Scripts instead of C:\Python27\Scripts. Then, I checked out Django into C:\WINDOWS\system32\ENV\Lib\site-packages\django-trunk, updated my PYTHON_PATH variable to point to the new Django directory, and continued to easy_install other things (which of course go into my new C:\WINDOWS\system32\ENV\Lib\site-packages directory). I understand why I should use virtualenv so I can run multiple versions of Django and other libraries on the same machine, but does this mean that to switch between environments I have to basically change my PATH and PYTHON_PATH variables? So, I go from developing one Django project which uses Django 1.2 in an environment called ENV, and then change my PATH and such so that I can use an environment called ENV2 which has the dev version of Django? Is that basically it, or is there some better way to automatically do all this (I could update my path in Python code, but that would require me to write machine-specific code in my application)? Also, how does this process compare to using virtualenv on Linux (I'm quite the beginner at Linux)?
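
    A hedged sketch of the usual workflow (paths and versions are illustrative): each virtualenv ships an activate script that adjusts PATH for the current console only, so switching environments is one command rather than hand-editing variables, and environments normally live in an ordinary directory rather than system32.

        REM Sketch (Windows cmd): create environments somewhere ordinary.
        virtualenv C:\envs\proj1-env
        virtualenv C:\envs\proj2-env

        REM Activate one; its Scripts dir is prepended to PATH for this console only.
        C:\envs\proj1-env\Scripts\activate.bat

        REM Install per-environment packages; they land in that env's site-packages.
        pip install Django==1.2.7

        REM Leave the environment.
        deactivate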

  • Detect how many times the user has clicked the button...

    - by Jerry
    Hello guys. I just want to know if there is a way to detect how many times a user has clicked a button using jQuery. My main application has a button that adds input fields depending on the user; he/she can add as many input fields as they need. When they submit the form, the add page inserts the data into my database. My current idea is to create a hidden input field and set its value to zero. Every time a user clicks the button, jQuery updates the hidden input field's value, and then the add page can tell how many times to loop. See the example below. I just want to know if there are better practices for doing this. Thanks for the help.

    Main page:

        <form method='post' action='add.php'>
        <!-- omit -->
        <input type="hidden" id="add" name="add" value="0"/>
        <input type="button" id="addMatch" value="Add a match"/>
        <!-- omit -->
        </form>

    jQuery:

        $(document).ready(function(){
            var a = 0;
            $("#addMatch").live('click', function(){
                // the input field will append as many times as the user wants
                $('#table').append("<input name='match" + a + "Name' />");
                a++;
                $('#add').val(a);   // pass the count to the hidden input field
                return false;
            });
        });

    Add page (add.php):

        $a = $_POST['add'];   // how many matchName input fields were added
        for ($k = 0; $k < $a; $k++) {
            // get each matchName input field
            $matchName = $_POST['match' . $k . 'Name'];
            // insert the match
            $updateQuery = mysql_query("INSERT INTO game (team) values('$matchName')", $connection);
            if (!$updateQuery) {
                die('MySQL error: ' . mysql_error());
            }
        }

  • JTA or LOCAL transactions in JPA2+Hibernate 3.6.0?

    - by Pangea
    We are in the process of re-thinking our tech stack and below are our choices (we can't live without Spring and Hibernate due to the complexity of the app). We are also moving from J2EE 1.4 to Java EE 5. Tech stack:

    - Java EE 5
    - JPA 2.0 (I know Java EE 5 only supports JPA 1.0, but we want to use Hibernate as the JPA provider)
    - Hibernate 3.6.0 (we already have lots of hbm files with custom types etc., so we don't want to migrate them to JPA at this time. This means we want both JPA and hbm mappings to work together, hence Hibernate as the JPA provider instead of the default that comes with the app server)

    Now the problem is that I want to stick with local transactions, but other team members want to use JTA. I have been working with J2EE for the last 9 years and I've heard time and again people suggesting to stick with local transactions if you don't need two-phase commits. This is not only for performance reasons; debugging/troubleshooting a local transaction is a lot easier than a distributed transaction. My suggestion is to use Spring declarative transaction management + local transactions (HibernateTransactionManager). I want to know whether I am being paranoid or I have a valid point. I'd like to hear what the rest of the JEE world thinks. Thank you.
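
    A hedged sketch of the local-transaction setup being suggested (bean names, the mapping file, and the declared tx namespace are assumptions): Spring's HibernateTransactionManager drives plain resource-local transactions, and declarative transaction demarcation stays the same if a JtaTransactionManager ever needs to be swapped in.

        <!-- Sketch: Spring declarative transactions over a local Hibernate SessionFactory.
             Assumes the tx namespace is declared and a dataSource bean exists. -->
        <bean id="sessionFactory"
              class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
            <property name="dataSource" ref="dataSource"/>
            <property name="mappingResources">
                <list>
                    <value>com/example/Order.hbm.xml</value> <!-- hypothetical mapping -->
                </list>
            </property>
        </bean>

        <bean id="transactionManager"
              class="org.springframework.orm.hibernate3.HibernateTransactionManager">
            <property name="sessionFactory" ref="sessionFactory"/>
        </bean>

        <!-- Enables @Transactional on services; swapping in a JtaTransactionManager later
             would not change the service code. -->
        <tx:annotation-driven transaction-manager="transactionManager"/>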
