Search Results

Search found 5413 results on 217 pages for 'git pull'.

Page 65/217

  • Pull dynamic string out of embedded RESX

    - by AC
    I have a RESX resource file whose values I can retrieve through the strongly typed properties and assign to values in my Silverlight 3 app. However, at runtime I need to generate the lookup key from some values, to avoid a ton of if/switch statements. I'm trying to use the ResourceManager, but it doesn't like .resx files. I've searched far and wide... is this just not possible in SL3? Does anyone have another suggestion?
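    One hedged workaround (a sketch under the assumption that Visual Studio generated a designer class for the .resx file): the generated resource class exposes a static ResourceManager property, so a dynamically built key can go through GetString without parsing the .resx at runtime. AppResources below is a hypothetical name; substitute the real generated class.

        using System;

        static class ResourceLookup
        {
            // Build the key from runtime values instead of a big if/switch.
            public static string Find(string category, int code)
            {
                string key = string.Format("{0}_{1}", category, code);

                // AppResources is the hypothetical strongly typed designer class;
                // its ResourceManager reads the compiled resources, not the .resx.
                string value = AppResources.ResourceManager.GetString(key);
                return value ?? string.Format("[missing resource: {0}]", key);
            }
        }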

    Read the article

  • Pull info from Excel Spreadsheet into C# Web Application

    - by Gene
    I want to show information from an Excel spreadsheet in a web page. The xls will have maybe 5 columns, and as many rows as needed. The columns will be 'Name', 'Created Date', 'Expired Date', 'Owner', and maybe more down the road. What I would like is for the C# code to read the xls nightly and display the information in a table on the website. If there is a PDF available for download I am going to link that as well. Thanks in advance, Gene
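    A minimal sketch of the nightly read, assuming the classic Jet OLE DB provider is installed on the server and the worksheet is named Sheet1 (both assumptions); a scheduled task could run this and cache the resulting table for the page to render:

        using System.Data;
        using System.Data.OleDb;

        static class SpreadsheetReader
        {
            public static DataTable ReadXls(string path)
            {
                // Jet reads .xls files; "HDR=Yes" treats the first row as column names.
                string connStr = "Provider=Microsoft.Jet.OLEDB.4.0;" +
                                 "Data Source=" + path + ";" +
                                 "Extended Properties=\"Excel 8.0;HDR=Yes\"";

                using (var conn = new OleDbConnection(connStr))
                using (var adapter = new OleDbDataAdapter("SELECT * FROM [Sheet1$]", conn))
                {
                    var table = new DataTable();
                    adapter.Fill(table); // one DataRow per spreadsheet row
                    return table;        // bind this to a GridView or similar
                }
            }
        }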

    Read the article

  • Data access pattern, combining push and pull?

    - by andlju
    I need some advice on what kind of pattern(s) I should use for pushing/pulling data into my application. I'm writing a rule-engine that needs to hold quite a large amount of data in-memory in order to be efficient enough. I have some rather conflicting requirements:

    - It is not acceptable for the engine to always have to wait for a full pre-load of all data before it is functional.
    - Only fetching and caching data on-demand will lead to the engine taking too long before it is running quickly enough.
    - An external event can trigger the need for specific parts of the data to be reloaded.

    Basically, I think I need a combination of pushing and pulling data into the application. A simplified version of my current "pattern" looks like this (in pseudo-C# written in Notepad):

        // This interface is implemented by all classes that need the data
        interface IDataSubscriber {
            void RegisterData(Entity data);
        }

        // This interface is implemented by the data access class
        interface IDataProvider {
            void EnsureLoaded(Key dataKey);
            void RegisterSubscriber(IDataSubscriber subscriber);
        }

        class MyClassThatNeedsData : IDataSubscriber {
            IDataProvider _provider;

            MyClassThatNeedsData(IDataProvider provider) {
                _provider = provider;
                _provider.RegisterSubscriber(this);
            }

            public void RegisterData(Entity data) {
                // Save data for later
                StoreDataInCache(data);
            }

            void UseData(Key key) {
                // Make sure that the data has been stored in cache
                _provider.EnsureLoaded(key);
                Entity data = GetDataFromCache(key);
            }
        }

        class MyDataProvider : IDataProvider {
            List<IDataSubscriber> _subscribers;

            // Make sure that the data for key has been loaded to all subscribers
            public void EnsureLoaded(Key key) {
                if (HasKeyBeenMarkedAsLoaded(key))
                    return;
                PublishDataToSubscribers(key);
                MarkKeyAsLoaded(key);
            }

            // Force all subscribers to get a new version of the data for key
            public void ForceReload(Key key) {
                PublishDataToSubscribers(key);
                MarkKeyAsLoaded(key);
            }

            void PublishDataToSubscribers(Key key) {
                Entity data = FetchDataFromStore(key);
                foreach (var subscriber in _subscribers) {
                    subscriber.RegisterData(data);
                }
            }
        }

        // This class will be spun off on startup and should make sure that all data is
        // preloaded as quickly as possible
        class MyPreloadingThread {
            IDataProvider _provider;

            MyPreloadingThread(IDataProvider provider) {
                _provider = provider;
            }

            void RunInBackground() {
                IEnumerable<Key> allKeys = GetAllKeys();
                foreach (var key in allKeys) {
                    _provider.EnsureLoaded(key);
                }
            }
        }

    I have a feeling, though, that this is not necessarily the best way of doing this. Just the fact that explaining it seems to take two pages feels like an indication. Any ideas? Any patterns out there I should have a look at?

    Read the article

  • How to pull data from a MySQL column and put it into a single array

    - by Rob
    Basically, this is what I currently use in an included file:

        $sites[0]['url'] = "http://example0.com";
        $sites[1]['url'] = "http://example1.com";
        $sites[2]['url'] = "http://example2.com";
        $sites[3]['url'] = "http://example3.com";
        $sites[4]['url'] = "http://example4.com";
        $sites[5]['url'] = "http://example5.com";

    And so I output it like so:

        foreach($sites as $s)

    But I want to make it easier to manage via a MySQL database. So my question is, how can I make it automatically add additional "$sites[x]['url'] = "http://examplex.com";" entries and output them appropriately from my MySQL table?

    Read the article

  • Determine if a url matches a Route, and pull out the terms if it does

    - by Kevin Montrose
    I've got a big old log file I'm trying to break down in terms of routes. Essentially, I'm getting a path as input (/questions/31415, for example) and a list of all the registered Routes. What I want out is a Route and the parameters specified in the route (so for /questions/{id}/{answer} I'd get id and answer out). I've got a working solution that basically generates a nasty bit of regex on the fly, with named groups, to do matching and parsing all-in-one. My gut tells me this is a brittle way to do it, and frankly there has to be a better way, right?
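    For illustration, a minimal matcher that skips regex generation entirely and compares path segments directly; this is a sketch of the idea, not the ASP.NET routing API:

        using System;
        using System.Collections.Generic;

        static class RouteMatcher
        {
            // Returns the extracted {token} values, or null if the path doesn't match.
            public static Dictionary<string, string> Match(string template, string path)
            {
                string[] t = template.Trim('/').Split('/');
                string[] p = path.Trim('/').Split('/');
                if (t.Length != p.Length) return null;

                var values = new Dictionary<string, string>();
                for (int i = 0; i < t.Length; i++)
                {
                    if (t[i].StartsWith("{") && t[i].EndsWith("}"))
                        values[t[i].Trim('{', '}')] = p[i]; // capture the parameter
                    else if (!string.Equals(t[i], p[i], StringComparison.OrdinalIgnoreCase))
                        return null; // literal segment differs
                }
                return values;
            }
        }

    So Match("questions/{id}/{answer}", "/questions/31415/42") would yield id=31415 and answer=42; optional segments, defaults, and constraints would still need handling on top of this.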

    Read the article

  • Building MobileVLCKit from the git.videolan.org repository on Mac OS X with Xcode

    - by ztepsic
    I would like to make an application for iOS (iPhone and iPad) that can play streaming video over the RTSP protocol (including mms). I planned to achieve this using the VLC player or the libVLC library. On the official vlc git repository (http://git.videolan.org/?p=vlc.git;a=tree), in the projects/macosx/framework/ folder, there is an Xcode project, MobileVLCKit.xcodeproj, which I assume is a somewhat usable VLC framework for iOS. Now the problem is that I can't/don't know how to build this project. When I try to build MobileVLCKit.xcodeproj I get an error saying it can't find files inside the extras/contrib/hosts/i686-apple-darwin10/ios/ folder. I have looked within that folder (extras/contrib) and managed to create the folder (with files) extras/contrib/hosts/i686-apple-darwin10/ with make, but there is no ios folder. So, does anybody know how to successfully build MobileVLCKit?

    Read the article

  • using araxis merge for folder comparison on git branches (OSX)

    - by memo
    I know how to set up Araxis Merge to be my git diff/merge tool, so if I do git difftool it automatically launches Araxis Merge. However, if I do git difftool upstream/master (to see all the differences between the current branch and upstream/master), it launches the app one by one for every single file that is different. Is there a way of setting it up so I can get a folder-comparison type view, and then go down and view each file diff as I choose? i.e. similar to this: http://www.araxis.com/merge_mac/overview2.html The only way I've found to do this is to clone my repo into a new folder, switch to the branch there, and then do a normal Araxis Merge folder comparison.

    Read the article

  • Push or Pull to Excel for reporting data

    - by Nathan Fisher
    I am unsure which is the best way to go here. I have a third-party Excel 2003 spreadsheet that needs to be filled in on a monthly basis and emailed. Currently it is a manual process, and I am automating the generation of the spreadsheet. I have been throwing around different ideas of how to get the data into the spreadsheet. I have thought of using SSRS to create a report in a similar format and getting the user to cut and paste. Alternatively, I could write a VBA add-in that retrieves the data from a web service and then adds it to the spreadsheet. Or I could use the third-party spreadsheet as a template, open it on the server via OLE DB, add the data, and then serve it as a downloadable file. Which is better, or are there better solutions out there?
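    If the template-on-the-server option wins, here is a hedged sketch of the OLE DB piece; the Jet provider, the worksheet name Report, and the column names are all assumptions:

        using System.Data.OleDb;

        static class ReportWriter
        {
            // Appends one row to the (assumed) Report worksheet of a copied template.
            public static void AppendRow(string xlsPath, string name, decimal amount)
            {
                string connStr = "Provider=Microsoft.Jet.OLEDB.4.0;" +
                                 "Data Source=" + xlsPath + ";" +
                                 "Extended Properties=\"Excel 8.0;HDR=Yes\"";

                using (var conn = new OleDbConnection(connStr))
                using (var cmd = new OleDbCommand(
                    "INSERT INTO [Report$] ([Name], [Amount]) VALUES (?, ?)", conn))
                {
                    cmd.Parameters.AddWithValue("?", name);
                    cmd.Parameters.AddWithValue("?", amount);
                    conn.Open();
                    cmd.ExecuteNonQuery(); // writes the row directly into the sheet
                }
            }
        }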

    Read the article

  • Javascript to pull formatted data

    - by Dusty Roberts
    Hi there. I have a recruitment portal that people can use to advertise and search for jobs. I would like recruiters to be able to add a small JavaScript snippet to their personal websites that will list jobs from my site. How can I go about this? I have web services set up, so the JavaScript can just call those, but I also need the result to be formatted and placed inline. This should work in a similar way to Google AdSense. I would really appreciate a small example.
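    The usual approach here is the script-include pattern AdSense itself uses: the recruiter embeds one <script src="..."> tag, and the portal returns JavaScript that document.write()s the formatted list inline. A hedged ASP.NET sketch (the handler, the query parameter, and GetJobsHtml are all assumptions):

        using System.Web;

        public class JobListHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                context.Response.ContentType = "text/javascript";

                // Hypothetical helper: builds the formatted markup for this recruiter.
                string html = GetJobsHtml(context.Request.QueryString["recruiter"]);

                // Escape for a single-quoted JS string, then emit document.write.
                string escaped = html.Replace("\\", "\\\\")
                                     .Replace("'", "\\'")
                                     .Replace("\r", "")
                                     .Replace("\n", "");
                context.Response.Write("document.write('" + escaped + "');");
            }

            static string GetJobsHtml(string recruiterId)
            {
                return "<ul class=\"jobs\"><li>Sample job</li></ul>"; // placeholder
            }
        }

    The recruiter's page would then include something like <script src="http://yourportal.com/jobs.js?recruiter=123"></script> (URL hypothetical).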

    Read the article

  • Cannot pull newly-added top-level directory into sparsely-checked out SVN repository

    - by Tim Keating
    Our SVN repository is quite large, and pulling the whole thing takes some time. When checking out at home, I was pleased to discover the sparse checkout feature; I checked out the whole repository to a depth of 1, then pulled each top-level directory (directly under the trunk) that I needed to a depth of infinity. Until now this has been brilliant. Recently I added a new directory under trunk. When I do an svn up, I get nothing. The TLD I added will not sync. I normally use TortoiseSVN, so I tried doing this from the command line. I tried explicitly specifying the name of the directory, adding --depth infinity, adding --force. None of these tricks has worked. What am I missing?

    Read the article

  • Nerd Dinner tutorial question (can't pull data off Model)

    - by Alexander
    I am just following the tutorial about NerdDinner and was stuck because when I implement the "Index" view template I get no data from it. It seems that something goes wrong when I loop around in the code below:

        <asp:Content ID="Title" ContentPlaceHolderID="TitleContent" runat="server">
            Upcoming Dinners
        </asp:Content>

        <asp:Content ID="Main" ContentPlaceHolderID="MainContent" runat="server">
            <h2>Upcoming Dinners</h2>
            <ul>
                <% foreach (var dinner in Model) { %>
                    <li>
                        <%= Html.ActionLink(dinner.Title, "Details", new { id = dinner.DinnerID }) %>
                        on <%= Html.Encode(dinner.EventDate.ToShortDateString()) %>
                        @ <%= Html.Encode(dinner.EventDate.ToShortTimeString()) %>
                    </li>
                <% } %>
            </ul>
        </asp:Content>

    There seems to be nothing in the Model... why is this? I am pretty sure there's data in the database, because the Details view template worked. FYI I am following the tutorial up to this page.
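    For comparison, a hedged reconstruction of what the controller should look like at this point in the tutorial: if the Index action never hands the dinner list to View(), Model stays empty even though the database has rows. Names follow the NerdDinner tutorial, but treat this as a sketch rather than its exact listing.

        using System.Linq;
        using System.Web.Mvc;

        public class DinnersController : Controller
        {
            DinnerRepository dinnerRepository = new DinnerRepository();

            // GET: /Dinners/
            public ActionResult Index()
            {
                var dinners = dinnerRepository.FindUpcomingDinners().ToList();

                // Without passing the list here, the view's Model is empty.
                return View(dinners);
            }
        }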

    Read the article

  • Pull datasets from DB and manipulate into separate arrays

    - by dresdin
    My DB has four fields: date:date, morning:integer, afternoon:integer, evening:integer. Google Charts needs the data split into one array per dataset (example below). What's the best way to do this?

        ['Day', 'Morning', 'Afternoon', 'Evening'],
        ['06/27/13', 1000, 400, 234],
        ['06/28/13', 1170, 460, 275],
        ['06/29/13', 660, 1120, 377],
        ['06/30/13', 1030, 540, 934]

    I've tried this without much luck:

        var data = google.visualization.arrayToDataTable([
          <% @todo.each do |t| %>
            ['Day', 'Morning', 'Afternoon', 'Evening'],
            [<%= t.date.strftime("%m/%d/%y") %>, <%= t.morning %>, <%= t.afternoon %>, <%= t.evening %>]
          <% end %>
        ]);

    Read the article

  • Pull specific information from a long list with Perl

    - by melignus
    The file that I've got to work with here is the result of an LDAP extraction, but I need to ultimately get the information formatted over to something that a spreadsheet can use. So, the data is as follows:

        DataDataDataDataDataDataDataDataDataDataDataDataDataDataDataData
        DataDataDataDataDataDataDataDataDataDataDataDataDataDataDataData
        displayName: John Doe
        name: ##userName
        DataDataDataDataDataDataDataDataDataDataDataDataDataDataDataData
        DataDataDataDataDataDataDataDataDataDataDataDataDataDataDataData
        displayName: Jane Doe
        name: ##userName
        DataDataDataDataDataDataDataDataDataDataDataDataDataDataDataData
        DataDataDataDataDataDataDataDataDataDataDataDataDataDataDataData
        displayName: Ted Doe
        name: ##userName

    The format that I need to export to is:

        firstName lastName userName
        firstName lastName userName
        firstName lastName userName

    where the spaces are tabs, so I can then import that file into a database. I have experience doing this in VBScript, but I'm trying to switch over to using Perl for as much server administration as possible. I'm not sure on the syntax for what I want, which is basically:

        while not endoffile {
            detect "displayName: " & $firstName & " " & $lastName
            detect "name: ##" & $userName
            write $firstName tab $lastName tab $userName to file
        }

    Also, if someone could point me to a resource specifically on the text-parsing syntax that Perl uses, I'd be very grateful. Most of the resources that I've come across haven't been very helpful.

    Read the article

  • using jQuery $.get to pull data

    - by sadhu_
    Hi, I am trying to send a date to the server as part of a GET transaction. However, looking at the URLs (through Firebug), it appears that the var sDate is never replaced with its value. Sorry, I'm very new to JS and jQuery, so this is likely a very elementary mistake - I'm guessing it has something to do with the map that I am trying to pass to $.get?

        function drawVisualization(sDate, eDate) {
            alert(sDate + eDate);
            $.get(
                "http://localhost:8080/",
                "{'start':sDate}",
                function(data) {
                    alert(data);
                },
                "html"
            );
        }

    Read the article

  • alter prettyPhoto to pull from div or span, not title

    - by Jason
    Here's the section of code from prettyphoto.js that I need to alter. What I am trying to achieve is to have each photo followed by a hidden div or span that holds the description for the photo. This way I can include HTML in the description. Possible? Thanks!

        var images = new Array(), titles = new Array(), descriptions = new Array();
        if (theGallery) {
            $('a[rel*=' + theGallery + ']').each(function(i) {
                if ($(this)[0] === $(_self)[0]) setPosition = i; // Get the position in the set
                images.push($(this).attr('href'));
                titles.push($(this).find('img').attr('alt'));
                descriptions.push($(this).attr('title'));
            });
        } else {
            images = $(this).attr('href');
            titles = ($(this).find('img').attr('alt')) ? $(this).find('img').attr('alt') : '';
            descriptions = ($(this).attr('title')) ? $(this).attr('title') : '';
        }
        $.prettyPhoto.open(images, titles, descriptions);
        return false;
        });

    Read the article

  • JSF 2 Content Controller (pull in content based on URI)

    - by gerges
    Hey All, I'm new to JSF and am trying to make a content controller. Basically, whenever someone makes a request to www.myapp.com/external/**, I'd like to forward to a controller that pulls external content into a page template and spits it out to the user. I was able to achieve this pretty easily in Spring 3, but I'm a little confused about where to start with JSF. I feel like I'd need to create a custom servlet to handle /external/**? But what would the class of this servlet be? What would it consist of? Any help is appreciated!

    Read the article

  • yum trying to install el5 when I am on el6

    - by giorgio79
    When I run the following yum command I get this error:

        Package: git-1.7.10.1-1.el5.rf.x86_64 (rpmforge)
        Requires: libcurl.so.3()(64bit)

    I read that this error is due to running an el5 rpmforge repository or having some el5 packages installed. How can I solve this problem?

        $ yum install git
        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile
         * base: centos.kiewel-online.ch
         * epel: fedora.kiewel-online.ch
         * extras: centos.kiewel-online.ch
         * rpmforge: mirror.de.leaseweb.net
         * updates: centos.kiewel-online.ch
        Setting up Install Process
        Resolving Dependencies
        --> Running transaction check
        ---> Package git.x86_64 0:1.7.10.1-1.el5.rf will be installed
        --> Processing Dependency: perl-Git = 1.7.10.1-1.el5.rf for package: git-1.7.10.1-1.el5.rf.x86_64
        --> Processing Dependency: perl(Git) for package: git-1.7.10.1-1.el5.rf.x86_64
        --> Processing Dependency: libexpat.so.0()(64bit) for package: git-1.7.10.1-1.el5.rf.x86_64
        --> Processing Dependency: libcurl.so.3()(64bit) for package: git-1.7.10.1-1.el5.rf.x86_64
        --> Running transaction check
        ---> Package compat-expat1.x86_64 0:1.95.8-8.el6 will be installed
        ---> Package git.x86_64 0:1.7.10.1-1.el5.rf will be installed
        --> Processing Dependency: libcurl.so.3()(64bit) for package: git-1.7.10.1-1.el5.rf.x86_64
        ---> Package perl-Git.x86_64 0:1.7.10.1-1.el5.rf will be installed
        --> Finished Dependency Resolution
        Error: Package: git-1.7.10.1-1.el5.rf.x86_64 (rpmforge)
               Requires: libcurl.so.3()(64bit)
        You could try using --skip-broken to work around the problem
        You could try running: rpm -Va --nofiles --nodigest

    Read the article

  • SQL Server: pulling and updating local data

    - by SDReyes
    Hi guys, I have two SQL Server 2008 databases called Anna and Bob. Bob has to pull and transform data from Anna to keep his tables updated. Ideally Bob will always be synchronized with Anna, but some delay would be acceptable. What is the best way to do this? Thanks in advance.
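    One lightweight option, sketched below under stated assumptions (connection strings, table names, and the schedule are placeholders): a scheduled job that reads from Anna, transforms in the query, and bulk-loads into Bob with SqlBulkCopy. Replication or SSIS are the heavier alternatives if the acceptable delay shrinks.

        using System.Data.SqlClient;

        static class AnnaToBob
        {
            public static void Sync(string annaConnStr, string bobConnStr)
            {
                using (var anna = new SqlConnection(annaConnStr))
                using (var bob = new SqlConnection(bobConnStr))
                {
                    anna.Open();
                    bob.Open();

                    // Do the transform in the SELECT (or read from a view on Anna).
                    var cmd = new SqlCommand(
                        "SELECT Id, Name, Amount FROM dbo.SourceTable", anna);

                    using (SqlDataReader reader = cmd.ExecuteReader())
                    using (var bulk = new SqlBulkCopy(bob))
                    {
                        bulk.DestinationTableName = "dbo.StagingTable";
                        bulk.WriteToServer(reader); // streams rows into Bob
                    }
                }
            }
        }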

    Read the article

  • Should I create separate Work and Personal Github accounts?

    - by Almost Surely
    I'm fairly new to programming, and I've been working on many personal projects, which I'm concerned can come across as silly/unprofessional. The kind of projects I have are a Reddit image downloader and a tool for GMs to use in roleplaying games. I want to start building up a GitHub presence for projects in my chosen field of data analytics, but I'm not sure how to organize projects on my GitHub account. Should I create a "professional" GitHub account, mainly containing different analytical scripts, and a separate "personal" account for fun little projects of mine? Or am I just overthinking this, and should I just maintain one account?

    Read the article

  • Best practices for team workflow with RoR/Github for designer + coder?

    - by Josh
    My friend and I have started to try to collaborate on some projects. For background, I come from PHP/WordPress/Drupal, but recently I've become more experienced with the RoR framework, while he is more experienced as an HTML/CSS designer, working with PHP and WordPress. We're both relatively new to RoR, and we're trying to figure out our collaborative workflow, but we have no idea where to start. For instance, we were trying to figure out how he could make some minor edits to the CSS file without having to do a full RoR deploy on his box. We still haven't figured out a solution, so I think it's best if we set up some sort of workflow based on best practices. I was wondering if you guys have any insight or links to articles/case studies regarding this topic?

    Read the article

  • CI Deployment Of Azure Web Roles Using TeamCity

    - by srkirkland
    After recently migrating an important new website to use Windows Azure "Web Roles," I wanted an easier way to deploy new versions to the Azure staging environment, as well as a reliable process to roll back deployments to a certain "known good" source control commit checkpoint. By configuring our JetBrains TeamCity CI server to utilize the Windows Azure PowerShell cmdlets to create new automated deployments, I'll show you how to take control of your Azure publish process.

    Step 0: Configuring your Azure Project in Visual Studio

    Before we can start looking at automating the deployment, we should make sure manual deployments from Visual Studio are working properly. Detailed information for setting up deployments can be found at http://msdn.microsoft.com/en-us/library/windowsazure/ff683672.aspx#PublishAzure or by doing some quick Googling, but the basics are as follows:

    1. Install the prerequisite Windows Azure SDK
    2. Create an Azure project by right-clicking on your web project and choosing "Add Windows Azure Cloud Service Project" (or by manually adding that project type)
    3. Configure your Role and Service Configuration/Definition as desired
    4. Right-click on your Azure project and choose "Publish," create a publish profile, and push to your web role

    You don't actually have to do step #4 and create a publish profile, but it's a good exercise to make sure everything is working properly. Once your Windows Azure project is set up correctly, we are ready to move on to understanding the Azure publish process.

    Understanding the Azure Publish Process

    The actual Windows Azure project is fairly simple at its core - it builds your dependent roles (in our case, a web role) against a specific service and build configuration, and outputs two files:

    - ServiceConfiguration.Cloud.cscfg: This is just the file containing your package configuration info, for example Instance Count, OsFamily, ConnectionString and other Setting information.
    - ProjectName.Azure.cspkg: This is the package file that contains the guts of your deployment, including all deployable files.

    When you package your Azure project, these two files will be created within the directory ./[ProjectName].Azure/bin/[ConfigName]/app.publish/. If you want to build your Azure project from the command line, it's as simple as calling MSBuild on the "Publish" target:

        msbuild.exe /target:Publish

    Windows Azure PowerShell Cmdlets

    The last pieces of the puzzle that make CI automation possible are the Azure PowerShell cmdlets (http://msdn.microsoft.com/en-us/library/windowsazure/jj156055.aspx). These cmdlets are what will let us create deployments without Visual Studio or other user intervention.

    Preparing TeamCity for Azure Deployments

    Now we are ready to get our TeamCity server set up so it can build and deploy Windows Azure projects, which we now know requires the Azure SDK and the Windows Azure PowerShell cmdlets. Installing the Azure SDK is easy enough: just go to https://www.windowsazure.com/en-us/develop/net/ and click "Install."

    Once this SDK is installed, I recommend running a test build to make sure your project is building correctly. You'll want to set up your build step using MSBuild with the "Publish" target against your solution file. Assuming the build was successful, you will now have the two *.cspkg and *.cscfg files within your build directory. If the build was red (failed), take a look at the build logs and keep an eye out for "unsupported project type" or other build errors, which will need to be addressed before the CI deployment can be completed.

    With a successful build we are now ready to install and configure the Windows Azure PowerShell cmdlets:

    1. Follow the instructions at http://msdn.microsoft.com/en-us/library/windowsazure/jj554332 to install the cmdlets and configure PowerShell
    2. After installing the cmdlets, you'll need to get your Azure subscription info using the Get-AzurePublishSettingsFile command
    3. Store the resulting *.publishsettings file somewhere you can get to easily, like C:\TeamCity, because you will need to reference it later from your deploy script

    Scripting the CI Deploy Process

    Now that the cmdlets are installed on our TeamCity server, we are ready to script the actual deployment using a TeamCity "PowerShell" build runner. Before we look at any code, here's a breakdown of our deployment algorithm:

    1. Set up your variables, including the location of the *.cspkg and *.cscfg files produced in the earlier MSBuild step (remember, the folder is something like [ProjectName].Azure/bin/[ConfigName]/app.publish/)
    2. Import the Windows Azure PowerShell cmdlets
    3. Import and set your Azure subscription information (this is basically your authentication/authorization step, so protect your settings file)
    4. Look for a current deployment; if you find one, upgrade it, else create a new deployment

    Pretty simple and straightforward. Now let's look at the code (also available as a gist here: https://gist.github.com/3694398):

        $subscription = "[Your Subscription Name]"
        $service = "[Your Azure Service Name]"
        $slot = "staging" #staging or production
        $package = "[ProjectName]\bin\[BuildConfigName]\app.publish\[ProjectName].cspkg"
        $configuration = "[ProjectName]\bin\[BuildConfigName]\app.publish\ServiceConfiguration.Cloud.cscfg"
        $timeStampFormat = "g"
        $deploymentLabel = "ContinuousDeploy to $service v%build.number%"

        Write-Output "Running Azure Imports"
        Import-Module "C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\Azure\*.psd1"
        Import-AzurePublishSettingsFile "C:\TeamCity\[PSFileName].publishsettings"
        Set-AzureSubscription -CurrentStorageAccount $service -SubscriptionName $subscription

        function Publish() {
            $deployment = Get-AzureDeployment -ServiceName $service -Slot $slot -ErrorVariable a -ErrorAction silentlycontinue

            if ($a[0] -ne $null) {
                Write-Output "$(Get-Date -f $timeStampFormat) - No deployment is detected. Creating a new deployment."
            }

            if ($deployment.Name -ne $null) {
                #Update deployment inplace (usually faster, cheaper, won't destroy VIP)
                Write-Output "$(Get-Date -f $timeStampFormat) - Deployment exists in $service. Upgrading deployment."
                UpgradeDeployment
            } else {
                CreateNewDeployment
            }
        }

        function CreateNewDeployment() {
            write-progress -id 3 -activity "Creating New Deployment" -Status "In progress"
            Write-Output "$(Get-Date -f $timeStampFormat) - Creating New Deployment: In progress"

            $opstat = New-AzureDeployment -Slot $slot -Package $package -Configuration $configuration -label $deploymentLabel -ServiceName $service

            $completeDeployment = Get-AzureDeployment -ServiceName $service -Slot $slot
            $completeDeploymentID = $completeDeployment.deploymentid

            write-progress -id 3 -activity "Creating New Deployment" -completed -Status "Complete"
            Write-Output "$(Get-Date -f $timeStampFormat) - Creating New Deployment: Complete, Deployment ID: $completeDeploymentID"
        }

        function UpgradeDeployment() {
            write-progress -id 3 -activity "Upgrading Deployment" -Status "In progress"
            Write-Output "$(Get-Date -f $timeStampFormat) - Upgrading Deployment: In progress"

            # perform Update-Deployment
            $setdeployment = Set-AzureDeployment -Upgrade -Slot $slot -Package $package -Configuration $configuration -label $deploymentLabel -ServiceName $service -Force

            $completeDeployment = Get-AzureDeployment -ServiceName $service -Slot $slot
            $completeDeploymentID = $completeDeployment.deploymentid

            write-progress -id 3 -activity "Upgrading Deployment" -completed -Status "Complete"
            Write-Output "$(Get-Date -f $timeStampFormat) - Upgrading Deployment: Complete, Deployment ID: $completeDeploymentID"
        }

        Write-Output "Create Azure Deployment"
        Publish

    Creating the TeamCity Build Step

    The only thing left is to create a second build step, after your MSBuild "Publish" step, with the build runner type "PowerShell." Then set your script to "Source code," set the script execution mode to "Put script into PowerShell stdin with -Command arguments," and copy/paste in the above script (replacing the placeholder sections with your values).

    Wrap Up

    After combining the MSBuild /target:Publish step (which creates the necessary Windows Azure *.cspkg and *.cscfg files) and a PowerShell script step which utilizes the Azure PowerShell cmdlets, we have a fully deployable build configuration in TeamCity. You can configure this step to run whenever you'd like using build triggers - for example, you could even deploy whenever a new master branch commit comes in and passes all required tests. In the script I've hardcoded that every deployment goes to the staging environment on Azure, but you could deploy straight to production if you want to, or even set up a deployment configuration variable and set it as desired.

    Once your TeamCity build configuration is complete, whenever you click the "Run" button, all of your code will be compiled, published, and deployed to Windows Azure! One additional enormous benefit of automating the process this way is that you can easily deploy any specific source control changeset by clicking the little ellipsis button next to "Run." This will bring up a dialog where you can select the last change to use for your deployment. Since Azure web role deployments don't have any rollback functionality, this is a critical feature.

    Enjoy!

    Read the article
