Search Results

Search found 18154 results on 727 pages for 'track changes'.


  • A non-ugly way to persist dialog contents across orientation changes?

    - by alex
    So I have a managed AlertDialog with a number of EditTexts and separate portrait and landscape layouts. The user opens the dialog in portrait mode, enters some text, and then all of a sudden decides to pop out the keyboard. Now I need to remove the dialog and recreate it for the changed layout, persisting the text that was entered. What I can think of is getting references to the EditTexts in onPrepareDialog(..), then getting the actual text in onConfigurationChanged(..), then removing the dialog, then showing it again, then populating it. Rather ugly, huh? Are there any better options?
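
    One less ugly option (a sketch, not a confirmed answer; the layout and view ids R.layout.main, R.layout.form_dialog and R.id.edit_field are hypothetical) is to let the activity capture the dialog's text in onSaveInstanceState(), which runs before the orientation change tears the activity down, and push it back into the recreated dialog in onPrepareDialog():

        import android.app.Activity;
        import android.app.AlertDialog;
        import android.app.Dialog;
        import android.os.Bundle;
        import android.widget.EditText;

        public class FormActivity extends Activity {
            private static final String KEY_TEXT = "dialog_text";
            private Dialog formDialog;   // set when the managed dialog is created
            private String pendingText;  // text carried across the recreation

            @Override
            protected void onCreate(Bundle state) {
                super.onCreate(state);
                setContentView(R.layout.main);
                if (state != null) pendingText = state.getString(KEY_TEXT);
            }

            @Override
            protected Dialog onCreateDialog(int id) {
                formDialog = new AlertDialog.Builder(this)
                        .setView(getLayoutInflater().inflate(R.layout.form_dialog, null))
                        .create();
                return formDialog;
            }

            @Override
            protected void onPrepareDialog(int id, Dialog dialog) {
                // Repopulate the EditText with whatever survived the rotation.
                if (pendingText != null) {
                    ((EditText) dialog.findViewById(R.id.edit_field)).setText(pendingText);
                    pendingText = null;
                }
            }

            @Override
            protected void onSaveInstanceState(Bundle outState) {
                super.onSaveInstanceState(outState);
                // Stash the dialog contents before the configuration change.
                if (formDialog != null && formDialog.isShowing()) {
                    EditText field = (EditText) formDialog.findViewById(R.id.edit_field);
                    outState.putString(KEY_TEXT, field.getText().toString());
                }
            }
        }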

    Read the article

  • Dynamically created textboxes and changes plus jQuery in ASP.NET?

    - by gazeebo
    Hi all, I was wondering how to read off a value from a textbox that resides in a partial view and output the value into a textbox within the initial window. Here's my code...

        <script type="text/javascript">
        $(document).ready(function (e) {
            // Calculate the sum when the document has been loaded.
            var total = 0;
            $("#fieldValues :input.fieldKronor").each(function (e) {
                total += Number($(this).val());
            });
            // Set the value to the correspondent textbox
            $("#fieldSummation").text(total);
            // Re-calculate on change
            $("#fieldValues :input.fieldKronor").change(function (e) {
                var total = 0;
                $("#fieldValues :input.fieldKronor").each(function (e) {
                    total += Number($(this).val());
                });
                $("#fieldSummation").text(total);
            });
        });
        </script>

    Here's the table where the info is...

        <table id="fieldValues" style="width: 60%; margin-bottom: 2em">
          <thead>
            <tr>
              <th>Rubrik, t.ex. teknik*</th>
              <th>Kronor (ange endast siffror)*</th>
            </tr>
          </thead>
          <asp:Panel ID="pnlStaffRows" runat="server"></asp:Panel>
          <tfoot>
            <tr>
              <th></th>
              <th>Total kostnad</th>
            </tr>
            <tr>
              <td></td>
              <td><input type="text" value="" class="fieldSummation" style="width:120px" /></td>
            </tr>
          </tfoot>
        </table>

    And here's the partial view...

        <tr>
          <td class="greyboxchildsocialsecuritynumberheading4" style="padding-bottom:1em">
            <asp:TextBox ID="txtRubrikBox" ToolTip="Rubrik" runat="server" Width="120"></asp:TextBox>
          </td>
          <td class="greyboxchildnameheading3" style="padding-bottom:1em">
            <asp:TextBox ID="txtKronorBox" class="fieldKronor" ToolTip="Kronor" runat="server" Width="120"></asp:TextBox>
          </td>
        </tr>

    Read the article

  • Which kind of changes can't I do with lightweight migration in Core Data?

    - by dontWatchMyProfile
    I recently tried a lot of different stuff with lightweight migration. These all work:

    1) Rename attributes (with a renaming identifier specified)
    2) Add attributes
    3) Add a new entity + a new attribute + an inverse relationship to an already existing entity
    4) Remove an existing entity + relationships to that entity

    It almost looks like just about anything can be handled with LM. Did I miss something? In which cases will I get into trouble and need a more complex approach?

    Read the article

  • Mac OS X: getting file system changes of the last minute?

    - by Patrick
    From older times (Mac OS 10.4) I had found this command line on the web somewhere:

        mdfind '(kMDItemFSContentChangeDate >= $time.now(-60)) && (kMDItemFSContentChangeDate <= $time.now)'

    It gave me a list of files that were changed in the last minute. This no longer works on Mac OS 10.6. Can anybody explain why it doesn't work, or even suggest a working command line?

    Read the article

  • ListAdapter --> How to apply convert view changes to a specific view item only?

    - by user1847544
    I am trying to have the lower part of a list item slide down, by hiding and unhiding a linear layout in list_item. The problem is that the view seems to get reused by the ListAdapter, so the change does not affect only the view I intended to apply it to. Instead it shows up wherever the view is reused. How can I restrict the drop down to just the view on which I requested it? By drop down I mean unhiding the linear layout.
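
    Because ListView recycles row views, any visibility change has to be explicitly re-applied (or reset) on every getView() call, driven by per-position state rather than by the recycled view itself. A minimal sketch of that idea; the layout and view ids (R.layout.list_item, R.id.title, R.id.lower_layout) are assumptions, not from the question:

        import java.util.HashSet;
        import java.util.List;
        import java.util.Set;
        import android.content.Context;
        import android.view.View;
        import android.view.ViewGroup;
        import android.widget.ArrayAdapter;

        public class ExpandableRowAdapter extends ArrayAdapter<String> {
            // Track which positions are expanded, independent of recycled views.
            private final Set<Integer> expanded = new HashSet<Integer>();

            public ExpandableRowAdapter(Context ctx, List<String> items) {
                super(ctx, R.layout.list_item, R.id.title, items);
            }

            public void toggle(int position) {
                if (!expanded.remove(position)) expanded.add(position);
                notifyDataSetChanged();
            }

            @Override
            public View getView(int position, View convertView, ViewGroup parent) {
                View row = super.getView(position, convertView, parent);
                // Set visibility explicitly in both directions, so a recycled,
                // previously expanded row is collapsed again at other positions.
                View lower = row.findViewById(R.id.lower_layout);
                lower.setVisibility(expanded.contains(position) ? View.VISIBLE : View.GONE);
                return row;
            }
        }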

    Read the article

  • Is there a tool that automatically saves incremental changes to files while coding?

    - by Bob.
    One of my favorite features of Google docs is the fact that it's constantly automatically saving versions of my document as I work. This means that even if I forget to save at a certain point before making a critical change there's a good chance that a save point has been created automatically. At the very least, I can return the document to a state prior to the mistaken change and continue working from that point. Is there a tool with an equivalent feature for a Ruby coder running on Mac OS (or UNIX)? For example, a tool that will do an automatic Git check-in every couple of minutes to my local repository for the files I'm working on. Maybe I'm paranoid, but this small bit of insurance could put my mind at ease during my day-to-day work.
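
    For the "automatic Git check-in every couple of minutes" idea, a tiny background committer is easy to roll by hand. The sketch below uses the JGit library and a fixed two-minute interval; both choices, and the assumption that the watched directory is already a Git repository, are mine rather than the asker's:

        import java.io.File;
        import org.eclipse.jgit.api.Git;

        // Commits the working tree to the local repository every two minutes.
        public class AutoCommit {
            public static void main(String[] args) throws Exception {
                Git git = Git.open(new File(args[0])); // path to the repo
                while (true) {
                    git.add().addFilepattern(".").call();
                    git.commit().setMessage("autosave " + new java.util.Date()).call();
                    Thread.sleep(2 * 60 * 1000);
                }
            }
        }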

    Read the article

  • How do you deal with breaking changes in a Rails migration?

    - by Adam Lassek
    Let's say I'm starting out with this model:

        class Location < ActiveRecord::Base
          attr_accessible :company_name, :location_name
        end

    Now I want to refactor one of the values into an associated model.

        class CreateCompanies < ActiveRecord::Migration
          def self.up
            create_table :companies do |t|
              t.string :name, :null => false
              t.timestamps
            end
            add_column :locations, :company_id, :integer, :null => false
          end

          def self.down
            drop_table :companies
            remove_column :locations, :company_id
          end
        end

        class Location < ActiveRecord::Base
          attr_accessible :location_name
          belongs_to :company
        end

        class Company < ActiveRecord::Base
          has_many :locations
        end

    This all works fine during development, since I'm doing everything a step at a time; but if I try deploying this to my staging environment, I run into trouble. The problem is that since my code has already changed to reflect the migration, it causes the environment to crash when it attempts to run the migration. Has anyone else dealt with this problem? Am I resigned to splitting my deployment up into multiple steps?

    Read the article

  • SQL SERVER – Finding Out What Changed in a Deleted Database – Notes from the Field #041

    - by Pinal Dave
    [Note from Pinal]: This is the 41st episode of the Notes from the Field series. The real world is full of challenges. When we are reading theory or a book, we sometimes do not realize how the real world works, and that is why we have this series, notes from the field, which is extremely popular with developers and DBAs. Let us talk about the interesting problem of how to figure out what has changed in a deleted database. Well, you may think I am just throwing words around, but in reality this kind of problem is what makes a DBA's life interesting, and in this blog post we have an amazing story from Brian Kelley on that very subject. In this episode of the Notes from the Field series, database expert Brian Kelley explains how to find out what has changed in a deleted database. Read the experience of Brian in his own words.

    Sometimes, one of the hardest questions to answer is, "What changed?" A similar question is, "Did anything change other than what we expected to change?"

    The First Place to Check – Schema Changes History Report: Pinal has recently written on the Schema Changes History report and its requirement for the Default Trace to be enabled. This is always the first place I look when I am trying to answer these questions. There are a couple of obvious limitations with the Schema Changes History report. First, while it reports what changed, when it changed, and who changed it, other than the base DDL operation (CREATE, ALTER, DELETE), it does not present what the changes actually were. This is not something covered by the default trace. Second, the default trace has a fixed size. When it hits that size, the changes begin to overwrite. As a result, if you wait too long, especially on a busy database server, you may find your changes rolled off.

    But the Database Has Been Deleted! Pinal cited another issue, and that's the inability to run the Schema Changes History report if the database has been dropped. Thankfully, all is not lost. One thing to remember is that the Schema Changes History report is ultimately driven by the Default Trace. As you may have guessed, it's a trace, like any other database trace. And the Default Trace does write to disk. The trace files are written to the defined LOG directory for that SQL Server instance and have a prefix of log_. Therefore, you can read the trace files like any other. Tip: Copy the files to a working directory; otherwise, you may occasionally receive a file-in-use error. With the Default Trace files, if you ask the question early enough, you can see the information for a deleted database just the same as any other database.

    Testing with a Deleted Database: Here's a short script that will create a database, create a schema, create an object, and then drop the database. Without the database, you can't run a standard Schema Changes History report.

        CREATE DATABASE DeleteMe;
        GO
        USE DeleteMe;
        GO
        CREATE SCHEMA Test AUTHORIZATION dbo;
        GO
        CREATE TABLE Test.Foo (FooID INT);
        GO
        USE MASTER;
        GO
        DROP DATABASE DeleteMe;
        GO

    This sets up the perfect situation where we can't retrieve the information using the Schema Changes History report but where it's still available.

    Finding the Information: I've sorted the columns so I can see the Event Subclass, the Start Time, the Database Name, the Object Name, and the Object Type at the front, but otherwise, I'm just looking at the trace files using SQL Profiler. As you can see, the information is definitely there. Therefore, even in the case of a dropped/deleted database, you can still determine who did what and when. You can even determine who dropped the database (loginame is captured). The key is to get to the default trace files in a timely manner in order to extract the information. If you want to get started with performance tuning and database security with the help of experts, read more over at Fix Your SQL Server.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • Upgrading to Code Based Migrations EF 4.3.1 with Connector/Net 6.6

    - by GABMARTINEZ
    Entity Framework 4.3.1 includes a new feature called Code First Migrations. We are adding support for this feature in our upcoming 6.6 release of Connector/Net. In this walk-through we'll see the workflow of code-based migrations when you have an existing application and would like to upgrade to EF 4.3.1 and use this approach, so you can keep track of the changes that you make to your database.

    The first thing we need to do is add the new Entity Framework 4.3.1 package to our application. This should be done via the NuGet package manager. You can read more about why EF is not part of the .NET framework here.

    Adding EF 4.3.1 to our existing application

    Inside VS 2010 go to Tools -> Library Package Manager -> Package Manager Console. This will open the PowerShell host window where we can work with all the EF commands. In order to install this library into your existing application you should type:

        Install-Package EntityFramework

    This will make some changes to your application, so let's check them. In your .config file you'll see a <configSections> element, which contains the version of EntityFramework you have, and an <entityFramework> section that was also added. This section is by default configured to use SQL Express, which won't be necessary in this case, so you can comment it out or leave it empty. Also please make sure you're using the Connector/Net 6.6.x version, which is the one that has this support.

    At this point we face one issue: in order to be able to work with migrations we need the __MigrationHistory table, which we don't have yet since our database was created with an older version. This table is used to keep track of the changes in our model, so we need to get it into our existing database.

    Getting a migration-history table into an existing database

    The first thing we need to do to enable migrations in our existing application is to create our configuration class, which will set up the MySqlClient provider as our SQL generator. So we have to add it with the following code:

        using System.Data.Entity.Migrations;  // add this at the top of your .cs file

        // Make sure to use the name of your existing DbContext as the type parameter.
        public class Configuration : DbMigrationsConfiguration<NameOfYourDbContext>
        {
            public Configuration()
            {
                // We'll be applying the migrations manually in this case.
                this.AutomaticMigrationsEnabled = false;
                SetSqlGenerator("MySql.Data.MySqlClient",
                    new MySql.Data.Entity.MySqlMigrationSqlGenerator());
            }
        }

    This sets up the configuration that we'll be using when executing all the migrations for our application. Once we have done this we can build our application to check that everything is fine.

    Creating our initial migration

    Now let's add our initial migration. In the Package Manager Console, execute "add-migration InitialCreate". You can use any other name, but I like to set this as our initial create for future reference. After we run this command, some changes are made in our application:

    - A new Migrations folder is created.
    - A new migration class called InitialCreate is added, which in most cases should have empty Up and Down methods, as long as your database is up to date with your model. Since all your entities already exist, delete any duplicated code that would create an entity which already exists in your database. I find this easier when you don't have any pending updates to apply to your database.

    Now we have our empty migration, which will make no changes to our database and represents how things stand at the beginning of our migrations. Finally, let's create our MigrationHistory table. Optionally you can add SQL code to delete the EdmMetadata table, which is not needed anymore:

        public override void Up()
        {
            // Just make sure that you used a 4.1 or later version.
            Sql("DROP TABLE EdmMetadata");
        }

    From our Package Manager Console let's type:

        Update-Database

    If you would like to see the operations performed by each Update-Database command, you can add the -Verbose flag after Update-Database. This will make two important changes. It will execute the Up method in the initial migration, which makes no changes to the database. And second, and very important, it will create the __MigrationHistory table necessary to keep track of your changes. The next time you make a change to your database, it will compare the current model to the one stored in the Model column of this table.

    Conclusion

    The important point of this walk-through is that we must create our initial migration before we start making any changes to our model. This way we'll be adding the necessary __MigrationHistory table to our existing database, so we can keep our database up to date with all the changes we make in our context model using migrations. Hope you have found this information useful. Please let us know if you have any questions or comments, and please check our forums here, where we keep answering questions for the community. Happy MySQL/Net coding!

    Read the article

  • WPF: adding Style to a slider

    - by user279244
    Hi, I am using a Slider and applying a style to it as follows. However, I am not able to figure out why the style is not being applied: the RepeatButtons are still not visible. Thanks in advance.

        <ResourceDictionary>
          <LinearGradientBrush x:Key="Stroke_Gradient" EndPoint="0.5,1" StartPoint="0.5,0">
            <GradientStop Color="#FF6E6E6E" Offset="0"/>
            <GradientStop Color="#FFFFFFFF" Offset="0.496"/>
            <GradientStop Color="#FF6E6E6E" Offset="1"/>
          </LinearGradientBrush>

          <Style x:Key="ScrollBar_RepeatButtonStyle1" d:IsControlPart="True" TargetType="{x:Type RepeatButton}">
            <Setter Property="Background" Value="#FF6E6E6E"/>
            <Setter Property="BorderBrush" Value="#FFFFFFFF"/>
            <Setter Property="IsTabStop" Value="false"/>
            <Setter Property="Focusable" Value="false"/>
            <Setter Property="Template">
              <Setter.Value>
                <ControlTemplate TargetType="{x:Type RepeatButton}">
                  <Grid>
                    <Rectangle Fill="{TemplateBinding Background}" Stroke="{TemplateBinding BorderBrush}" StrokeThickness="{TemplateBinding BorderThickness}"/>
                  </Grid>
                </ControlTemplate>
              </Setter.Value>
            </Setter>
          </Style>

          <ImageBrush x:Key="zoomBkgrnd" TileMode="None" ImageSource="zoombg.png" Stretch="Uniform"/>

          <Style x:Key="{x:Type Slider}" TargetType="{x:Type Slider}">
            <Setter Property="Background" Value="{StaticResource zoomBkgrnd}"/>
            <Setter Property="BorderBrush" Value="{StaticResource zoomBkgrnd}"/>
            <Setter Property="Template">
              <Setter.Value>
                <ControlTemplate TargetType="{x:Type Slider}">
                  <Grid x:Name="GridRoot">
                    <Grid.RowDefinitions>
                      <RowDefinition Height="Auto"/>
                      <RowDefinition Height="Auto"/>
                      <RowDefinition Height="Auto"/>
                    </Grid.RowDefinitions>
                    <!-- TickBar shows the ticks for Slider -->
                    <TickBar Visibility="Collapsed" x:Name="TopTick" Height="4" SnapsToDevicePixels="True" Placement="Top" Fill="{StaticResource zoomBkgrnd}"/>
                    <Border Grid.Row="1" Margin="0" x:Name="Border" Height="4" Background="{StaticResource zoomBkgrnd}" BorderBrush="{TemplateBinding BorderBrush}" BorderThickness="{TemplateBinding BorderThickness}" CornerRadius="2"/>
                    <!-- The Track lays out the repeat buttons and thumb -->
                    <Track Grid.Row="1" x:Name="PART_Track">
                      <Track.Thumb>
                        <Thumb Width="10" Height="20" />
                      </Track.Thumb>
                      <Track.IncreaseRepeatButton>
                        <RepeatButton Style="{DynamicResource ScrollBar_RepeatButtonStyle1}" Command="Slider.IncreaseLarge"/>
                      </Track.IncreaseRepeatButton>
                      <Track.DecreaseRepeatButton>
                        <RepeatButton Style="{DynamicResource ScrollBar_RepeatButtonStyle1}" Command="Slider.DecreaseLarge"/>
                      </Track.DecreaseRepeatButton>
                    </Track>
                    <TickBar Visibility="Collapsed" Grid.Row="2" x:Name="BottomTick" Height="4" SnapsToDevicePixels="True" Placement="Bottom" Fill="{TemplateBinding Foreground}"/>
                  </Grid>
                  <ControlTemplate.Triggers>
                    <Trigger Property="TickPlacement" Value="TopLeft">
                      <Setter Property="Visibility" Value="Visible" TargetName="TopTick"/>
                    </Trigger>
                    <Trigger Property="TickPlacement" Value="BottomRight">
                      <Setter Property="Visibility" Value="Visible" TargetName="BottomTick"/>
                    </Trigger>
                    <Trigger Property="TickPlacement" Value="Both">
                      <Setter Property="Visibility" Value="Visible" TargetName="TopTick"/>
                      <Setter Property="Visibility" Value="Visible" TargetName="BottomTick"/>
                    </Trigger>
                    <Trigger Property="IsEnabled" Value="false">
                      <Setter Property="Background" Value="{StaticResource zoomBkgrnd}" TargetName="Border"/>
                      <Setter Property="BorderBrush" Value="{StaticResource zoomBkgrnd}" TargetName="Border"/>
                    </Trigger>
                    <!-- Use a rotation to create a vertical Slider from the default horizontal -->
                    <Trigger Property="Orientation" Value="Vertical">
                      <Setter Property="LayoutTransform" TargetName="GridRoot">
                        <Setter.Value>
                          <RotateTransform Angle="-90"/>
                        </Setter.Value>
                      </Setter>
                      <!-- Track rotates itself based on orientation so we need to force it back -->
                      <Setter TargetName="PART_Track" Property="Orientation" Value="Horizontal"/>
                    </Trigger>
                  </ControlTemplate.Triggers>
                </ControlTemplate>
              </Setter.Value>
            </Setter>
          </Style>
        </ResourceDictionary>

    Read the article

  • iPhone App link to iStore for commission

    - by Simon
    Is it possible to link from an iPhone application to the iStore so a user can play a sample of music and then navigate to that track in order to buy it? In a bit more detail: the application lists a number of tracks for a particular artist (a recommendation by the app based on user criteria). The user scrolls down the list and finds a track that they are interested in. They play the 30-second sample (as you would in the iStore) and then, if they like it, they press a link that takes them to the iStore, where they can purchase the track. If they buy the track, then the application gets 5% of the money paid for it. I have looked through the web and found a number of suggestions, but nothing seems to fit the specification above. I would be very grateful if anyone is able to tell me whether this is possible, and for some clues as to how it would be done. Thanks, Simon...

    Read the article

  • TFS 2010 Source Branches Never The Same

    - by Lukasz
    I have my root branch, let's call it Alpha, and one branch that was branched from that root, let's call it Beta. I made some changes in the Beta branch and merged them back to Alpha. In theory Alpha and Beta should now be identical branches, and when I do a diff they are identical. If I attempt to merge Alpha with Beta again without making any changes, the changes I originally merged from Beta to Alpha will merge again from Alpha to Beta. Completing that merge and checking in, the branches are the same. Now I can merge again. I can do this over and over with no end. I was just wondering if anyone has run into this problem before and how it can be fixed. At first I thought it was harmless, but when I make more changes in the Beta branch and merge, the new changes as well as the original changes get merged, overriding changes to those files and making a mess. Thanks!

    Read the article

  • Can I bypass an intermediate object in hibernate

    - by Jherico
    I have top level entities TRACK, MEDIA_GROUP and MEDIA, each with an integer primary key. I also have a join table from TRACK to MEDIA_GROUP which is 1:1, and MEDIA has a FK column into MEDIA_GROUP. I'm trying to find a way in Hibernate to map a collection of Media directly into the Track object, bypassing the creation of a MediaGroup object. Basically I want to turn this:

        TRACK <-> MEDIA_TRACK_MAP <-> MEDIA_GROUP <-> MEDIA

    into this:

        TRACK <-> MEDIA_TRACK_MAP <-> MEDIA

    But the join column between MEDIA_TRACK_MAP and MEDIA isn't the primary key of MEDIA.
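
    One thing that may be worth trying (a sketch under assumed column names, and relying on Hibernate allowing referencedColumnName to point at a non-primary-key column, which is a Hibernate extension beyond the JPA spec and worth verifying against your version) is a @JoinTable whose inverse side joins on MEDIA's group foreign key rather than MEDIA's primary key:

        import java.util.List;
        import javax.persistence.*;

        @Entity
        @Table(name = "TRACK")
        public class Track {
            @Id
            @Column(name = "TRACK_ID")
            private int id;

            // Media reached through the map table, skipping the MediaGroup
            // entity entirely: the inverse join column matches
            // MEDIA.MEDIA_GROUP_ID (a non-PK column) instead of MEDIA's PK.
            @OneToMany
            @JoinTable(name = "MEDIA_TRACK_MAP",
                joinColumns = @JoinColumn(name = "TRACK_ID"),
                inverseJoinColumns = @JoinColumn(name = "MEDIA_GROUP_ID",
                                                 referencedColumnName = "MEDIA_GROUP_ID"))
            private List<Media> media;
        }

        @Entity
        @Table(name = "MEDIA")
        class Media {
            @Id
            @Column(name = "MEDIA_ID")
            private int id;

            @Column(name = "MEDIA_GROUP_ID")
            private int mediaGroupId; // FK into MEDIA_GROUP
        }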

    Read the article

  • User Guide to Dropbox Shared Folders

    - by Matthew Guay
    Dropbox is an incredibly useful tool for keeping all your files synced between your computers and the cloud. Here we're going to look at how you can keep all of your team on the same page with Dropbox shared folders.

    Creating a Shared Folder

    Setting up a shared folder in Dropbox is easy. Add the files you want to share to a folder in Dropbox on your computer, then right-click in the folder, select Dropbox, and then choose Share This Folder. Alternately, log into your Dropbox account online, click the drop-down menu beside the folder you want to share, and click Share this folder. Now, enter the email addresses of the people you want to share the folder with, and optionally enter a message explaining why you're sharing the folder. The people you invite will receive an email inviting them to view and join the shared folder. If they haven't signed up for Dropbox, they can sign up directly; otherwise, they can simply log into their Dropbox account and start adding or editing files. Shared folders have a slightly different icon in your Dropbox: a shared folder's icon shows two people, while a folder that is not shared shows previews of its contents.

    See Your Shared Folder's History

    Whenever your collaborators add or change files in your shared folders, you will see a tooltip notification telling you what changed. You can also view the changes online. Log into your Dropbox account in your browser and select the Events tab. This shows all changes to your Dropbox, but you can view only the changes in your shared folder by selecting its name in the left sidebar. Now you can see all recent changes to your folder, and can also see who added or removed each file. At the bottom of the page, you can even add a comment that all the collaborators will see. If someone deleted a file you still need, you can restore it by clicking its link in this online history. Or, you can view any deleted files by right-clicking in your Dropbox folder in Explorer: select Dropbox, and then click Show Deleted Files.

    Get Notified When a Change is Made

    You're not always in front of your computer; you've got a life beyond your projects, after all (at least hopefully). If you really want to stay connected to what's happening with your project, though, you can easily do that no matter where you are. Your shared Dropbox folder's history page offers an RSS feed of all changes to the folder. Click the Subscribe to this feed hyperlink, then, in the popup that opens, click "Copy to clipboard" so you can use this RSS feed. You can subscribe to RSS feeds through many web browsers, email clients, dedicated feed readers, and more. In Firefox, Internet Explorer 7/8, or Opera, you can paste the feed address into your address bar and subscribe to the feed directly in your browser. However, subscribing to the feed in a desktop application won't help you much when you're away from your computer. One great option is to subscribe in the popular Google Reader; then you can check your feed from any browser, on any computer or mobile device. To add your Dropbox feed to Google Reader, log into Google Reader (link below), click Add a subscription on the top left, paste your RSS feed from Dropbox, and click Add. Now you can see any changes to files or folders in Google Reader. You can even add your feed to your iGoogle homepage: click the Add it Now button on the right of the front page of Google Reader to add your feeds to iGoogle.

    Now you can see updates on your files from your homepage. If you're using a different computer, just log in to your Google account to see what's happening. You can also access your Google Reader feeds from many programs and apps for most major smartphones, including iPhone, Windows Phone, and BlackBerry.

    Receive a Tweet or Text When Changes are Made

    If you're a hyper-connected individual, chances are you send and receive tweets on the go. If so, this might be the best way for you to get notified when changes are made to your Dropbox shared folder. To do this, first create a new Twitter account to publish your changes through. If you don't want the whole world to see your updates, click Settings and set your new Twitter account to Private. Once the new account is created, follow it with your normal Twitter account so you'll see updates. Now, let's publish our Dropbox RSS feed to Twitter. Create an account with Twitterfeed (link below). Once your account is set up, add your feed to it: name your feed, and enter your feed address from Dropbox. Click Advanced Settings to make your feed work just like you want. In Advanced Settings, change the frequency to "Every 30 mins" to make sure you're updated on changes as quickly as possible. You can also change other settings if you like. Click "Continue to Step 2", and then click Twitter under the available services to add your account. Make sure you're signed into your new Twitter account, then click Authenticate Twitter and allow the application. Now, finally, click Create Service. Whenever a change is made, you will receive a tweet via your new Twitter account. And since you can receive tweets via text message or many mobile applications, you'll never be very far away from your Dropbox changes!

    Conclusion

    Dropbox shared folders are a great way to keep your whole team working together on the same files in a project. And with these handy tricks, you can keep up with your shared files wherever you are! There are a lot of cool things you can do with Dropbox; make sure to check out our posts on adding Dropbox to the Windows 7 Start menu, accessing Dropbox files from Chrome, and syncing your Pidgin profile across multiple PCs.

    Links

    Signup or access your Dropbox account
    Google Reader
    Tweet your feed with Twitterfeed

    Read the article

  • Upload to PPA succeeded but packages don't appear

    - by lorin
    I'm trying to upload packages to my PPA for the first time. I want to use the PPA for customized versions of the OpenStack Compute (nova) project, so I tried to do a test by uploading packages corresponding to the bexar release of this project (lp:nova/bexar), with a new version number and changelog entry. I signed the source packages using my OpenPGP key, which has been uploaded to the Ubuntu keyserver:

        $ dch -v 2011.1-0ubuntu2-isi1 -D lucid "ISI bexar build #1"
        $ dpkg-buildpackage -s -rfakeroot -tc -D -k4C8A14AB

    When I tried to upload the files to the repository, it seemed to work (real email obscured):

        $ dput ppa:lorinh/ppa nova_2011.2~bzr663-1isi1_source.changes
        Checking signature on .changes
        gpg: Signature made Fri 11 Feb 2011 03:52:50 PM EST using RSA key ID 4C8A14AB
        gpg: Good signature from "Lorin Hochstein <lorin@...>"
        Good signature on /home/lorin/packaging/nova_2011.2~bzr663-1isi1_source.changes.
        Checking signature on .dsc
        gpg: Signature made Fri 11 Feb 2011 03:52:44 PM EST using RSA key ID 4C8A14AB
        gpg: Good signature from "Lorin Hochstein <lorin@...>"
        Good signature on /home/lorin/packaging/nova_2011.2~bzr663-1isi1.dsc.
        Uploading to ppa (via ftp to ppa.launchpad.net):
        Uploading nova_2011.2~bzr663-1isi1.dsc: done.
        Uploading nova_2011.2~bzr663-1isi1.tar.gz: done.
        Uploading nova_2011.2~bzr663-1isi1_source.changes: done.

    However, the packages aren't listed on my PPA page. If I try to upload again, I get the error:

        $ dput ppa:lorinh/ppa nova_2011.2~bzr663-1isi1_source.changes
        Package has already been uploaded to ppa on ppa.launchpad.net
        Nothing more to do for nova_2011.2~bzr663-1isi1_source.changes

    Am I supposed to do something next? How do I track down what went wrong? As of this writing, it's been a day and a half since I did the upload.

    Read the article

  • Algorithm for tracking progress of controller method running in background

    - by SilentAssassin
    I am using the CodeIgniter framework for PHP on the Windows platform. My problem is that I am trying to track the progress of a controller method running in the background. The controller extracts data from the database (MySQL), does some processing, and then stores the results back in the database. The complete process described above can be considered a single task. A new task can be assigned while another task is running; the newly assigned task will be added to a queue. So if I can track the progress of the controller, I can show a status for each of these tasks: "Pending" for tasks in the queue, "In Progress" for tasks running, and "Done" for tasks that are completed.

    Main issue: the first thing I need to find is an algorithm to track how much of the controller method's execution has completed. For instance, this PHP script tracks the progress of an array being counted. There, both the current state and the state after total execution are known, so it is possible to track progress. But I am not able to devise anything analogous to it in my case. Maybe what I am trying to achieve is not programmatically possible. If it's not possible, then suggest a workaround or a completely new approach. If some details are missing, you can point them out. Sorry for my ignorance, this is my first post here. I welcome you to point out my mistakes.

    EDIT:

    Database outline: The URL(s) and keyword(s) are first entered by the user and stored in database tables called link_master and keyword_master respectively. Then keywords are extracted from all the links present in link_master, compared with the keywords entered by the user, and their frequency is calculated, which is the final result. The results are stored in another table called link_result. Next, sub-links are extracted from the domain links and stored in a table called sub_link_master. Again, keywords are extracted from these sub-links and the corresponding results are stored in a table called sub_link_result. The number of records cannot be defined beforehand, as the number of links on any web page can differ. Only the cardinality of the link_result table can be known in advance, which equals the number of keywords multiplied by the number of URLs. I insert multiple records at a time using this resource.

    Controller outline: The controller extracts keywords from a web page and also extracts keywords from all the links present on that page, in a method called crawlLink. I used Rolling Curl to extract keywords and web page content; it has a callback function which I used for extracting keywords, generating results, and extracting valid sub-links. There is an insertResult method which stores results for links and sub-links in the respective tables. Yes, the processing depends on the number of records: the more records there are, the more time it takes to execute. Consider this scenario:

        Number of domain links = 1
        Number of keywords = 3
        Number of domain link results generated = 3 (3 x 1, as described above)
        Number of sub-links generated = 41
        Number of sub-link results = 117 (41 x 3 = 123, but some links are not valid or searchable)
        Approximate time for the above process to complete = 55 seconds

    The above result is for a single link. I want to track the progress of the above results getting stored in the database. When all results are stored, the task is complete; while results are being stored, the task is in progress. I am not clear on how I can track this progress.
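
    One workable pattern (an assumption on my part, not from the question): since the expected row count for link_result is known up front (keywords x URLs), keep a task row that records the expected total and bump a processed counter after each batch insert; the percentage and the Pending/In Progress/Done status then fall out of that one row. The question is PHP/CodeIgniter, but the idea is language-agnostic; here is a sketch in Java/JDBC with a hypothetical schema:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        // Hypothetical schema:
        //   CREATE TABLE task (
        //     id          INT PRIMARY KEY,
        //     status      VARCHAR(16),  -- PENDING / IN_PROGRESS / DONE
        //     links_total INT,          -- known when the task is queued
        //     links_done  INT           -- incremented as each link completes
        //   );
        public class TaskProgress {
            private final Connection conn;

            public TaskProgress(Connection conn) { this.conn = conn; }

            // Call once after all results for one domain link are stored.
            public void markLinkDone(int taskId) throws SQLException {
                PreparedStatement inc = conn.prepareStatement(
                    "UPDATE task SET links_done = links_done + 1, " +
                    "status = 'IN_PROGRESS' WHERE id = ?");
                inc.setInt(1, taskId);
                inc.executeUpdate();
                PreparedStatement done = conn.prepareStatement(
                    "UPDATE task SET status = 'DONE' " +
                    "WHERE id = ? AND links_done >= links_total");
                done.setInt(1, taskId);
                done.executeUpdate();
            }

            // Poll from the UI: fraction of links fully processed.
            public double progress(int taskId) throws SQLException {
                PreparedStatement ps = conn.prepareStatement(
                    "SELECT links_done, links_total FROM task WHERE id = ?");
                ps.setInt(1, taskId);
                ResultSet rs = ps.executeQuery();
                return rs.next() ? (double) rs.getInt(1) / rs.getInt(2) : 0.0;
            }
        }

    Since the sub-link work is unbounded up front, it is simplest to count a link as "done" only after its sub-link results are stored too; the progress indicator then moves in link-sized steps rather than pretending to finer precision.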

    Read the article

  • How Mature is Your Database Change Management Process?

    - by Ben Rees
    How do you get your database schema changes live, onto your production system? As your team of developers and DBAs work on the changes to the database that support your business-critical applications, how do these updates wend their way from dev environments, possibly to QA, hopefully through pre-production, and eventually to production in a controlled, reliable and repeatable way? In this article, I describe a model we use to try and understand the different stages that customers go through as their database change management processes mature, from the very basic and manual through to advanced continuous delivery practices. I also provide a simple chart that will help you determine "How mature is our database change management process?"

    This process of managing changes to the database, which all of us who have worked in application/database development have had to deal with in one form or another, is sometimes known as Database Change Management (even if we've never used the term ourselves). And it's a difficult process, often painfully so. Some developers take the approach of "I've no idea how my changes get live – I just write the stored procedures and add columns to the tables. It's someone else's problem to get this stuff live. I think we've got a DBA somewhere who deals with it – I don't know, I've never met him/her". I know I used to work that way. I worked that way because I assumed that making the updates to production was a trivial task – how hard can it be? Pause the application for half an hour in the middle of the night, copy over the changes to the app and the database, and switch it back on again? Voila! But somehow it never seemed that easy. And it certainly was never that easy for database changes. Why? Because you can't just overwrite the old database with the new version. Databases have a state – more specifically 4Tb of critical data built up over the last 12 years of running your business, and if your quick hotfix happened to accidentally delete that 4Tb of data, then you're "looking for a new role" pretty quickly after the failed release.

    There are a lot of other reasons why a managed database change management process is important for organisations, besides job security, not least:

    - Frequency of releases. Many business managers are feeling the pressure to get functionality out to their users sooner, quicker and more reliably. The new book (which I highly recommend) Lean Enterprise by Jez Humble, Barry O'Reilly and Joanne Molesky provides a great discussion on how many enterprises are having to move towards a leaner, more frequent release cycle to maintain their competitive advantage. It's no longer acceptable to release once per year, leaving your customers waiting all year for changes they desperately need (and expect).

    - Auditing and compliance. SOX, HIPAA and other compliance frameworks have demanded that companies implement proper processes for managing changes to their databases, whether managing schema changes, making sure that the data itself is being looked after correctly, or other mechanisms that provide an audit trail of changes.

    We've found at Red Gate that we have a very wide range of customers using every possible form of database change management imaginable. Everything from "Nothing – I just fix the schema on production from my laptop when things go wrong, and write it down in my notebook" to "A full Continuous Delivery process – any change made by a dev gets checked in and recorded, fully tested (including performance tests) before a (tested) release is made available to our Release Management system, ready for live deployment!". And everything in between, of course. Because of the vast number of customers using so many different approaches, we found ourselves struggling to keep on top of what everyone was doing and to identify patterns in customers' behavior. This matters to us, because we want to try and fit the products we have to different needs; different products are relevant to different customers, and we waste everyone's time (most notably, our customers') if we're suggesting products that aren't appropriate for them. If someone visited a sports store looking to embark on a new fitness program, and the store assistant suggested the latest $10,000 multi-gym, complete with multiple weights mechanisms, dumb-bells, pull-up bars and so on, then he's likely to lose that customer. All he needed was a pair of running shoes!

    To solve this issue, in an attempt to simplify how we understand our customers and our offerings, we built a model. This is an attempt at classifying our customers into some sort of model, or "Customer Maturity Framework" as we rather grandly term it, which simplifies our understanding of what our customers are doing. The great statistician George Box (amongst other things, the "Box" in the Box-Jenkins time series model) gave us the famous quote: "Essentially all models are wrong, but some are useful". We've taken this quote to heart: we know it's a gross over-simplification of the real world of how users work with complex legacy and new database developments, and almost nobody precisely fits into one of our categories. But we hope it's useful and interesting.

    There are actually a number of similar models that exist for more general application delivery. We've found these from ThoughtWorks/Forrester, from InfoQ and others, and initially we tried just taking these models and replacing the word "application" with "database". However, we hit a problem. From talking to our customers we know that users are far less far down the road of mature database change management than they are for application development. As a simple example, no application developer who wants to keep his/her job would develop an application for an organisation without source controlling that code. Sure, he/she might not be using an advanced Gitflow branching methodology, but they'll certainly be making sure their code gets managed in a repo somewhere, with all the benefits of history, auditing and so on. But this certainly isn't the case (yet) for the database: a very large segment of the people we speak to have no source control set up for their databases whatsoever, even at the most basic level (for example, keeping change scripts in a source control system somewhere). By the way, if this is you, Red Gate has a great whitepaper here on the barriers people face getting a source control process implemented at their organisations.

    This difference in maturity is the same as you move into areas such as continuous integration (common amongst app developers, relatively rare for database developers) and automated release management (growing amongst app developers, very rare for the database). So, when we created the model we started from scratch and biased the levels of maturity towards what we actually see amongst our customers. But what are these stages? And what level are you? The table below describes our definitions for four levels of maturity: Baseline, Beginner, Intermediate and Advanced. As I say, this is a model; you won't fit any of these categories perfectly, but hopefully one will ring true more than others. We've also created a PDF with a flow chart to help you find which of these groups most closely matches your team: Download the Database Delivery Maturity Framework PDF here.

    Level D1 – Baseline
    - Work directly on live databases
    - Sometimes work directly in production
    - Generate manual scripts for releases; sometimes use a product like SQL Compare or similar to do this
    - Any tests that we might have are run manually

    Level D2 – Beginner
    - Have some ad-hoc DB version control, such as manually adding upgrade scripts to a version control system
    - An attempt is made to keep production in sync with development environments
    - There is some documentation and planning of manual deployments
    - Some basic automated DB testing is in place

    Level D3 – Intermediate
    - The database is fully version-controlled with a product like Red Gate SQL Source Control or SSDT
    - Database environments are managed
    - The production environment schema is reproducible from the source control system
    - There are some automated tests
    - Have looked at using migration scripts for difficult database refactoring cases

    Level D4 – Advanced
    - Using continuous integration for database changes
    - Build, testing and deployment of DB changes carried out through a proper database release process
    - Fully automated tests
    - The production system is monitored for fast feedback to developers

    Does this model reflect your team at all? Where are you on this journey? We'd be very interested in knowing how you get on. We're doing a lot of work at the moment at Red Gate trying to help people progress through these stages. For example, if you're currently not source controlling your database, then this is a natural next step. If you are already source controlling your database, what about the next stage: continuous integration and automated release management? To help understand these issues, there's a summary of the Red Gate Database Delivery learning program on our site, alongside a Patterns and Practices library here on Simple-Talk and a Training Academy section on our documentation site to help you get up and running with the tools you need to progress. All feedback is welcome and it would be great to hear where you find yourself on this journey!

    This article is part of our database delivery patterns & practices series on Simple Talk. Find more articles for version control, automated testing, continuous integration & deployment.

    Read the article

  • What calls trigger a new batch?

    - by sebf
    I am finding that my project is starting to show performance degradation, and I need to optimize it. The answer to my previous question and this presentation from NVidia have helped greatly in understanding the performance characteristics of code using the GPU, but there are a couple of things that aren't clear that I need to know to optimize my drawing. Specifically: what calls make the distinction between batches? I know that any state change causes a new batch, so that includes:

    - Render state changes
    - Buffer changes
    - Shader changes
    - Render target changes

    Correct? What else counts as a "state change"? Does each Draw**Primitive() call constitute a new batch, even if I were to issue the same call twice with no state changes, or call it once on one part of the buffer, then again on another? If I were to update a buffer, but not change the bindings, would that be a new batch? That presentation and a DX9 page suggest using all of the texture slots available, which I take to mean loading multiple objects in 'parallel' by mapping their buffers/shaders/textures to slots 1-16. But I am not sure how this works: surely to do this you would need to change the buffer binding, and that would count as a state change? (Or is it a case of "you do, but it saves 16 calls so it's OK"?)
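
    Whatever the exact batching rules turn out to be, the standard defensive pattern is to sort or bucket submitted draw calls by their complete state vector, so draws that share state are issued back-to-back and redundant state sets are skipped. A rough illustration of that idea only (plain Java, not a Direct3D API; all names here are made up):

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.Comparator;
        import java.util.List;

        public class RenderQueue {
            static class DrawCall {
                int renderTarget, shader, texture, vertexBuffer;
                Runnable issue; // stands in for the actual Draw*Primitive call
            }

            private final List<DrawCall> calls = new ArrayList<DrawCall>();

            public void submit(DrawCall c) { calls.add(c); }

            public void flush() {
                // Sort with the most expensive state switch as the primary key,
                // so it changes least often.
                Collections.sort(calls, new Comparator<DrawCall>() {
                    public int compare(DrawCall a, DrawCall b) {
                        if (a.renderTarget != b.renderTarget) return a.renderTarget - b.renderTarget;
                        if (a.shader != b.shader) return a.shader - b.shader;
                        if (a.texture != b.texture) return a.texture - b.texture;
                        return a.vertexBuffer - b.vertexBuffer;
                    }
                });
                DrawCall last = null;
                for (DrawCall c : calls) {
                    // Only touch device state when it differs from the previous
                    // draw; each change here is what would break a batch.
                    if (last == null || c.renderTarget != last.renderTarget) { /* set render target */ }
                    if (last == null || c.shader != last.shader) { /* set shader */ }
                    if (last == null || c.texture != last.texture) { /* set texture */ }
                    if (last == null || c.vertexBuffer != last.vertexBuffer) { /* set stream source */ }
                    c.issue.run();
                    last = c;
                }
                calls.clear();
            }
        }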

    Read the article

  • What is the best way for an experienced developer to work on a WordPress blog

    - by nanothief
    I'm beginning to work on my first WordPress blog; however, I've noticed most tutorials just have you make modifications (such as theme changes or installing plugins) on the production site. This worries me for a few reasons:

    - No backups
    - No version control
    - If you make a mistake, your production site is affected
    - Developing remotely is slower than local development, especially when tweaking CSS files

    I understand why WordPress works like this: it allows people with no development experience to manage their WordPress installation (or the one provided by their service provider), and it allows you to work on the installation without having SSH access to the server. However, as I am comfortable working with tools like git and ssh, and am using a virtual server for the blog, this isn't very important to me. So I was wondering what techniques experienced developers use when working on a WordPress blog. For example:

    - Do you develop locally, then push the changes to the live site? How do you do this?
    - How do you manage database changes and backups?
    - What do you store under version control (if anything)?
    - If a plugin changes the database, do you somehow track the changes it makes in version control, so you can roll them back if you need to?

    Or maybe I'm just overcomplicating everything, if working on the production site isn't as risky as I think it is. I would appreciate any answers either way.

    Read the article

  • Is "If a method is re-used without changes, put the method in a base class, else create an interface" a good rule-of-thumb?

    - by exizt
    A colleague of mine came up with a rule of thumb for choosing between creating a base class or an interface. He says: "Imagine every new method that you are about to implement. For each of them, consider this: will this method be implemented by more than one class in exactly this form, without any change? If the answer is 'yes', create a base class. In every other situation, create an interface." For example: consider the classes cat and dog, which extend the class mammal and have a single method pet(). We then add the class alligator, which doesn't extend anything and has a single method slither(). Now we want to add an eat() method to all of them. If the implementation of the eat() method will be exactly the same for cat, dog and alligator, we should create a base class (let's say, animal) which implements this method. However, if its implementation in alligator differs in the slightest way, we should create an IEat interface and make mammal and alligator implement it. He insists that this method covers all cases, but it seems like an over-simplification to me. Is it worth following this rule of thumb?
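
    To make the rule concrete, here is the colleague's example sketched in Java (the class names come from the question; the method bodies are placeholders of my own):

        // Case 1: eat() would be implemented identically everywhere,
        // so the rule says: put the one shared implementation in a base class.
        abstract class Animal {
            public void eat() {
                System.out.println("chewing...");
            }
        }

        abstract class Mammal extends Animal {
            public void pet() {
                System.out.println("being petted");
            }
        }

        class Cat extends Mammal { }
        class Dog extends Mammal { }

        class Alligator extends Animal {
            public void slither() {
                System.out.println("slithering...");
            }
        }

        // Case 2: alligator's implementation differs even slightly,
        // so the rule says: declare an interface instead and let each
        // hierarchy implement eat() its own way.
        interface IEat {
            void eat();
        }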

    Read the article

  • Media Kind in iTunes COM for Windows SDK

    - by Joel Verhagen
    I recently found out about the awesomeness of the iTunes COM for Windows SDK. I am using Python with win32com to talk to my iTunes library. Needless to say, my head is in the process of exploding. This API rocks. I have one issue though: how do I access the Media Kind attribute of a track? I looked through the help file provided in the SDK and saw no sign of it. If you go into iTunes, you can modify a track's Media Kind; this way, if you have an audiobook that is showing up in your music library, you can set its Media Kind to Audiobook and it will appear in the Books section in iTunes. Pretty nifty. The reason I ask is that I have a whole crap load of audiobooks showing up in my LibraryPlaylist. Here is my code thus far:

        import win32com.client

        iTunes = win32com.client.gencache.EnsureDispatch('iTunes.Application')
        track = win32com.client.CastTo(iTunes.LibraryPlaylist.Tracks.Item(1), 'IITFileOrCDTrack')

        print track.Artist, '-', track.Name
        print
        print 'Is this track an audiobook?'
        print 'How the hell should I know?'

    Thanks in advance.

    Read the article

  • Javascript API not working for Chrome or Safari on JW Player 5.9

    - by Lando
    I am working on a custom interface for the JW Player which displays the current track title and has play/pause, next track, previous track and volume toggle buttons. It works in IE8/9 and Firefox but fails in Chrome and Safari. Chrome's console gives the following error:

        Uncaught TypeError: Object # has no method 'addControllerListener'

    This is the code I am using for testing:

        <div id="container">Loading the player ...</div>
        <script type="text/javascript">
        jwplayer("container").setup({
            image: "preview.jpg",
            height: 320,
            width: 480,
            modes: [
                { type: "html5" },
                { type: "flash", src: "player.swf" }
            ],
            'playlist': [
                { 'file': "audio/01.mp3", 'title': "Track 1" },
                { 'file': "audio/02.mp3", 'title': "Track 2" },
                { 'file': "audio/03.mp3", 'title': "Track 3" }
            ],
        });

        function playerReady(obj) {
            player = document.getElementById(obj.id);
            displayFirstItem();
        };

        function displayFirstItem() {
            try {
                playlist = player.getPlaylist();
            } catch(e) {
                setTimeout("displayFirstItem()", 100);
            }
            player.addControllerListener('ITEM', 'itemMonitor');
            itemMonitor({index:0});
        };

        function itemMonitor(obj) {
            $('#nowplaying').html('<span><strong>Now Playing:</strong> ' + playlist[obj.index]['title'] + '</span>');
        };
        </script>

        <div id="nowplaying"></div>
        <div class="control_bar">
            <ul>
                <li onclick='player.sendEvent("play");'>[ &#8250; ] Play / Pause</li>
                <li onclick='player.sendEvent("prev");'>[ &laquo; ] Previous item</li>
                <li onclick='player.sendEvent("next");'>[ &raquo; ] Next item</li>
            </ul>
        </div>

    I have searched and tried several modifications, like adding the javascriptid parameter, but nothing seems to work in Chrome or Safari. Any ideas? Thanks

    Read the article

  • How do I keep track of the session for each servlet request, until I use it? Singletons won't work?

    - by corgrath
    For each servlet request I get, I pass through perhaps 10 methods before I am at the point where I need to check something in the session, and for that I need the HttpSession. The only way I can get the HttpSession is from the HttpServletRequest, correct? How do I keep track of the session for each servlet request? Unfortunately I cannot simply make a singleton (e.g. SessionInformation.instance().getAttribute("name")), because that session would then be shared across all requests. Is there a way to store the session globally for each request without having to pass it (or its information) down all the methods just in case I need it?
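
    One common pattern (a sketch of one possible answer, not the only one): since a servlet container dedicates a single thread to each request, a Filter can park the session in a ThreadLocal when the request starts and clear it when the request ends, so any code running on that thread can reach the session without it being passed down ten method calls. The class name is illustrative:

        import java.io.IOException;
        import javax.servlet.Filter;
        import javax.servlet.FilterChain;
        import javax.servlet.FilterConfig;
        import javax.servlet.ServletException;
        import javax.servlet.ServletRequest;
        import javax.servlet.ServletResponse;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpSession;

        public class SessionContextFilter implements Filter {
            private static final ThreadLocal<HttpSession> CURRENT =
                    new ThreadLocal<HttpSession>();

            // Callable from anywhere on the request-handling thread.
            public static HttpSession currentSession() {
                return CURRENT.get();
            }

            public void doFilter(ServletRequest req, ServletResponse res,
                                 FilterChain chain) throws IOException, ServletException {
                try {
                    CURRENT.set(((HttpServletRequest) req).getSession());
                    chain.doFilter(req, res);
                } finally {
                    CURRENT.remove(); // never leak a session to a pooled thread
                }
            }

            public void init(FilterConfig cfg) { }
            public void destroy() { }
        }

    Register the filter in web.xml for all URL patterns, and any of those 10 methods can call SessionContextFilter.currentSession() only when it actually needs the session.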

    Read the article
