Search Results

Search found 58440 results on 2338 pages for 'data cleansing'.

  • New Master Data Services Content (What Else?!)

    - by KnightReign
    msdev is about to launch a series of training courses for Master Data Services that covers early concepts, setup, model building, configuration, security model setup and the object model. This should be a great series and promises to be a solid introduction to the product. http://www.msdev.com/Directory/SeriesDescription.aspx?CourseId=155 If you haven’t noticed lately, there is a great set of entries up on the SSIS team blog now. These are quality blog entries that really get into the details of...(read more)

  • Asp.net MVC jQuery Ajax calls to JsonResult return no data

    - by Maslow
    I have this script loaded on a page: (function() { window.alert('bookmarklet started'); function AjaxSuccess(data, textStatus, xmlHttpRequest) { if (typeof (data) == 'undefined') { return alert('Data is undefined'); } alert('ajax success' + (data || ': no data')); } function AjaxError(xmlHttpRequest, textStatus, errorThrown) { alert('ajax failure:' + textStatus); } /*imaginarydevelopment.com/Sfc*/ var destination = { url: 'http://localhost:3041/Bookmarklet/SaveHtml', type: 'POST', success: AjaxSuccess, error: AjaxError, dataType: 'text', contentType: 'application/x-www-form-urlencoded' }; if (typeof (jQuery) == 'undefined') { return alert('jQuery not defined'); } if (typeof ($jq) == 'undefined') { if (typeof ($) != 'undefined') { $jq = $; } else { return alert('$jq->jquerify not defined'); } } if ($jq('body').length <= 0) { return alert('Could not query body length'); } if ($jq('head title:contains(BookmarkletTest)').length > 0) { alert('doing test'); destination.data = { data: 'BookmarkletTestAjax' }; $jq.ajax(destination); return; } })(); When it is run locally in VS2008's Cassini, the ajax success callback shows the string returned from ASP.NET MVC; when it is run remotely, the ajax success data is null. Here's the controller method, which fires both locally and when run remotely: [AcceptVerbs(HttpVerbs.Post | HttpVerbs.Get)] public string SaveHtml(string data) { var path = getPath(Server.MapPath); System.IO.File.WriteAllText(path,data); Console.WriteLine("SaveHtml called"); Debug.WriteLine("SaveHtml called"); //return Json(new { result = "SaveHtml Success" }); return "SaveHtml Success"; } Once I have it working I was going to remove the GET, but currently accessing the SaveHtml method directly from the web browser produces the expected results when testing. So I believe there is something wrong in my JavaScript, because when I step through it with Chrome's developer tools, I see the data is null, and the xmlHttpRequest doesn't appear to have the expected result in it anywhere either. I'm loading jQuery via http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js

  • Zend: How to populate checkboxes with data?

    - by NAVEED
    I am working in Zend. I have a form with some checkboxes. I want to get data from the database and populate the form with it: if '1' is stored in the table field, the checkbox should be ticked; otherwise leave it alone. In textboxes and dropdowns the data is easily populated, but how do I check a checkbox in the action? I am creating the checkbox and textbox elements like this in form.php: // Person name $person = $this->CreateElement('text', 'name'); $person->setLabel('Name'); $elements[] = $person; // Organization name $person = $this->CreateElement('text', 'organization'); $person->setLabel('Organization'); $elements[] = $person; // isAdmin Checkbox $isAdmin = $this->CreateElement('checkbox', 'isAdmin'); $isAdmin->setLabel('Admin'); $elements[] = $isAdmin; $this->addElements($elements); $this->setElementDecorators(array('ViewHelper')); // set form decorator (what script will render the form) $this->setDecorators(array(array('ViewScript' , array('viewScript' => 'organization/accessroles-form.phtml')))); And I am populating the data like this (for example): // Prepare data to populate $data['name'] = 'Naveed'; $data['organization'] = 'ABC'; $data['isAdmin'] = '1'; // Populate editable data $this->view->form->populate( $data ); It populates the data in the textboxes but does not check the checkbox. Any idea how to check a checkbox from the action? Thanks

  • How to delete duplicate records in MySQL by retaining those fields with data in the duplicate item b

    - by NJTechGuy
    I have a few thousand records with a few hundred fields in a MySQL table. Some records are duplicates and are marked as such. Now, while I could simply delete the dupes, I want to retain any other possibly valuable non-null data which is not present in the original version of the record. I hope that makes sense. For instance (blank cells are NULLs):

        a  b  c  d  e  f  key  dupe
        ---------------------------
        1  d  c  f  k  l   1    x
        2  g     h     j   1
        3  i     h  u  u   2
        4  u  r  t         2    x

    From the above sample table, the desired output is:

        a  b  c  d  e  f  key  dupe
        ---------------------------
        2  g  c  h  k  j   1
        3  i  r  h  u  u   2

    If you look at it closely, the duplicate is determined by the key (it is the same for two records), so the one that has an 'x' in the dupe field is the one to be deleted, while retaining some of the fields from the dupe (like the c, e values for key 1). Please let me know if you need more info about this puzzling problem. Thanks a tonne! P.S.: If it is not possible using MySQL, a PERL/Python script sample would be awesome! Thanks!
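    Since the poster asks for a Python sample, here is a minimal sketch of the merge-then-delete approach. The table name (records), the six data columns, and the MySQLdb driver are assumptions for illustration; the real table would list all of its columns in DATA_COLS.

        # Hypothetical sketch: copy non-null values from each row flagged 'x'
        # into the surviving row with the same key, then delete the flagged rows.
        import MySQLdb  # any DB-API 2.0 driver with '%s' paramstyle works the same way

        DATA_COLS = ["a", "b", "c", "d", "e", "f"]  # abbreviation of the ~100 real columns

        def merge_and_delete_dupes(conn):
            cur = conn.cursor()
            cols = ", ".join(DATA_COLS)
            # Gather the flagged duplicates, keyed by the shared `key` value.
            cur.execute(f"SELECT `key`, {cols} FROM records WHERE dupe = 'x'")
            dupes = {row[0]: row[1:] for row in cur.fetchall()}
            # Walk the surviving rows and fill each NULL field from its dupe.
            cur.execute(f"SELECT `key`, {cols} FROM records "
                        f"WHERE dupe IS NULL OR dupe <> 'x'")
            for row in cur.fetchall():
                key, orig = row[0], row[1:]
                if key not in dupes:
                    continue
                merged = [o if o is not None else d for o, d in zip(orig, dupes[key])]
                sets = ", ".join(f"{c} = %s" for c in DATA_COLS)
                cur.execute(f"UPDATE records SET {sets} "
                            f"WHERE `key` = %s AND (dupe IS NULL OR dupe <> 'x')",
                            merged + [key])
            cur.execute("DELETE FROM records WHERE dupe = 'x'")
            conn.commit()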

  • What data structure should I use for hash lookup as well as binary search?

    - by zebraman
    I am working on a school homework assignment. I have a list of names. I want to be able to perform binary search on these names (find all names between a lower and upper bound) by first name as well as last name, and perform keyword searches as well (this will be accomplished using hashing). For example, if I have the names Garfield Cat, Snoopy Dog, Captain Crunch, and Fat Cat, then a binary search of first names (C,H) will return Captain Crunch, Fat Cat, and Garfield Cat. A binary search of last names (Cr,D) will return Captain Crunch. A keyword search of 'cat' will return Fat Cat and Garfield Cat. I understand binary search will only work on a sorted list, but since I am planning on searching by two different criteria, I would have to sort the list by last name or first name depending on what I'm searching for. I feel like it would be too inefficient to re-sort the list each time I want to perform a new binary search. Would it just be better for me to set up and maintain two sorted lists (one sorted by first name, one sorted by last name)? Also, for hashing, will I have to set up a different table of names for that as well? I understand each keyword will hash to some value determined by a hash function, and this value (or key) is a table address where the corresponding names are stored. So I just want to know what would be the best way to solve this problem: maintaining separate structures, or is there a way to efficiently do everything I want with just one data structure?
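    As a sketch of the two-sorted-lists idea (plus a hash table for keywords), here is one way it could look in Python, using bisect for the range queries. The names and bounds come from the example above; everything else is illustrative.

        # Two sorted copies of the (first, last) name tuples plus a keyword
        # index; bisect finds the range endpoints in O(log n).
        import bisect
        from collections import defaultdict

        names = [("Garfield", "Cat"), ("Snoopy", "Dog"),
                 ("Captain", "Crunch"), ("Fat", "Cat")]

        by_first = sorted(names)                             # ordered by first name
        by_last = sorted(names, key=lambda n: (n[1], n[0]))  # ordered by last name
        last_keys = [n[1] for n in by_last]                  # parallel list for bisect

        keyword_index = defaultdict(set)                     # word -> set of names
        for name in names:
            for word in name:
                keyword_index[word.lower()].add(name)

        def first_name_range(lo, hi):
            # Every name whose first name lies in [lo, hi).
            return by_first[bisect.bisect_left(by_first, (lo,)):
                            bisect.bisect_left(by_first, (hi,))]

        def last_name_range(lo, hi):
            return by_last[bisect.bisect_left(last_keys, lo):
                           bisect.bisect_left(last_keys, hi)]

        print(first_name_range("C", "H"))  # Captain Crunch, Fat Cat, Garfield Cat
        print(last_name_range("Cr", "D"))  # Captain Crunch
        print(keyword_index["cat"])        # Fat Cat and Garfield Cat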

  • Firebird Data Access Designer (DDEX) installation

    - by persian Dev
    Hi, I want to use the Firebird library, and I followed its instructions as below, but I get a "The referenced component 'FirebirdSql.Data.Firebird' could not be found." error. Instructions: Prerequisites: Make sure that you have Visual Studio .NET 2005 Standard or a higher edition; Express editions are not supported. Registry update: Remember to update the path in FirebirdDDEXProviderPackageLess32.reg or FirebirdDDEXProviderPackageLess64.reg; the places to update are marked %Path%. Install the .reg file into the registry. Machine.config update: Add the following two sections to machine.config (usually located at C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\CONFIG\machine.config, and C:\WINDOWS\Microsoft.NET\Framework64\v2.0.50727\CONFIG\machine.config on a 64-bit system). <configuration> ... <configSections> ... <section name="firebirdsql.data.firebirdclient" type="System.Data.Common.DbProviderConfigurationHandler, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" /> ... </configSections> ... <system.data> <DbProviderFactories> ... <add name="FirebirdClient Data Provider" invariant="FirebirdSql.Data.FirebirdClient" description=".Net Framework Data Provider for Firebird" type="FirebirdSql.Data.FirebirdClient.FirebirdClientFactory, FirebirdSql.Data.FirebirdClient, Version=%Version%, Culture=%Culture%, PublicKeyToken=%PublicKeyToken%" /> ... </DbProviderFactories> </system.data> ... </configuration> And substitute: %Version% with the version of the provider assembly that you have in the GAC, %Culture% with the culture of the provider assembly that you have in the GAC, and %PublicKeyToken% with the PublicKeyToken of the provider assembly that you have in the GAC.

  • Core Data: Inverse relationship only mirrors when I edit the mutable set. Not sure why.

    - by zorn
    My model is set up so that Business has many clients and Client has one business. The inverse relationship is set up in the mom file. I have a unit test like this: - (void)testNewClientFromBusiness { PTBusiness *business = [modelController newBusiness]; STAssertTrue([[business clients] count] == 0, @"is actually %d", [[business clients] count]); PTClient *client = [business newClient]; STAssertTrue([business isEqual:[client business]], nil); STAssertTrue([[business clients] count] == 1, @"is actually %d", [[business clients] count]); } I implement -newClient inside of PTBusiness like this: - (PTClient *)newClient { PTClient *client = [NSEntityDescription insertNewObjectForEntityForName:@"Client" inManagedObjectContext:[self managedObjectContext]]; [client setBusiness:self]; [client updateLocalDefaultsBasedOnBusiness]; return client; } The test fails because [[business clients] count] is still 0 after -newClient is called. If I implement it like this: - (PTClient *)newClient { PTClient *client = [NSEntityDescription insertNewObjectForEntityForName:@"Client" inManagedObjectContext:[self managedObjectContext]]; NSMutableSet *group = [self mutableSetValueForKey:@"clients"]; [group addObject:client]; [client updateLocalDefaultsBasedOnBusiness]; return client; } the test passes. My question(s): So am I right in thinking the inverse relationship is only updated when I interact with the mutable set? That seems to go against some other Core Data docs I've read. Does the fact that this is running in a unit test without a run loop have anything to do with it? Any other troubleshooting recommendations? I'd really like to figure out why I can't set up the relationship at the client end.

  • Is a red-black tree my ideal data structure?

    - by Hugo van der Sanden
    I have a collection of items (big rationals) that I'll be processing. In each case, processing will consist of removing the smallest item in the collection, doing some work, and then adding 0-2 new items (which will always be larger than the removed item). The collection will be initialised with one item, and work will continue until it is empty. I'm not sure what size the collection is likely to reach, but I'd expect in the range 1M-100M items. I will not need to locate any item other than the smallest. I'm currently planning to use a red-black tree, possibly tweaked to keep a pointer to the smallest item. However I've never used one before, and I'm unsure whether my pattern of use fits its characteristics well. 1) Is there a danger the pattern of deletion from the left + random insertion will affect performance, eg by requiring a significantly higher number of rotations than random deletion would? Or will delete and insert operations still be O(log n) with this pattern of use? 2) Would some other data structure give me better performance, either because of the deletion pattern or taking advantage of the fact I only ever need to find the smallest item? Update: glad I asked, the binary heap is clearly a better solution for this case, and as promised turned out to be very easy to implement. Hugo
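    For reference, the binary heap the update mentions is only a few lines in Python with heapq; Fraction stands in for the big rationals, and the expansion rule is a toy assumption purely to make the sketch runnable.

        # Binary-heap version with heapq; Fraction stands in for big rationals.
        import heapq
        from fractions import Fraction

        def process(initial, expand):
            """expand(item) does the per-item work and returns 0-2 larger items."""
            heap = [initial]
            while heap:
                smallest = heapq.heappop(heap)  # O(log n) removal of the minimum
                for item in expand(smallest):   # each is larger than `smallest`
                    heapq.heappush(heap, item)  # O(log n) insertion

        # Toy expansion rule (an assumption): grow by 1/3 until items exceed 2.
        process(Fraction(1, 2), lambda x: [x + Fraction(1, 3)] if x < 2 else [])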

  • What is a data structure for quickly finding non-empty intersections of a list of sets?

    - by Andrey Fedorov
    I have a set of N items, which are sets of integers; let's assume it's ordered and call it I[1..N]. Given a candidate set, I need to find the subset of I which have non-empty intersections with the candidate. So, for example, if: I = [{1,2}, {2,3}, {4,5}] I'm looking to define valid_items(items, candidate), such that: valid_items(I, {1}) == {1} valid_items(I, {2}) == {1, 2} valid_items(I, {3,4}) == {2, 3} I'm trying to optimize for one given set I and varying candidate sets. Currently I am doing this by caching items_containing[n] = {the sets which contain n}. In the above example, that would be: items_containing = [{}, {1}, {1,2}, {2}, {3}, {3}] That is, 0 is contained in no items, 1 is contained in item 1, 2 is contained in items 1 and 2, 3 is contained in item 2, and 4 and 5 are contained in item 3. That way, I can define valid_items(I, candidate) = union(items_containing[n] for n in candidate). Is there any more efficient data structure (of a reasonable size) for caching the result of this union? The obvious example of space 2^N is not acceptable, but N or N*log(N) would be.
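    For concreteness, here is a minimal Python sketch of exactly this inverted-index scheme; it reproduces the three example results above and answers each query in time proportional to the candidate size plus the output size.

        # Inverted index over I, as described above.
        from collections import defaultdict

        I = [{1, 2}, {2, 3}, {4, 5}]          # the items, 1-indexed conceptually

        items_containing = defaultdict(set)    # n -> indices of items containing n
        for idx, item in enumerate(I, start=1):
            for n in item:
                items_containing[n].add(idx)

        def valid_items(candidate):
            result = set()
            for n in candidate:
                result |= items_containing[n]  # union of the per-element postings
            return result

        print(valid_items({1}))     # {1}
        print(valid_items({2}))     # {1, 2}
        print(valid_items({3, 4}))  # {2, 3}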

  • Sql Exception: Error converting data type numeric to numeric

    - by Lucifer
    Hello We have a very strange issue with a database that has been moved from staging to production. The first time the database was moved it was by detaching, copying and reattaching, the second time we tried restoring from a backup of the staging. Both SQL Servers are the same version of MS SQL 2008, running on 64 bit hardware. The code accessing the database is the same build, built using the .net 2.0 framework. Here is the error message and some of the stack trace: Exception Details: System.Data.SqlClient.SqlException: Error converting data type numeric to numeric. Stack Trace: [SqlException (0x80131904): Error converting data type numeric to numeric.] System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) +1953274 System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) +4849707 System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) +194 System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) +2392 System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString) +204 System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async) +954 System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result) +162 System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe) +175 System.Data.SqlClient.SqlCommand.ExecuteNonQuery() +137 Version Information: Microsoft .NET Framework Version:2.0.50727.4200; ASP.NET Version:2.0.50727.4016

  • Fetching Core Data for Tableview on iPhone - Tutorial leaves me with crashes when adding items to a

    - by Gordon Fontenot
    Been following the Core Data tutorial on Apple's developer site, and all is good until I have to add something to the fetched store. I am getting this error after a successful build and load when I try to add a new item to the list: Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Invalid update: invalid number of rows in section 0. The number of rows contained in an existing section after the update (0) must be equal to the number of rows contained in that section before the update (0), plus or minus the number of rows inserted or deleted from that section (1 inserted, 0 deleted). Due to the fact that the fetch goes through fine, and that if I replace the fetching with eventList = [[NSMutableArray alloc] init] it works as expected (without persistance, of course), I am led to believe that the problem comes from not creating the Mutable Array correctly. Here's the problematic part of the code: NSFetchRequest *request = [[NSFetchRequest alloc] init]; NSEntityDescription *entity = [NSEntityDescription entityForName:@"Event" inManagedObjectContext:managedObjectContext]; [request setEntity:entity]; NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"creationDate" ascending:NO]; NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil]; [request setSortDescriptors:sortDescriptors]; [sortDescriptors release]; [sortDescriptor release]; NSError *error; NSMutableArray *mutableFetchResults = [[managedObjectContext executeFetchRequest:request error:&error] mutableCopy]; if (mutableFetchResults = nil) { //Handle the error } [self setEventList:mutableFetchResults]; [mutableFetchResults release]; [request release]; I have tried switching the NSArrays in the second chunk out with NSMutableArrays, but I still get the same error. For reference, the section of code that is throwing the error when I try adding an entry is here: [eventList insertObject:event atIndex:0]; NSIndexPath *indexPath = [NSIndexPath indexPathForRow:0 inSection:0]; [self.tableView insertRowsAtIndexPaths:[NSArray arrayWithObject:indexPath] withRowAnimation:UITableViewRowAnimationFade]; [self.tableView scrollToRowAtIndexPath:[NSIndexPath indexPathForRow:0 inSection:0] atScrollPosition:UITableViewScrollPositionTop animated:YES]; it errors out at the insertRowsAtIndexPaths call. Thanks in advance for any help

  • How to stop Android GPS using "Mobile data"

    - by prepbgg
    My app requests location updates with "minTime" set to 2 seconds. When "Mobile data" is switched on (in the phone's settings) and GPS is enabled, the app uses "mobile data" at between 5 and 10 megabytes per hour. This is recorded in the ICS "Data usage" screen as usage by "Android OS". In an attempt to prevent this I have unticked Settings > "Location services" > "Google's location service". Does this refer to Assisted GPS, or is it something more than that? Whatever it is, it seems to make no difference to my app's internet access. As further confirmation that it is the GPS usage by my app that is causing the mobile data access, I have observed that the internet data activity indicator on the status bar shows activity when, and only when, the GPS indicator is present. The only way to prevent this mobile data usage seems to be to switch "Mobile data" off, and GPS accuracy seems to be almost as good without the support of mobile data. However, it is obviously unsatisfactory to have to switch mobile data off. The only permissions in the manifest are "android.permission.ACCESS_FINE_LOCATION" (and "android.permission.WRITE_EXTERNAL_STORAGE"), so the app has no explicit permission to use internet data. The LocationManager code is: criteria.setAccuracy(Criteria.ACCURACY_FINE); criteria.setSpeedRequired(false); criteria.setAltitudeRequired(false); criteria.setBearingRequired(true); criteria.setCostAllowed(false); criteria.setPowerRequirement(Criteria.NO_REQUIREMENT); bestProvider = lm.getBestProvider(criteria, true); if (bestProvider != null) { lm.requestLocationUpdates(bestProvider, gpsMinTime, gpsMinDistance, this); } The reference for LocationManager.getBestProvider says: "If no provider meets the criteria, the criteria are loosened ... Note that the requirement on monetary cost is not removed in this process." However, despite setting setCostAllowed to false, the app still incurs a potential monetary cost. What else can I do to prevent the app from using mobile data?

  • Store the cache data locally

    - by Lu Lu
    Hello, I am developing a C# WinForms application; it is a client that connects to a web service to get data. The data returned by the web service is a DataTable, which the client displays in a DataGridView. My problem is this: the client takes a long time to get all the data from the server (the web service is not local to the client), so I must use a thread to get the data. This is my model: the client creates a thread to get the data; the thread completes and sends an event to the client; the client displays the data in the DataGridView on a form. However, when the user closes the form and opens it again later, the client must fetch all the data again, which makes the client slow. So I am thinking about cached data: Client <---get/add/edit/delete--- Cached Data ---get/add/edit/delete---> Server (web service). Please give me some suggestions. For example: should the cached data live in a separate application on the same host as the client, or run inside the client itself? Please give me some techniques (and any examples you have) to implement this solution. Thanks.
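    The pattern being described is essentially a write-through cache sitting between the form and the web service: reads are served from the local copy, while add/edit/delete go to the server first and then update the copy so it never goes stale. A minimal sketch of the idea follows (in Python for brevity rather than C#; the service proxy methods fetch_all/insert/delete are hypothetical stand-ins for the real web-service calls):

        # Write-through cache sketch. `service` is a hypothetical proxy whose
        # fetch_all/insert/delete calls wrap the real web-service methods.
        import threading

        class CachedTable:
            def __init__(self, service):
                self.service = service
                self.lock = threading.Lock()
                self.rows = None                 # None means "not fetched yet"

            def get_all(self):
                with self.lock:
                    if self.rows is None:        # first access pays the slow fetch
                        self.rows = {r["id"]: r for r in self.service.fetch_all()}
                    return list(self.rows.values())

            def add(self, row):
                new_id = self.service.insert(row)  # server stays the source of truth
                with self.lock:
                    if self.rows is not None:
                        self.rows[new_id] = dict(row, id=new_id)

            def delete(self, row_id):
                self.service.delete(row_id)
                with self.lock:
                    if self.rows is not None:
                        self.rows.pop(row_id, None)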

  • jQuery fails to retrieve accurate data from a sibling field.

    - by i need help
    wonder what's wrong <table id=tblDomainVersion> <tr> <td>Version</td> <td>No of sites</td> </tr> <tr> <td class=clsversion>1.25</td> <td><a id=expanddomain>3 sites</a><span id=spanshowall></span></td> </tr> <tr> <td class=clsversion>1.37</td> <td><a id=expanddomain>7 sites</a><span id=spanshowall></span></td> </tr> </table> $('#expanddomain').click(function() { //the siblings result incorrect //select first row will work //select second row will no response var versionforselected= $('#expanddomain').parent().siblings("td.clsversion").text(); alert(versionforselected); $.ajax({ url: "ajaxquery.php", type: "POST", data: 'version='+versionforselected, timeout: 900000, success: function(output) { output= jQuery.trim(output); $('#spanshowall').html(output); }, }); });

  • Is there a more memory efficient way to search through a Core Data database?

    - by Kristian K
    I need to see if an object that I have obtained from a CSV file with a unique identifier exists in my Core Data Database, and this is the code I deemed suitable for this task: NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init]; NSEntityDescription *entity; entity = [NSEntityDescription entityForName:@"ICD9" inManagedObjectContext:passedContext]; [fetchRequest setEntity:entity]; NSPredicate *pred = [NSPredicate predicateWithFormat:@"uniqueID like %@", uniqueIdentifier]; [fetchRequest setPredicate:pred]; NSError *err; NSArray* icd9s = [passedContext executeFetchRequest:fetchRequest error:&err]; [fetchRequest release]; if ([icd9s count] > 0) { for (int i = 0; i < [icd9s count]; i++) { NSAutoreleasePool *pool = [[NSAutoreleasePool alloc]init]; NSString *name = [[icd9s objectAtIndex:i] valueForKey:@"uniqueID"]; if ([name caseInsensitiveCompare:uniqueIdentifier] == NSOrderedSame && name != nil) { [pool release]; return [icd9s objectAtIndex:i]; } [pool release]; } } return nil; After more thorough testing it appears that this code is responsible for a huge amount of leaking in the app I'm writing (it crashes on a 3GS before making it 20 percent through the 1459 items). I feel like this isn't the most efficient way to do this, any suggestions for a more memory efficient way? Thanks in advance!

  • Silverlight 4 caching issue?

    - by DavidS
    I am currently experiencing what seems to be a weird caching problem. When I load my data initially, I return all the data within the given dates and my graph looks as follows (screenshot omitted). Then I filter the data to return a subset of the original data for the same date range (not that it matters) and I get the following view of my data (screenshot omitted). However, I intermittently get the following when I refresh the same filtered view of the data (screenshot omitted). One can see that not all the data gets cached, but only some of it, i.e. for 12 Dec 2010 and 5 Dec 2010 (not shown here). I've looked at my queries and the correct data is getting pulled out. It is only on the presentation layer, i.e. in Mainpage.xaml.cs, that this erroneous data seems to exist. I've stepped through the code and the data is correct through all the layers except the presentation layer. Has anyone experienced this before? Is there some sort of caching going on in the background that is keeping that data around, even though I've got browser caching off? I am using the LoadOperation in the callback method within the Load method of the DomainContext, if that helps...

  • How can I synchronize one set of data with another?

    - by RenderIn
    I have an old database and a new database. The old records were converted to the new database recently. All our old applications continue to point to the old database, but the new applications point to the new database. Currently the old database is the only one being updated, so throughout the day the new database becomes out of sync. It is acceptable for the new database to be out of sync for a day, so until all our applications are pointed to the new database I just need to write a nightly cron job that will bring it up to date. I do not want to purge the new database and run the complete conversion script each night, as that would reduce uptime and would create a mess in our auditing of that table. I'm thinking about selecting all the data from the old database, converting it to the new database structure in memory, and then checking for the existence of each record before inserting it in the new database. After that's done, I'd select everything from the new database and check if it exists in the old one, and if not delete it. Is this the simplest way to do this?
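    A minimal sketch of such a nightly job follows: upsert every converted row from the old database, then prune rows that no longer exist there. All table and column names, the convert step, and the %s placeholder style are assumptions for illustration; a production job would also compare values before updating so unchanged rows don't churn the audit trail.

        # One-way nightly sync sketch; schema and names are assumptions.
        def nightly_sync(old_conn, new_conn):
            src, dst = old_conn.cursor(), new_conn.cursor()

            src.execute("SELECT id, name, amount FROM old_schema.records")
            old_rows = {row[0]: convert(row) for row in src.fetchall()}

            dst.execute("SELECT id FROM new_schema.records")
            new_ids = {row[0] for row in dst.fetchall()}

            for rec_id, (_, name, amount) in old_rows.items():
                if rec_id in new_ids:
                    # A real job might SELECT and compare first to skip
                    # no-op updates and keep the audit trail quiet.
                    dst.execute("UPDATE new_schema.records SET name = %s, amount = %s WHERE id = %s",
                                (name, amount, rec_id))
                else:
                    dst.execute("INSERT INTO new_schema.records (id, name, amount) VALUES (%s, %s, %s)",
                                (rec_id, name, amount))

            for stale_id in new_ids - old_rows.keys():  # rows deleted upstream
                dst.execute("DELETE FROM new_schema.records WHERE id = %s", (stale_id,))

            new_conn.commit()

        def convert(row):
            # Placeholder for the old-to-new structure conversion described above.
            return row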

  • How to recover deleted files on ext3 fs

    - by Mike
    I have a drive which was using the ext3 filesystem. I am told that about 10 GB of data was deleted off the drive (probably via rm). The drive is currently mounted as read-only to preserve all data. Does anyone know of a method to restore some or all of the data? Also, if it helps, the OS was Fedora. I've also been told that the data is mostly ASCII Fortran source code and Matlab files. Conclusion: I have finally managed to get the data back, and with the simplest means ever! After weeks of trying and failing to bring back much of any data, I brought someone in today to take a look at it and offer suggestions; he simply cd'd to the directory and everything was there! It was never lost in the first place!!! Needless to say I feel really dumb right now, but I learned quite a lot from this whole fiasco. At any rate, while I was looking through data forensics solutions, I found that Autopsy, or more specifically the SleuthKit, was the most helpful. So I will accept that as the final answer. I would also like to note, for anyone that comes across this later on, that the most up-voted (currently) answer by sekenre was also helpful and I learned a lot, but ultimately it did not help with the type of files I was dealing with (very many, and some being very large). So thanks to all of you who provided suggestions, and I wish you all the best!

  • ZFS Data Loss Scenarios

    - by Obtuse
    I'm looking toward building a largish ZFS pool (150TB+), and I'd like to hear people's experiences with data loss scenarios due to failed hardware, in particular distinguishing between instances where just some data is lost vs. the whole filesystem (or if there even is such a distinction in ZFS). For example: let's say a vdev is lost due to a failure like an external drive enclosure losing power, or a controller card failing. From what I've read the pool should go into a faulted mode, but if the vdev is returned, should the pool recover, or not? Or if the vdev is partially damaged, does one lose the whole pool, some files, etc.? What happens if a ZIL device fails? Or just one of several ZILs? Truly any and all anecdotes or hypothetical scenarios backed by deep technical knowledge are appreciated! Thanks! Update: We're doing this on the cheap since we are a small business (9 people or so), but we generate a fair amount of imaging data. The data is mostly smallish files, by my count about 500k files per TB. The data is important but not uber-critical. We are planning to use the ZFS pool to mirror a 48TB "live" data array (in use for 3 years or so), and use the rest of the storage for 'archived' data. The pool will be shared using NFS. The rack is supposedly on a building backup generator line, and we have two APC UPSes capable of powering the rack at full load for 5 mins or so.

  • Building an ASP.Net 4.5 Web forms application - part 4

    - by nikolaosk
    This is the fourth post in a series of posts on how to design and implement an ASP.NET 4.5 Web Forms store that sells posters online. There are three earlier posts in this series; please make sure you read them first. You can find the first post here. You can find the second post here. You can find the third post here. In this new post we will build on the previous posts and demonstrate how to display the posters per category. We will add a ListView control to PosterList.aspx and bind data from the database, using the various templates. Then we will write code in PosterList.aspx.cs to fetch the data from the database.
    1) Launch Visual Studio and open the solution where your project lives. 2) Open the PosterList.aspx page. We will add some markup to this page. Have a look at the code below: <section class="posters-featured"> <ul> <asp:ListView ID="posterList" runat="server" DataKeyNames="PosterID" GroupItemCount="3" ItemType="PostersOnLine.DAL.Poster" SelectMethod="GetPosters"> <EmptyDataTemplate> <table id="Table1" runat="server"> <tr> <td>We have no data.</td> </tr> </table> </EmptyDataTemplate> <EmptyItemTemplate> <td id="Td1" runat="server" /> </EmptyItemTemplate> <GroupTemplate> <tr ID="itemPlaceholderContainer" runat="server"> <td ID="itemPlaceholder" runat="server"></td> </tr> </GroupTemplate> <ItemTemplate> <td id="Td2" runat="server"> <table> <tr> <td>&nbsp;</td> <td> <a href="PosterDetails.aspx?posterID=<%#:Item.PosterID%>"> <img src="<%#:Item.PosterImgpath%>" width="100" height="75" border="1"/></a> </td> <td> <a href="PosterDetails.aspx?posterID=<%#:Item.PosterID%>"> <span class="PosterName"> <%#:Item.PosterName%> </span> </a> <br /> <span class="PosterPrice"> <b>Price: </b><%#:String.Format("{0:c}", Item.PosterPrice)%> </span> <br /> </td> </tr> </table> </td> </ItemTemplate> <LayoutTemplate> <table id="Table2" runat="server"> <tr id="Tr1" runat="server"> <td id="Td3" runat="server"> <table ID="groupPlaceholderContainer" runat="server"> <tr ID="groupPlaceholder" runat="server"></tr> </table> </td> </tr> <tr id="Tr2" runat="server"><td id="Td4" runat="server"></td></tr> </table> </LayoutTemplate> </asp:ListView> </ul> </section>
    3) We have a ListView control on the page called posterList. I set the ItemType property to the Poster class and the SelectMethod to the GetPosters method, which I will create later on (ItemType="PostersOnLine.DAL.Poster" SelectMethod="GetPosters"). In the code below I then have the data-binding expression Item available and the control becomes strongly typed, so when the user clicks on the link of the poster's category the relevant information (photo, name and price) will be displayed: <td> <a href="PosterDetails.aspx?posterID=<%#:Item.PosterID%>"> <img src="<%#:Item.PosterImgpath%>" width="100" height="75" border="1"/></a> </td>
    4) Now we need to write the simple method that populates the ListView control. It is called the GetPosters method. The code follows: public IQueryable<Poster> GetPosters([QueryString("id")] int? PosterCatID) { PosterContext ctx = new PosterContext(); IQueryable<Poster> query = ctx.Posters; if (PosterCatID.HasValue && PosterCatID > 0) { query = query.Where(p => p.PosterCategoryID == PosterCatID); } return query; } This is a very simple method that returns information about the posters related to the PosterCatID passed to it. I bind the value from the query string to the PosterCatID parameter at run time. This is all possible thanks to the QueryStringAttribute class, which lives inside System.Web.ModelBinding and gets the value of the query string variable id.
    5) I run my application and then click on the "Midfilders" link. Have a look at the picture below to see the results. In the Site.css file I added some new CSS rules to make everything more presentable: .posters-featured { width:840px; background-color:#efefef; } .posters-featured a:link, a:visited, a:active, a:hover { color: #000033; } .posters-featured a:hover { background-color: #85c465; }
    6) I run the application again and this time I do not choose any category; I simply navigate to the PosterList.aspx page. I see all the posters, since no query string was passed as a parameter. Have a look at the picture below. Make sure you place breakpoints in the code so you can see what is really going on. In the next post I will show you how to display poster details. Hope it helps!!!

  • Oracle data warehouse design - fact table acting as a dimension?

    - by Elizabeth
    THANKS: Both answers here are very helpful, but I could only pick one. I really appreciate the advice! our datawarehouse will be used more for workflow reports than traditional analytical reports. Our users care about "current picture" far more than history. (though history matters, too.) We are a government entity that does not have costs or related calculations. Mostly just counts of people within given locations and with related history. We are using Oracle, and I have found distinct advantage in using the star join whenever possible and would like to rearchitect everything to as closely resemble the star schema as is reasonable for our business uses. Speed in this DW is vital, and a number of tests have already proven the star schema approach to me. Our "person" table is key - it contains over 4 million records and will be the most frequently used source in queries. It can be seen at the center of a star with multiple dimensions (like age, gender, affiliation, location, etc.). It is a very LONG table, particularly when I join it to the address and contact information. However, it is more like a dimension table when we start looking at history. For example, there are two different history tables that have a person key pointing to the person table. One has over 20 million records and the other has almost 50 million and grows daily. Is this table a fact table or a dimension table? Can one work as both? If so, is that going to be a big performance problem? Is it common to query more off of a dimension than a fact? What happens if a DIFFERENT fact table that uses the person table as a dimension is actually only 60,000 records (much smaller.). I think my problem is that our data and use of it does not fit with the commonly use examples of star schemas. CLARIFICATION: Some good thoughts have been added below, but perhaps I left too much out to really explain well. Here's some more info: We handle a voter database. We don't have any measures except voter counts by various groups: voter counts by party, by age, by location; voter counts by ballot type and election, by ballot status and election, etc. We do have a "voting history" log as well as an activity audit log (change of address, party, etc.). We have information on which voters are election workers and all that related information. I figure I'll get to the peripheral stuff later. For now I'm focusing on our two major "business processes": voter registration(which IS a voter.) and election turnout. In the first, voter is a fact. In the second, voter is a dimension, along with party, election, and type of ballot. (and in case anyone is worried - no we don't know HOW people vote. Just that they do. LOL ) I hope that clarifies things a bit.

  • Flex/PHP/XML data issue

    - by reado
    I have built a simple application in Flex. When the application loads, a GET request is made to the xmlService.php file with parameters "fetchData=letters". This tells the PHP to return the XML code. In Flex Debug I can see the XML data being sent by the PHP to the flex client. What I need it to do is populate the first drop down box (id="letter") with this data, however nothing is being received by Flex. I added an Alert.show() to check what was being returned but when the application runs, the alert is blank. Can anyone help? Thanks in advance. Image: http://static.readescdn.com/misc/flex.gif // Flex <?xml version="1.0" encoding="utf-8"?> <s:WindowedApplication xmlns:fx="http://ns.adobe.com/mxml/2009" xmlns:s="library://ns.adobe.com/flex/spark" xmlns:mx="library://ns.adobe.com/flex/mx" width="300" height="300" creationComplete="windowedapplication1_creationCompleteHandler(event)"> <fx:Script> <![CDATA[ import mx.collections.ArrayCollection; import mx.controls.Alert; import mx.events.FlexEvent; import mx.rpc.events.FaultEvent; import mx.rpc.events.ResultEvent; import spark.events.IndexChangeEvent; protected function windowedapplication1_creationCompleteHandler(event:FlexEvent):void { var params:Object = {'fetchData':'letters'}; xmlService.send(params); } protected function xmlService_resultHandler(event:ResultEvent):void { var id:String = xmlService.lastResult.data.id.value; //Alert.show(xmlService.lastResult.data.id.value); if(id == 'letter') { letter.dataProvider = xmlService.lastResult.data.letter; letter.enabled = true; } else if(id == 'number') { number.dataProvider = xmlService.lastResult.data.number; number.enabled = true; submit.enabled = true; } else { submit.label = 'No Data!'; } } protected function xmlService_faultHandler(event:FaultEvent):void { Alert.show(event.fault.message); } protected function letter_changeHandler(event:IndexChangeEvent):void { var params:Object = {'fetchData':'numbers'}; xmlService.send(params); } ]]> </fx:Script> <fx:Declarations> <s:HTTPService id="xmlService" url="URL_GOES_HERE" method="POST" useProxy="false" resultFormat="e4x" result="xmlService_resultHandler(event)" fault="xmlService_faultHandler(event)"/> </fx:Declarations> <s:DropDownList x="94" y="10" id="letter" enabled="false" change="letter_changeHandler(event)" labelField="value"></s:DropDownList> <s:DropDownList x="94" y="39" id="number" enabled="false" labelField="value"></s:DropDownList> <s:Button x="115" y="68" label="Submit" id="submit" enabled="false"/> </s:WindowedApplication> // PHP <? if(isset($_POST['fetchData'])) { if($_POST['fetchData'] == 'letters') { $xml = '<data> <id value="letters"/> <letter label="Letter A" value="a"/> <letter label="Letter B" value="b"/> <letter label="Letter C" value="c"/> </data>'; } else if($_POST['fetchData'] == 'numbers') { $xml = '<data> <id value="letters"/> <number label="Number 1" value="1"/> <number label="Number 2" value="2"/> <number label="Number 3" value="3"/> </data>'; } else { $xml = '<data> <result value="'.$_POST['fetchData'].'"/> </data>'; } echo $xml; } else { echo '<data> <result value="NULL"/> </data>'; } ?>

  • SQL Server – SafePeak “Logon Trigger” Feature for Managing Data Access

    - by pinaldave
    Lately I received an interesting question about the abilities of SafePeak for SQL Server acceleration software: Q: “I would like to use SafePeak to make my CRM application faster. It is an application we bought from some vendor, after a while it became slow and we can’t reprogram it. SafePeak automated caching sounds like an easy and good solution for us. But, in my application there are many servers and different other applications services that address its main database, and some even change data, and I feel that there is a chance that some servers that during the connection process we may miss some. Is there a way to ensure that SafePeak will be aware of all connections to the SQL Server, so its cache will remain intact?” Interesting question, as I remember that SafePeak (http://www.safepeak.com/Product/SafePeak-Overview) likes that all traffic to the database will go thru it. I decided to check out the features of SafePeak latest version (2.1) and seek for an answer there. A: Indeed I found SafePeak has a feature they call “Logon Trigger” and is designed for that purpose. It is located in the user interface, under: Settings -> SQL instances management  ->  [your instance]  ->  [Logon Trigger] tab. From here you activate / deactivate it and control a white-list of enabled server IPs and Login names that SafePeak will ignore them. Click to Enlarge After activation of the “logon trigger” Safepeak server is notified by the SQL Server itself on each new opened connection. Safepeak monitors those connections and decides if there is something to do with them or not. On a typical installation SafePeak likes all application and users connections to go via SafePeak – this way it knows about data and schema updates immediately (real time). With activation of the safepeak “logon trigger”  a special CLR trigger is deployed on the SQL server and notifies Safepeak on any connection that has not arrived via SafePeak. In such cases Safepeak can act to clear and lock the cache or to ignore it. This feature enables to make sure SafePeak will be aware of all connections so SafePeak cache will maintain exactly correct all times. So even if a user, like a DBA will connect to the SQL Server not via SafePeak, SafePeak will know about it and take actions. The notification does not impact the work of that connection, the user or application still continue to do whatever they planned to do. Note: I found that activation of logon trigger in SafePeak requires that SafePeak SQL login will have the next permissions: 1) CONTROL SERVER; 2) VIEW SERVER STATE; 3) And the SQL Server instance is CLR enabled; Seeing SafePeak in action, I can say SafePeak brings fantastic resource for those who seek to get performance for SQL Server critical apps. SafePeak promises to accelerate SQL Server applications in just several hours of installation, automatic learning and some optimization configuration (no code changes!!!). If better application and database performance means better business to you – I suggest you to download and try SafePeak. The solution of SafePeak is indeed unique, and the questions I receive are very interesting. Have any more questions on SafePeak? Please leave your question as a comment and I will try to get an answer for you. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

  • Fast Data: Go Big. Go Fast.

    - by Dain C. Hansen
    For those of you who may have missed it, today's second full day of Oracle OpenWorld 2012 started with a rumpus. Joe Tucci from EMC outlined the human face of big data with real examples of how big data is transforming our world. And no, not the usual tried-and-true weblog examples, but real stories about taxi cab drivers in Singapore using big data to better optimize their routes, as well as folks just trying to get a better haircut. Next we heard from Thomas Kurian, who talked at length about the important platform characteristics of Oracle's Cloud and, more specifically, Oracle's expanded Cloud Services portfolio. Especially interesting to our integration customers is the messaging support for Oracle's Cloud applications. What this means is that Oracle's Cloud applications now have a lightweight integration fabric that on-premise applications can communicate with via REST APIs using Oracle SOA Suite. It's an important element of our strategy at Oracle, supporting the idea that whether your requirements are for private or public, Oracle has a solution in the Cloud for all of your applications, and we give you more deployment choice than any vendor. If this wasn't enough to get the juices flowing, later that morning we heard from Hasan Rizvi, who outlined in his Fusion Middleware session the four most important enterprise imperatives: Social, Mobile, Cloud, and a brand new one: Fast Data. Today, Rizvi made an important step in the definition of this term, explaining that he believes it's a convergence of four essential technology elements: Event Processing for event filtering and business rules, with Oracle Event Processing; Data Transformation and Loading, with Oracle Data Integrator; Real-time replication and integration, with Oracle GoldenGate; and Analytics and data discovery, with Oracle Business Intelligence. Each of these four elements can be considered (and architected) together on a single integrated platform that can help customers integrate any type of data (structured, semi-structured), leveraging new styles of big data technologies (MapReduce, HDFS, Hive, NoSQL) to process more volume and variety of data at a faster velocity with greater results. Fast data processing (and especially real-time) has always been our credo at Oracle with each one of these products in Fusion Middleware. For example, Oracle GoldenGate continues to be made even faster with the recent 11g R2 release of Oracle GoldenGate, which gives us even greater optimization for Oracle Database with Integrated Capture, as well as some new heterogeneity capabilities. With Oracle Data Integrator with Big Data Connectors, we're seeing much improved performance by running MapReduce transformations natively on Hadoop systems. And with Oracle Event Processing we're seeing some remarkable performance with customers like NTT Docomo. Check out their upcoming session at Oracle OpenWorld on Wednesday to hear more about how this customer is using Event Processing and Big Data together. If you missed any of these sessions and keynotes, not to worry: there are on-demand versions available on the Oracle OpenWorld website. You can also check out our upcoming webcast, where we will outline some of these new breakthroughs in Data Integration technologies for Big Data, Cloud, and Real-time in more detail.