Search Results

Search found 74550 results on 2982 pages for 'wcf data service'.


  • What processes would make the selling of a hard drive that previously held sensitive data justifiable? [closed]

    - by user12583188
    Possible Duplicate: Securely erasing all data from a hard drive
    In my personal collection are an increasing number of relatively new drives, only put on the shelf due to upgrades; in the past I have never sold hard drives with used machines for fear of having the encrypted password databases that have been stored on them compromised, but as their numbers increase I find myself more tempted to do so (due to the $$$ I know they're worth on the used market). What tools then exist to make the recovery of data from said drives difficult to the extent that selling them could be justified? Another way of saying this would be: what tools/methods exist for making attempts at recovery of any data previously stored on a certain drive impractical? I assume that it is always possible to recover data from a drive that is in working order. I also assume there are some methods for preventing recovery, such as a program called DBAN, and a particular feature in Mac OS X that deals with permanently deleting data from a disk.
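
    As a rough sketch of the kind of tools in question (assuming a Linux machine and that the drive shows up as /dev/sdX — adjust the device name), a single overwrite pass with either of the following is generally considered enough to make software-based recovery impractical on a working drive; DBAN does essentially the same thing from its own boot media:

        # overwrite the whole device with zeros (dd ships with coreutils)
        dd if=/dev/zero of=/dev/sdX bs=1M

        # or overwrite with random data and finish with a zeroing pass (GNU shred)
        shred -v -n 1 -z /dev/sdX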

    Read the article

  • Silverlight 4 Tools, WCF RIA Services and Themes Released

    This morning we published the final release of the Silverlight 4 Tools for Visual Studio and WCF RIA Services. In April, when Silverlight 4 was released, the tools were still in RC status. Today they are no longer in RC and are officially released. There is no new update to Silverlight itself, but these tools are the final bits of this version. Get the Tools: If you have a clean machine you can get everything you need using the Web Platform Installer by clicking on the link at the Silverlight community... Did you know that DotNetSlackers also publishes .NET articles written by well-known .NET authors? We already have over 80 articles in several categories including Silverlight. Take a look here.

    Read the article

  • Associate a texture with an object (from a data-model, not a graphical, point of view)

    - by Raveline
    I'm writing a roguelike where objects and the floor can be made of different materials. For instance, let's say we can have a wooden chair, an iron chair, a golden chair, and so on. I've got an Object class (I know, the name is terrible), which is more or less using a composite pattern, and a Material class. Materials have different important properties (noise, color...). For the time being, there are 5 different instances of materials, created at the initialization of the game. How would you connect an instance of Object with one of the 5 instances of materials? I see three simple solutions: 1) Using a pointer. Simple and brutal. 2) Using an integer material-id, then getting the material out of a table when the engine manipulates the object for various purposes (display, attack analysis, etc.). Not very beautiful, I think, and not very flexible. 3) Using an integer material-id, then getting the material out of a std::map. A bit more flexible, but still not perfect. Do you see other possibilities? If not, what would you choose (and why)? Thanks in advance!
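
    For reference, a minimal C++ sketch of the pointer option (the names and property values here are invented, not taken from the question): keep the five materials in a registry created at initialisation and store a non-owning pointer-to-const in each object, which avoids both the lookup table and any ownership concerns since the registry outlives every object.

        #include <string>
        #include <vector>

        struct Material {
            std::string name;
            int noise;   // colour, etc. would go here too
        };

        // the five materials, created once at game initialisation (values invented)
        static const std::vector<Material> kMaterials = {
            {"wood", 3}, {"iron", 7}, {"gold", 5}
        };

        class Object {
        public:
            explicit Object(const Material* m) : material_(m) {}
            const Material& material() const { return *material_; }
        private:
            const Material* material_;   // non-owning: the registry outlives every Object
        };

        // usage: Object chair(&kMaterials[1]);   // an iron chair

    The id-plus-map variants mainly earn their keep if materials are loaded from data files or referenced from saved games, where a stable id is easier to serialise than a pointer.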

    Read the article

  • Populate a UIPickerView with results from a Core Data DB using an NSArray

    - by Chris
    I am trying to populate a UIPickerView with the results of a NSFetchRequest. The results of the NSFetchRequest are then stored in an NSArray. I am using Core Data to interact with the SQLite DB. I have a simple class file that contains a UIPickerView that is associated with a storyboard scene in my project. The header file for the class looks like the following, ViewControllerUsers.h #import <UIKit/UIKit.h> #import "AppDelegate.h" @interface ViewControllerUsers : UIViewController <NSFetchedResultsControllerDelegate, UIPickerViewDelegate, UIPickerViewDataSource> { NSArray *dictionaries; } @property (nonatomic, strong) NSFetchedResultsController *fetchedResultsController; // Core Data @property (strong, nonatomic) NSManagedObjectContext *managedObjectContext; @property (nonatomic, strong) NSArray *users; @property (strong, nonatomic) IBOutlet UIPickerView *uiPickerViewUsers; @property (weak, nonatomic) IBOutlet UIBarButtonItem *btnDone; @property (weak, nonatomic) IBOutlet UIButton *btnChangePin; // added for testing purposes @property (nonatomic, strong) NSArray *usernames; - (IBAction)dismissScene:(id)sender; - (IBAction)changePin:(id)sender; @end The implementation file looks like the following, ViewControllerUsers.m #import "ViewControllerUsers.h" @interface ViewControllerUsers () @end @implementation ViewControllerUsers // Core Data @synthesize managedObjectContext = _managedObjectContext; @synthesize uiPickerViewUsers = _uiPickerViewUsers; @synthesize usernames = _usernames; - (id)initWithNibName:(NSString *)nibNameOrNil bundle:(NSBundle *)nibBundleOrNil { self = [super initWithNibName:nibNameOrNil bundle:nibBundleOrNil]; if (self) { // Custom initialization } return self; } - (void)viewDidLoad { [super viewDidLoad]; // Do any additional setup after loading the view. // Core Data if (_managedObjectContext == nil) { _managedObjectContext = [(AppDelegate *)[[UIApplication sharedApplication] delegate] managedObjectContext]; NSLog(@"After _managedObjectContext: %@", _managedObjectContext); } NSFetchRequest *request = [NSFetchRequest fetchRequestWithEntityName:@"Account"]; NSEntityDescription *entity = [NSEntityDescription entityForName:@"Account" inManagedObjectContext:_managedObjectContext]; request.resultType = NSDictionaryResultType; request.propertiesToFetch = [NSArray arrayWithObject:[[entity propertiesByName] objectForKey:@"username"]]; request.returnsDistinctResults = YES; _usernames = [_managedObjectContext executeFetchRequest:request error:nil]; NSLog (@"names: %@",_usernames); } -(NSInteger)numberOfComponentsInPickerView:(UIPickerView *)pickerView { //One column return 1; } -(NSInteger)pickerView:(UIPickerView *)pickerView numberOfRowsInComponent:(NSInteger)component { //set number of rows return _usernames.count; } -(NSString *)pickerView:(UIPickerView *)pickerView titleForRow:(NSInteger)row forComponent:(NSInteger)component { //set item per row return [_usernames objectAtIndex:row]; } - (void)didReceiveMemoryWarning { [super didReceiveMemoryWarning]; // Dispose of any resources that can be recreated. } - (void)viewDidUnload { [self setBtnDone:nil]; [self setUiPickerViewUsers:nil]; [self setBtnChangePin:nil]; [super viewDidUnload]; } - (IBAction)dismissScene:(id)sender { [self dismissModalViewControllerAnimated:YES]; } - (IBAction)changePin:(id)sender { } @end The current code is causing the app to crash, but the NSLog is show the results of the NSFetchRequest in the NSArray. 
I currently think that I am not formatting the results of the NSFetchRequest in the NSArray properly if I had to take a guess. The crash log looks like the following, 2013-06-26 16:49:24.219 KegCop[41233:c07] names: ( { username = blah; }, { username = chris; }, { username = root; } ) 2013-06-26 16:49:24.223 KegCop[41233:c07] -[NSKnownKeysDictionary1 isEqualToString:]: unrecognized selector sent to instance 0xe54d9a0 2013-06-26 16:49:24.223 KegCop[41233:c07] Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[NSKnownKeysDictionary1 isEqualToString:]: unrecognized selector sent to instance 0xe54d9a0' First throw call stack:
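
    The exception is the giveaway: because the fetch request uses NSDictionaryResultType, _usernames holds dictionaries rather than strings, so titleForRow:forComponent: hands UIPickerView a dictionary and the picker then sends isEqualToString: to it. Returning the username value from each dictionary should fix the crash — roughly:

        -(NSString *)pickerView:(UIPickerView *)pickerView titleForRow:(NSInteger)row forComponent:(NSInteger)component
        {
            // each element of _usernames is a dictionary such as { username = chris; }
            return [[_usernames objectAtIndex:row] objectForKey:@"username"];
        }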

    Read the article

  • R: concatenating data frame rows into one column

    - by user1631503
    I have a data frame like this >X_com Day_1 Day_2 Day_3 Day_4 Day_5 Day_6 Day_7 Day_8 Day_9 Day_10 1 0 0 0 0 0 0 0 0 0 1 2 0 0 0 0 0 0 0 0 0 0 3 0 0 0 0 0 0 0 0 0 0 4 0 0 0 0 0 0 0 0 0 0 5 0 0 0 0 0 0 0 0 0 0 6 0 0 0 0 0 0 0 0 0 0 7 0 0 0 0 0 0 0 0 0 0 8 0 0 0 0 0 0 0 0 0 0 I need to concentrate all the values into one column and add another column with "1;" So I did this > X_new=matrix(1,8,2) > X_new[1,]=paste(X_com[1,1], X_com[1,2],X_com[1,3],X_com [1,4],X_com[1,5],X_com[1,6],X_com[1,7],X_com[1,8],X_com [1,9],X_com[1,10], sep="") > X_new[2,]=paste(X_com[2,1], X_com[2,2],X_com[2,3],X_com [2,4],X_com[2,5],X_com[2,6],X_com[2,7],X_com[2,8],X_com [2,9],X_com[2,10], sep="") > X_new[3,]=paste(X_com[3,1], X_com[3,2],X_com[3,3],X_com [3,4],X_com[3,5],X_com[3,6],X_com[3,7],X_com[3,8],X_com [3,9],X_com[3,10], sep="") > X_new[4,]=paste(X_com[4,1], X_com[4,2],X_com[4,3],X_com [4,4],X_com[4,5],X_com[4,6],X_com[4,7],X_com[4,8],X_com [4,9],X_com[4,10], sep="") > X_new[5,]=paste(X_com[5,1], X_com[5,2],X_com[5,3],X_com [5,4],X_com[5,5],X_com[5,6],X_com[5,7],X_com[5,8],X_com [5,9],X_com[5,10], sep="") > X_new[6,]=paste(X_com[6,1], X_com[6,2],X_com[6,3],X_com [6,4],X_com[6,5],X_com[6,6],X_com[6,7],X_com[6,8],X_com [6,9],X_com[6,10], sep="") > X_new[7,]=paste(X_com[7,1], X_com[7,2],X_com[7,3],X_com [7,4],X_com[7,5],X_com[7,6],X_com[7,7],X_com[7,8],X_com [7,9],X_com[7,10], sep="") > X_new[8,]=paste(X_com[8,1], X_com[8,2],X_com[8,3],X_com [8,4],X_com[8,5],X_com[8,6],X_com[8,7],X_com[8,8],X_com [8,9],X_com[8,10], sep="") > X_new[1:8,2]="1;" > as.data.frame(X_new) V1 V2 1 0000000001 1; 2 0000000000 1; 3 0000000000 1; 4 0000000000 1; 5 0000000000 1; 6 0000000000 1; 7 0000000000 1; 8 0000000000 1; I believe there's definitely a faster way of achieving this but have no clue. The other problem is, I have over a thousand of data frame like this needs to be concentrated. I'm still learning how to loop these repetitive steps but is progressing quite slowly. If the original data frames were named uniquely, does that mean I have no choice but to work on each of them individually? Thank you in advance.
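
    For what it's worth, the row-by-row paste calls can be replaced by a single apply call, which also makes the operation easy to wrap in a function and run over a list of data frames (a sketch only, using the column layout shown above):

        collapse_rows <- function(df) {
          data.frame(V1 = apply(df, 1, paste, collapse = ""),
                     V2 = "1;",
                     stringsAsFactors = FALSE)
        }

        X_new <- collapse_rows(X_com)

        # with many data frames, keep them in a list rather than separate variables:
        # results <- lapply(list_of_dfs, collapse_rows)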

    Read the article

  • Is RTD Stateless or Stateful?

    - by [email protected]
    Yes.   A stateless service is one where each request is an independent transaction that can be processed by any of the servers in a cluster.  A stateful service is one where state is kept in a server's memory from transaction to transaction, thus necessitating the proper routing of requests to the right server. The main advantage of stateless systems is simplicity of design. The main advantage of stateful systems is performance. I'm often asked whether RTD is a stateless or stateful service, so I wanted to clarify this issue in depth so that RTD's architecture will be properly understood. The short answer is: "RTD can be configured as a stateless or stateful service." The performance difference between stateless and stateful systems can be very significant, and while in a call center implementation it may be reasonable to use a pure stateless configuration, a web implementation that produces thousands of requests per second is practically impossible with a stateless configuration. RTD's performance is orders of magnitude better than most competing systems. RTD was architected from the ground up to achieve this performance. Features like automatic and dynamic compression of prediction models, automatic translation of metadata to machine code, lack of interpreted languages, and separation of model building from decisioning contribute to achieving this performance level. Because  of this focus on performance we decided to have RTD's default configuration work in a stateful manner. By being stateful RTD requests are typically handled in a few milliseconds when repeated requests come to the same session. Now, those readers that have participated in implementations of RTD know that RTD's architecture is also focused on reducing Total Cost of Ownership (TCO) with features like automatic model building, automatic time windows, automatic maintenance of database tables, automatic evaluation of data mining models, automatic management of models partitioned by channel, geography, etcetera, and hot swapping of configurations. How do you reconcile the need for a low TCO and the need for performance? How do you get the performance of a stateful system with the simplicity of a stateless system? The answer is that you make the system behave like a stateless system to the exterior, but you let it automatically take advantage of situations where being stateful is better. For example, one of the advantages of stateless systems is that you can route a message to any server in a cluster, without worrying about sending it to the same server that was handling the session in previous messages. With an RTD stateful configuration you can still route the message to any server in the cluster, so from the point of view of the configuration of other systems, it is the same as a stateless service. The difference though comes in performance, because if the message arrives to the right server, RTD can serve it without any external access to the session's state, thus tremendously reducing processing time. In typical implementations it is not rare to have high percentages of messages routed directly to the right server, while those that are not, are easily handled by forwarding the messages to the right server. This architecture usually provides the best of both worlds with performance and simplicity of configuration.   
Configuring RTD as a pure stateless service A pure stateless configuration requires session data to be persisted at the end of handling each and every message and reloading that data at the beginning of handling any new message. This is of course, the root of the inefficiency of these configurations. This is also the reason why many "stateless" implementations actually do keep state to take advantage of a request coming back to the same server. Nevertheless, if the implementation requires a pure stateless decision service, this is easy to configure in RTD. The way to do it is: Mark every Integration Point to Close the session at the end of processing the message In the Session entity persist the session data on closing the session In the session entity check if a persisted version exists and load it An excellent solution for persisting the session data is Oracle Coherence, which provides a high performance, distributed cache that minimizes the performance impact of persisting and reloading the session. Alternatively, the session can be persisted to a local database. An interesting feature of the RTD stateless configuration is that it can cope with serializing concurrent requests for the same session. For example, if a web page produces two requests to the decision service, these requests could come concurrently to the decision services and be handled by different servers. Most stateless implementation would have the two requests step onto each other when saving the state, or fail one of the messages. When properly configured, RTD will make one message wait for the other before processing.   A Word on Context Using the context of a customer interaction typically significantly increases lift. For example, offer success in a call center could double if the context of the call is taken into account. For this reason, it is important to utilize the contextual information in decision making. To make the contextual information available throughout a session it needs to be persisted. When there is a well defined owner for the information then there is no problem because in case of a session restart, the information can be easily retrieved. If there is no official owner of the information, then RTD can be configured to persist this information.   Once again, RTD provides flexibility to ensure high performance when it is adequate to allow for some loss of state in the rare cases of server failure. For example, in a heavy use web site that serves 1000 pages per second the navigation history may be stored in the in memory session. In such sites it is typical that there is no OLTP that stores all the navigation events, therefore if an RTD server were to fail, it would be possible for the navigation to that point to be lost (note that a new session would be immediately established in one of the other servers). In most cases the loss of this navigation information would be acceptable as it would happen rarely. If it is desired to save this information, RTD would persist it every time the visitor navigates to a new page. Note that this practice is preferred whether RTD is configured in a stateless or stateful manner.  

    Read the article

  • Injecting the mailer service, got "The service definition 'mailer' does not exist"?

    - by Gremo
    This is the class where the service mailer should be injected: namespace Gremo\SkebbyBundle\Transport; use Swift_Mailer; use Gremo\SkebbyBundle\Message\AbstractSkebbyMessage; class MailerTransport extends AbstractSkebbyTransport { /** * @var \Swift_Mailer */ private $mailer; public function __construct(Swift_Mailer $mailer) { $this->mailer = $mailer; } /** * @param \Gremo\SkebbyBundle\Message\AbstractSkebbyMessage $message * @return void */ public function executeTransport(AbstractSkebbyMessage $message) { /* ... */ } } Service id is gremo_skebby.transport.mailer, placed inside mailer.xml file: <parameters> <parameter key="gremo_skebby.converter.swift_message.class"> Gremo\SkebbyBundle\Converter\SwiftMessageConverter </parameter> <parameter key="gremo_skebby.transport.mailer.class"> Gremo\SkebbyBundle\Transport\MailerTransport </parameter> </parameters> <services> <service id="gremo_skebby.converter.swift_message" class="%gremo_skebby.converter.swift_message.class%" public="false" /> <service id="gremo_skebby.transport.mailer" class="%gremo_skebby.transport.mailer.class%" public="false" parent="gremo_skebby.tranport.abstract_transport"> <argument type="service" id="mailer" /> </service> </services> When i try to inject the gremo_skebby.transport.mailer into another service (an helper) i get: Symfony\Component\DependencyInjection\Exception\InvalidArgumentException : The service definition "mailer" does not exist. That is strange, because command php app/console container:debug shows that the mailer service actually exists: mailer container Swift_Mailer I'm loading mailer.xml file dynamically by the extension: if(in_array($configs['transport'], array('http', 'rest', 'mailer'))) { $loader->load('transport.xml'); $loader->load($configs['transport'] . '.xml'); $container->setAlias('gremo_skebby.transport', 'gremo_skebby.transport.' . $configs['transport']); } $loader->load('skebby.xml'); ... and gremo_skebby.transport service is injected into gremo_skebby.skebby service (skebby.xml): <parameters> <parameter key="gremo_skebby.skebby.class"> Gremo\SkebbyBundle\Skebby </parameter> </parameters> <services> <service id="gremo_skebby.skebby" class="%gremo_skebby.skebby.class%"> <argument type="service" id="gremo_skebby.transport" /> </service> </services> A quick test is giving me the same InvalidArgumentException: public function testSkebbyWithMailerTransport() { $loader = new GremoSkebbyExtension(); $container = new ContainerBuilder(); $config = $this->getEmptyConfiguration(); $loader->load(array($config), $container); $this->assertTrue($container->hasDefinition('gremo_skebby.transport.mailer')); $this->assertTrue($container->hasDefinition('gremo_skebby.skebby')); $this->assertInstanceOf('Gremo\SkebbyBundle\Skebby', $container->get('gremo_skebby.skebby')); }

    Read the article

  • SQL Monitor’s data repository: Alerts

    - by Chris Lambrou
    In my previous post, I introduced the SQL Monitor data repository, and described how the monitored objects are stored in a hierarchy in the data schema, in a series of tables with a _Keys suffix. In this post I had planned to describe how the actual data for the monitored objects is stored in corresponding tables with _StableSamples and _UnstableSamples suffixes. However, I’m going to postpone that until my next post, as I’ve had a request from a SQL Monitor user to explain how alerts are stored. In the SQL Monitor data repository, alerts are stored in tables belonging to the alert schema, which contains the following five tables: alert.Alert alert.Alert_Cleared alert.Alert_Comment alert.Alert_Severity alert.Alert_Type In this post, I’m only going to cover the alert.Alert and alert.Alert_Type tables. I may cover the other three tables in a later post. The most important table in this schema is alert.Alert, as each row in this table corresponds to a single alert. So let’s have a look at it. SELECT TOP 100 AlertId, AlertType, TargetObject, [Read], SubType FROM alert.Alert ORDER BY AlertId DESC;  AlertIdAlertTypeTargetObjectReadSubType 165550397:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,9:SqlServer,1,4:Name,s0:,10 265549387:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,7:Machine,1,4:Name,s0:,10 365548187:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 465547157:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 565546147:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 665545187:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 765544157:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 865543147:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 965542187:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb,00 1065541147:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb,00 11…     So what are we seeing here, then? Well, AlertId is an auto-incrementing identity column, so ORDER BY AlertId DESC ensures that we see the most recent alerts first. AlertType indicates the type of each alert, such as Job failed (6), Backup overdue (14) or Long-running query (12). The TargetObject column indicates which monitored object the alert is associated with. The Read column acts as a flag to indicate whether or not the alert has been read. And finally the SubType column is used in the case of a Custom metric (40) alert, to indicate which custom metric the alert pertains to. Okay, now lets look at some of those columns in more detail. The AlertType column is an easy one to start with, and it brings use nicely to the next table, data.Alert_Type. Let’s have a look at what’s in this table: SELECT AlertType, Event, Monitoring, Name, Description FROM alert.Alert_Type ORDER BY AlertType;  AlertTypeEventMonitoringNameDescription 1100Processor utilizationProcessor utilization (CPU) on a host machine stays above a threshold percentage for longer than a specified duration 2210SQL Server error log entryAn error is written to the SQL Server error log with a severity level above a specified value. 3310Cluster failoverThe active cluster node fails, causing the SQL Server instance to switch nodes. 4410DeadlockSQL deadlock occurs. 
5500Processor under-utilizationProcessor utilization (CPU) on a host machine remains below a threshold percentage for longer than a specified duration 6610Job failedA job does not complete successfully (the job returns an error code). 7700Machine unreachableHost machine (Windows server) cannot be contacted on the network. 8800SQL Server instance unreachableThe SQL Server instance is not running or cannot be contacted on the network. 9900Disk spaceDisk space used on a logical disk drive is above a defined threshold for longer than a specified duration. 101000Physical memoryPhysical memory (RAM) used on the host machine stays above a threshold percentage for longer than a specified duration. 111100Blocked processSQL process is blocked for longer than a specified duration. 121200Long-running queryA SQL query runs for longer than a specified duration. 131400Backup overdueNo full backup exists, or the last full backup is older than a specified time. 141500Log backup overdueNo log backup exists, or the last log backup is older than a specified time. 151600Database unavailableDatabase changes from Online to any other state. 161700Page verificationTorn Page Detection or Page Checksum is not enabled for a database. 171800Integrity check overdueNo entry for an integrity check (DBCC DBINFO returns no date for dbi_dbccLastKnownGood field), or the last check is older than a specified time. 181900Fragmented indexesFragmentation level of one or more indexes is above a threshold percentage. 192400Job duration unusualThe duration of a SQL job duration deviates from its baseline duration by more than a threshold percentage. 202501Clock skewSystem clock time on the Base Monitor computer differs from the system clock time on a monitored SQL Server host machine by a specified number of seconds. 212700SQL Server Agent Service statusThe SQL Server Agent Service status matches the status specified. 222800SQL Server Reporting Service statusThe SQL Server Reporting Service status matches the status specified. 232900SQL Server Full Text Search Service statusThe SQL Server Full Text Search Service status matches the status specified. 243000SQL Server Analysis Service statusThe SQL Server Analysis Service status matches the status specified. 253100SQL Server Integration Service statusThe SQL Server Integration Service status matches the status specified. 263300SQL Server Browser Service statusThe SQL Server Browser Service status matches the status specified. 273400SQL Server VSS Writer Service statusThe SQL Server VSS Writer status matches the status specified. 283501Deadlock trace flag disabledThe monitored SQL Server’s trace flag cannot be enabled. 293600Monitoring stopped (host machine credentials)SQL Monitor cannot contact the host machine because authentication failed. 303700Monitoring stopped (SQL Server credentials)SQL Monitor cannot contact the SQL Server instance because authentication failed. 313800Monitoring error (host machine data collection)SQL Monitor cannot collect data from the host machine. 323900Monitoring error (SQL Server data collection)SQL Monitor cannot collect data from the SQL Server instance. 334000Custom metricThe custom metric value has passed an alert threshold. 344100Custom metric collection errorSQL Monitor cannot collect custom metric data from the target object. 
Basically, alert.Alert_Type is just a big reference table containing information about the 34 different alert types supported by SQL Monitor (note that the largest id is 41, not 34 – some alert types have been retired since SQL Monitor was first developed). The Name and Description columns are self evident, and I’m going to skip over the Event and Monitoring columns as they’re not very interesting. The AlertId column is the primary key, and is referenced by AlertId in the alert.Alert table. As such, we can rewrite our earlier query to join these two tables, in order to provide a more readable view of the alerts: SELECT TOP 100 AlertId, Name, TargetObject, [Read], SubType FROM alert.Alert a JOIN alert.Alert_Type at ON a.AlertType = at.AlertType ORDER BY AlertId DESC;  AlertIdNameTargetObjectReadSubType 165550Monitoring error (SQL Server data collection)7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,9:SqlServer,1,4:Name,s0:,00 265549Monitoring error (host machine data collection)7:Cluster,1,4:Name,s29:srp-mr03.testnet.red-gate.com,7:Machine,1,4:Name,s0:,00 365548Integrity check overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 465547Log backup overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 565546Backup overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s15:FavouriteThings,00 665545Integrity check overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 765544Log backup overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 865543Backup overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,00 965542Integrity check overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb,00 1065541Backup overdue7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s4:msdb,00 Okay, the next column to discuss in the alert.Alert table is TargetObject. Oh boy, this one’s a bit tricky! The TargetObject of an alert is a serialized string representation of the position in the monitored object hierarchy of the object to which the alert pertains. The serialization format is somewhat convenient for parsing in the C# source code of SQL Monitor, and has some helpful characteristics, but it’s probably very awkward to manipulate in T-SQL. I could document the serialization format here, but it would be very dry reading, so perhaps it’s best to consider an example from the table above. Have a look at the alert with an AlertID of 65543. It’s a Backup overdue alert for the SqlMonitorData database running on the default instance of granger, my laptop. Each different alert type is associated with a specific type of monitored object in the object hierarchy (I described the hierarchy in my previous post). The Backup overdue alert is associated with databases, whose position in the object hierarchy is root → Cluster → SqlServer → Database. The TargetObject value identifies the target object by specifying the key properties at each level in the hierarchy, thus: Cluster: Name = "granger" SqlServer: Name = "" (an empty string, denoting the default instance) Database: Name = "SqlMonitorData" Well, look at the actual TargetObject value for this alert: "7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s14:SqlMonitorData,". 
It is indeed composed of three parts, one for each level in the hierarchy: Cluster: "7:Cluster,1,4:Name,s7:granger," SqlServer: "9:SqlServer,1,4:Name,s0:," Database: "8:Database,1,4:Name,s14:SqlMonitorData," Each part is handled in exactly the same way, so let’s concentrate on the first part, "7:Cluster,1,4:Name,s7:granger,". It comprises the following: "7:Cluster," – This identifies the level in the hierarchy. "1," – This indicates how many different key properties there are to uniquely identify a cluster (we saw in my last post that each cluster is identified by a single property, its Name). "4:Name,s14:SqlMonitorData," – This represents the Name property, and its corresponding value, SqlMonitorData. It’s split up like this: "4:Name," – Indicates the name of the key property. "s" – Indicates the type of the key property, in this case, it’s a string. "14:SqlMonitorData," – Indicates the value of the property. At this point, you might be wondering about the format of some of these strings. Why is the string "Cluster" stored as "7:Cluster,"? Well an encoding scheme is used, which consists of the following: "7" – This is the length of the string "Cluster" ":" – This is a delimiter between the length of the string and the actual string’s contents. "Cluster" – This is the string itself. 7 characters. "," – This is a final terminating character that indicates the end of the encoded string. You can see that "4:Name,", "8:Database," and "14:SqlMonitorData," also conform to the same encoding scheme. In the example above, the "s" character is used to indicate that the value of the Name property is a string. If you explore the TargetObject property of alerts in your own SQL Monitor data repository, you might find other characters used for other non-string key property values. The different value types you might possibly encounter are as follows: "I" – Denotes a bigint value. For example, "I65432,". "g" – Denotes a GUID value. For example, "g32116732-63ae-4ab5-bd34-7dfdfb084c18,". "d" – Denotes a datetime value. For example, "d634815384796832438,". The value is stored as a bigint, rather than a native SQL datetime value. I’ll describe how datetime values are handled in the SQL Monitor data repostory in a future post. I suggest you have a look at the alerts in your own SQL Monitor data repository for further examples, so you can see how the TargetObject values are composed for each of the different types of alert. Let me give one further example, though, that represents a Custom metric alert, as this will help in describing the final column of interest in the alert.Alert table, SubType. Let me show you the alert I’m interested in: SELECT AlertId, a.AlertType, Name, TargetObject, [Read], SubType FROM alert.Alert a JOIN alert.Alert_Type at ON a.AlertType = at.AlertType WHERE AlertId = 65769;  AlertIdAlertTypeNameTargetObjectReadSubType 16576940Custom metric7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s6:master,12:CustomMetric,1,8:MetricId,I2,02 An AlertType value of 40 corresponds to the Custom metric alert type. The Name taken from the alert.Alert_Type table is simply Custom metric, but this doesn’t tell us anything about the specific custom metric that this alert pertains to. That’s where the SubType value comes in. For custom metric alerts, this provides us with the Id of the specific custom alert definition that can be found in the settings.CustomAlertDefinitions table. 
I don’t really want to delve into custom alert definitions yet (maybe in a later post), but an extra join in the previous query shows us that this alert pertains to the CPU pressure (avg runnable task count) custom metric alert. SELECT AlertId, a.AlertType, at.Name, cad.Name AS CustomAlertName, TargetObject, [Read], SubType FROM alert.Alert a JOIN alert.Alert_Type at ON a.AlertType = at.AlertType JOIN settings.CustomAlertDefinitions cad ON a.SubType = cad.Id WHERE AlertId = 65769;  AlertIdAlertTypeNameCustomAlertNameTargetObjectReadSubType 16576940Custom metricCPU pressure (avg runnable task count)7:Cluster,1,4:Name,s7:granger,9:SqlServer,1,4:Name,s0:,8:Database,1,4:Name,s6:master,12:CustomMetric,1,8:MetricId,I2,02 The TargetObject value in this case breaks down like this: "7:Cluster,1,4:Name,s7:granger," – Cluster named "granger". "9:SqlServer,1,4:Name,s0:," – SqlServer named "" (the default instance). "8:Database,1,4:Name,s6:master," – Database named "master". "12:CustomMetric,1,8:MetricId,I2," – Custom metric with an Id of 2. Note that the hierarchy for a custom metric is slightly different compared to the earlier Backup overdue alert. It’s root → Cluster → SqlServer → Database → CustomMetric. Also notice that, unlike Cluster, SqlServer and Database, the key property for CustomMetric is called MetricId (not Name), and the value is a bigint (not a string). Finally, delving into the custom metric tables is beyond the scope of this post, but for the sake of avoiding any future confusion, I’d like to point out that whilst the SubType references a custom alert definition, the MetricID value embedded in the TargetObject value references a custom metric definition. Although in this case both the custom metric definition and custom alert definition share the same Id value of 2, this is not generally the case. Okay, that’s enough for now, not least because as I’m typing this, it’s almost 2am, I have to go to work tomorrow, and my alarm is set for 6am – eek! In my next post, I’ll either cover the remaining three tables in the alert schema, or I’ll delve into the way SQL Monitor stores its monitoring data, as I’d originally planned to cover in this post.

    Read the article

  • FOUR questions to ask if you are implementing DATABASE-AS-A-SERVICE

    - by Sudip Datta
    During my ongoing tenure at Oracle, I have met all types of DBAs. Happy DBAs, unhappy DBAs, proud DBAs, risk-loving DBAs, cautious DBAs. These days, as Database-as-a-Service (DBaaS) becomes more mainstream, I find some complacent DBAs who are basking in their achievement of having implemented DBaaS. Some others, however, are not that happy. They grudgingly complain that they did not have much of a say in the implementation, they simply had to follow what their cloud architects (mostly infrastructure admins) offered them. In most cases it would be a database wrapped inside a VM that would be labeled as “Database as a Service”. In other cases, it would be existing brute-force automation simply exposed in a portal. As much as I think that there is more to DBaaS than those approaches and often get tempted to propose Enterprise Manager 12c, I try to be objective. Neither do I want to dampen the spirit of the happy ones, nor do I want to stoke the pain of the unhappy ones. As I mentioned in my previous post, I don’t deny vanilla automation could be useful. I like virtualization too for what it has helped us accomplish in terms of resource management, but we need to scrutinize its merit on a case-by-case basis and apply it meaningfully. For DBAs who either claim to have implemented DBaaS or are planning to do so, I simply want to provide four key questions to ponder about: 1. Does it make life easier for your end users? Database-as-a-Service can have several types of end users. Junior DBAs, QA Engineers, Developers- each having their own skillset. The objective of DBaaS is to make their life simple, so that they can focus on their core responsibilities without having to worry about additional stuff. For example, if you are a Developer using Oracle Application Express (APEX), you want to deal with schema, objects and PL/SQL code and not with datafiles or listener configuration. If you are a QA Engineer needing database copies for functional testing, you do not want to deal with underlying operating system patching and compliance issues. The question to ask, therefore, is, whether DBaaS makes life easier for those users. It is often convenient to give them VM shells to deal with a la Amazon EC2 IaaS, but is that what they really want? Is it a productive use of a developer's time if he needs to apply RPM errata to his Linux operating system. Asking him to keep the underlying operating system current is like making a guest responsible for a restaurant's decor. 2. Does it make life easier for your administrators? Cloud, in general, is supposed to free administrators from attending to mundane tasks like provisioning services for every single end user request. It is supposed to enable a readily consumable platform and enforce standardization in the process. For example, if a Service Catalog exposes DBaaS of specific database versions and configurations, it, by its very nature, enforces certain discipline and standardization within the IT environment. What if, instead of specific database configurations, cloud allowed each end user to create databases of their liking resulting in hundreds of version and patch levels and thousands of individual databases. Therefore the right question to ask is whether the unwanted consequence of DBaaS is OS and database sprawl. And if so, who is responsible for tracking them, backing them up, administering them? Studies have shown that these administrative overheads increase exponentially with new targets, and it could result in a management nightmare. 
That leads us to our next question. 3. Does it satisfy your Security Officers and Compliance Auditors? Compliance Auditors need to know who did what and when. They also want the cloud platform to be secure, so that end users have little freedom in tampering with it. Dealing with VM sprawl is not the easiest of challenges, let alone dealing with them as they keep getting reconfigured and moved around. This leads to the proverbial needle in the haystack problem, and all it needs is one needle to cause a serious compliance issue in the enterprise. Bottomline is, flexibility and agility should not come at the expense of compliance and it is very important to get the balance right. Can we have security and isolation without creating compliance challenges? Instead of a ‘one size fits all approach’ i.e. OS level isolation, can we think smartly about database isolation or schema based isolation? This is where the appropriate resource modeling needs to be applied. The usual systems management vendors out there with heterogeneous common-denominator approach have compromised on these semantics. If you follow Enterprise Manager’s DBaaS solution, you will see that we have considered different models, not precluding virtualization, for different customer use cases. The judgment to use virtual assemblies versus databases on physical RAC versus Schema-as-a-Service in a single database, should be governed by the need of the applications and not by putting compliance considerations in the backburner. 4. Does it satisfy your CIO? Finally, does it satisfy your higher ups? As the sponsor of cloud initiative, the CIO is expected to lead an IT transformation project, not merely a run-of-the-mill IT operations. Simply virtualizing server resources and delivering them through self-service is a good start, but hardly transformational. CIOs may appreciate the instant benefit from server consolidation, but studies have revealed that the ROI from consolidation would flatten out at 20-25%. The question would be: what next? As we go higher up in the stack, the need to virtualize, segregate and optimize shifts to those layers that are more palpable to the business users. As Sushil Kumar noted in his blog post, " the most important thing to note here is the enterprise private cloud is not just an IT project, rather it is a business initiative to create an IT setup that is more aligned with the needs of today's dynamic and highly competitive business environment." Business users could not care less about infrastructure consolidation or virtualization - they care about business agility and service level assurance. Last but not the least, lot of CIOs get miffed if we ask them to throw away their existing hardware investments for implementing DBaaS. In Oracle, we always emphasize on freedom of choosing a platform; hence Enterprise Manager’s DBaaS solution is platform neutral. It can work on any Operating System (that the agent is certified on) Oracle’s hardware as well as 3rd party hardware. As a parting note, I urge you to remember these 4 questions. Remember that your satisfaction as an implementer lies in the satisfaction of others.

    Read the article

  • Java: immutability, overuse of stack -- better data structure?

    - by HH
    I overused hashSets but it was slow, then changed to Stacks, speed boost-up. Poly's reply uses Collections.emptyList() as immutable list, cutting out excess null-checkers. No Collections.emptyStack(). Combining the words stack and immutability, from the last experiences, gets "immutable stack" (probably not related to functional prog). Java Api 5 for list interface shows that Stack is an implementing class for list and arraylist, here. The java.coccurrent pkg does not have any immutable Stack data structure. The first hinted of misusing stack. The lack of immutabily in the last and poly's book recommendation leads way to list. Something very primitive, fast, no extra layers, with methods like emptyThing(). Overuse of stack and where I use it DataFile.java: public Stack<DataFile> files; FileObject.java: public Stack<String> printViews = new Stack<String>(); FileObject.java:// private static Stack<Object> getFormat(File f){return (new Format(f)).getFormat();} Format.java: private Stack<Object> getLine(File[] fs,String s){return wF;} Format.java: private Stack<Object> getFormat(){return format;} Positions.java: public static Stack<Integer[]> getPrintPoss(String s,File f,Integer maxViewPerF) Positions.java: Stack<File> possPrint = new Stack<File>(); Positions.java: Stack<Integer> positions=new Stack<Integer>(); Record.java: private String getFormatLine(Stack<Object> st) Record.java: Stack<String> lines=new Stack<String>(); SearchToUser.java: public static final Stack<File> allFiles = findf.getFs(); SearchToUser.java: public static final Stack<File> allDirs = findf.getDs(); SearchToUser.java: private Stack<Integer[]> positionsPrint=new Stack<Integer[]>(); SearchToUser.java: public Stack<String> getSearchResults(String s, Integer countPerFile, Integer resCount) SearchToUser.java: Stack<File> filesToS=Fs2Word.getFs2W(s,50); SearchToUser.java: Stack<String> rs=new Stack<String>(); View.java: public Stack<Integer[]> poss = new Stack<Integer[4]>(); View.java: public static Stack<String> getPrintViewsFileWise(String s,Object[] df,Integer maxViewsPerF) View.java: Stack<String> substrings = new Stack<String>(); View.java: private Stack<String> printViews=new Stack<String>(); View.java: MatchView(Stack<Integer> pss,File f,Integer maxViews) View.java: Stack<String> formatFile; View.java: private Stack<Search> files; View.java: private Stack<File> matchingFiles; View.java: private Stack<String> matchViews; View.java: private Stack<String> searchMatches; View.java: private Stack<String> getSearchResults(Integer numbResults) Easier with List: AllDirs and AllFs, now looping with push, but list has more pow. methods such as addAll [OLD] From Stack to some immutable data structure How to get immutable Stack data structure? Can I box it with list? Should I switch my current implementatios from stacks to Lists to get immutable? Which immutable data structure is Very fast with about similar exec time as Stack? No immutability to Stack with Final import java.io.*; import java.util.*; public class TestStack{ public static void main(String[] args) { final Stack<Integer> test = new Stack<Integer>(); Stack<Integer> test2 = new Stack<Integer>(); test.push(37707); test2.push(80437707); //WHY is there not an error to remove an elment // from FINAL stack? System.out.println(test.pop()); System.out.println(test2.pop()); } }
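
    On the final code sample: final only fixes the reference test, not the Stack object it points to, which is why push and pop still compile and run. If an unmodifiable collection is really what is wanted, the usual trick in the standard library is to build the collection and then wrap it — a sketch, not a drop-in replacement for the code above:

        import java.util.ArrayList;
        import java.util.Collections;
        import java.util.List;

        public class ImmutableExample {
            public static void main(String[] args) {
                List<Integer> builder = new ArrayList<Integer>();
                builder.add(37707);

                // the wrapper rejects all mutation with UnsupportedOperationException
                List<Integer> frozen = Collections.unmodifiableList(builder);
                System.out.println(frozen.get(0));
                // frozen.add(1);   // would throw at runtime
            }
        }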

    Read the article

  • iPhone - Core Data crash on migration

    - by Sergey Zenchenko
    I have problems, when i install app from Xcode all works but if I build app and install it from iTunes I have an error with the database at launch. This happens only than i have changes in the core data model and need to migrate to a new version. At first launch at crashes with message: Thread 0: 0 libSystem.B.dylib 0x00034588 pwrite + 20 1 libsqlite3.dylib 0x000505ec _sqlite3_purgeEligiblePagerCacheMemory + 2808 2 libsqlite3.dylib 0x000243d8 sqlite3_backup_init + 7712 3 libsqlite3.dylib 0x000244ac sqlite3_backup_init + 7924 4 libsqlite3.dylib 0x0000d418 sqlite3_file_control + 4028 5 libsqlite3.dylib 0x000228b4 sqlite3_backup_init + 764 6 libsqlite3.dylib 0x00022dd0 sqlite3_backup_init + 2072 7 libsqlite3.dylib 0x000249a8 sqlite3_backup_init + 9200 8 libsqlite3.dylib 0x00029800 sqlite3_open16 + 11360 9 libsqlite3.dylib 0x0002a200 sqlite3_open16 + 13920 10 libsqlite3.dylib 0x0002ab84 sqlite3_open16 + 16356 11 libsqlite3.dylib 0x00049418 sqlite3_prepare16 + 54056 12 libsqlite3.dylib 0x00002940 sqlite3_step + 44 13 CoreData 0x00011958 _execute + 44 14 CoreData 0x000113e0 -[NSSQLiteConnection execute] + 696 15 CoreData 0x000994be -[NSSQLConnection prepareAndExecuteSQLStatement:] + 26 16 CoreData 0x000be14c -[_NSSQLiteStoreMigrator performMigration:] + 244 17 CoreData 0x000b6c60 -[NSSQLiteInPlaceMigrationManager migrateStoreFromURL:type:options:withMappingModel:toDestinationURL:destinationType:destinationOptions:error:] + 1040 18 CoreData 0x000aceb0 -[NSStoreMigrationPolicy(InternalMethods) migrateStoreAtURL:toURL:storeType:options:withManager:error:] + 92 19 CoreData 0x000ad6f0 -[NSStoreMigrationPolicy migrateStoreAtURL:withManager:metadata:options:error:] + 72 20 CoreData 0x000ac9ee -[NSStoreMigrationPolicy(InternalMethods) _gatherDataAndPerformMigration:] + 880 21 CoreData 0x0000965c -[NSPersistentStoreCoordinator addPersistentStoreWithType:configuration:URL:options:error:] + 1328 At next launches app doesn't loads data from database.
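
    One possible explanation, offered as a guess from the symptoms rather than a confirmed diagnosis: the launch watchdog is disabled while the debugger is attached, so a migration that takes too long in application:didFinishLaunchingWithOptions: only kills the app on a normal (iTunes-installed) launch, with the trace frozen somewhere inside SQLite exactly as shown. A common mitigation is to move the store opening (and therefore the migration) off the launch path, roughly along these lines; the property and variable names are generic:

        // show a lightweight "upgrading..." UI first, then open the store off the main thread
        dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
            NSError *error = nil;
            if (![persistentStoreCoordinator addPersistentStoreWithType:NSSQLiteStoreType
                                                           configuration:nil
                                                                     URL:storeURL
                                                                 options:options   // lightweight-migration options
                                                                   error:&error]) {
                NSLog(@"Store migration failed: %@, %@", error, [error userInfo]);
            }
            dispatch_async(dispatch_get_main_queue(), ^{
                // swap in the real UI now that the store is ready
            });
        });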

    Read the article

  • ASP.NET application/web service not working on Windows Vista/IIS 7: access right problem?

    - by Achim
    I have a .NET 3.5 based web service running at http://localhost/serivce.svc/. Then I have an ASP.NET application running at http://localhost/myApp. In Application_Load my application reads some XML configuration from the web service. That works fine on my machine, but on Windows Vista with IIS 7 the request to the web service fails. The web service can be accessed via the browser without any problem. I configured the app pool of my application to run as admin. I added the admin to the IIS_IUSRS group, but it still cannot access the web service. Setting impersonate=true/false seems not to make a difference.

    Read the article

  • C# System.Data.SQLite Designer Code

    - by Nathan
    I've been messing around with the SQLite Designer in Visual Studio 2008 and I have noticed that when I use the generated Insert/Update statements they run extremely slowly. For example, I have a data table with four columns and 5700 rows, and it took ~5 minutes to insert the data into the database table. However, I wrote my own database connection and insert methods using parameters and a single transaction, and the same 5700 rows were inserted in under 1 second. Why is the generated code so slow, and what is the benefit of even using it? Thanks. Nathan
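
    For comparison, the hand-rolled approach described above (one transaction, one parameterised command reused for every row) looks roughly like this with System.Data.SQLite; the table and column names are made up, and `table` stands for the DataTable from the question:

        using (var conn = new SQLiteConnection("Data Source=test.db"))
        {
            conn.Open();
            using (var tx = conn.BeginTransaction())
            using (var cmd = new SQLiteCommand(
                "INSERT INTO items (a, b, c, d) VALUES (@a, @b, @c, @d)", conn))
            {
                cmd.Transaction = tx;
                foreach (DataRow row in table.Rows)
                {
                    cmd.Parameters.Clear();
                    cmd.Parameters.AddWithValue("@a", row[0]);
                    cmd.Parameters.AddWithValue("@b", row[1]);
                    cmd.Parameters.AddWithValue("@c", row[2]);
                    cmd.Parameters.AddWithValue("@d", row[3]);
                    cmd.ExecuteNonQuery();
                }
                tx.Commit();   // a single commit is what makes the 5,700 inserts fast
            }
        }

    The generated code most likely issues each INSERT in its own implicit transaction, forcing a disk sync per row, which is where most of the five minutes goes.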

    Read the article

  • Core Data Error "Fetch Request must have an entity"

    - by Graeme
    Hi, I've attempted to add the TopSongs parser and Core Data files into my application, and it now builds succesfully, with no errors or warning messages. However, as soon as the app loads, it crashes, giving the following reason: *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: 'executeFetchRequest:error: A fetch request must have an entity.' I have renamed all files, including the .xcdatamodel file. Could this be the problem (the renaming of the .xcdatamodel)? I'm assuming this error means that no data can be found. Thanks.
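
    That message usually means +entityForName:inManagedObjectContext: returned nil — typically because the entity name passed in no longer matches the (renamed) model, or because the managed object context itself is nil. A quick check along these lines usually narrows it down; the entity name here is just a placeholder:

        NSEntityDescription *entity = [NSEntityDescription entityForName:@"Song"   // placeholder name
                                                   inManagedObjectContext:managedObjectContext];
        NSLog(@"context = %@, entity = %@", managedObjectContext, entity);
        NSAssert(entity != nil, @"No entity with that name in the model - check the name and the context");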

    Read the article

  • Core Data NSPredicate for relationships.

    - by Mugunth Kumar
    My object graph is simple. I've a feedentry object that stores info about RSS feeds and a relationship called Tag that links to "TagValues" object. Both the relation (to and inverse) are to-many. i.e, a feed can have multiple tags and a tag can be associated to multiple feeds. I referred to http://stackoverflow.com/questions/844162/how-to-do-core-data-queries-through-a-relationship and created a NSFetchRequest. But when fetch data, I get an exception stating, NSInvalidArgumentException unimplemented SQL generation for predicate What should I do? I'm a newbie to core data :( I know I've done something terribly wrong... Please help... Thanks -- NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init]; // Edit the entity name as appropriate. NSEntityDescription *entity = [NSEntityDescription entityForName:@"FeedEntry" inManagedObjectContext:managedObjectContext]; [fetchRequest setEntity:entity]; // Edit the sort key as appropriate. NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"authorname" ascending:NO]; NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil]; [fetchRequest setSortDescriptors:sortDescriptors]; NSEntityDescription *tagEntity = [NSEntityDescription entityForName:@"TagValues" inManagedObjectContext:self.managedObjectContext]; NSPredicate *tagPredicate = [NSPredicate predicateWithFormat:@"tagName LIKE[c] 'nyt'"]; NSFetchRequest *tagRequest = [[NSFetchRequest alloc] init]; [tagRequest setEntity:tagEntity]; [tagRequest setPredicate:tagPredicate]; NSError *error = nil; NSArray* predicates = [self.managedObjectContext executeFetchRequest:tagRequest error:&error]; TagValues *tv = (TagValues*) [predicates objectAtIndex:0]; NSLog(tv.tagName); // it is nyt here... NSPredicate *predicate = [NSPredicate predicateWithFormat:@"tag IN %@", predicates]; [fetchRequest setPredicate:predicate]; // Edit the section name key path and cache name if appropriate. // nil for section name key path means "no sections". NSFetchedResultsController *aFetchedResultsController = [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest managedObjectContext:managedObjectContext sectionNameKeyPath:nil cacheName:@"Root"]; aFetchedResultsController.delegate = self; self.fetchedResultsController = aFetchedResultsController; --
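
    Core Data's SQL store cannot generate SQL for a plain "tag IN %@"-style comparison across a to-many relationship, which is what typically produces the "unimplemented SQL generation for predicate" exception. The usual workaround is an aggregate operator on the relationship key path, which also removes the need for the separate TagValues fetch — something along these lines:

        // match feeds that have at least one tag named 'nyt'
        NSPredicate *predicate = [NSPredicate predicateWithFormat:@"ANY tag.tagName LIKE[c] %@", @"nyt"];
        [fetchRequest setPredicate:predicate];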

    Read the article

  • How to invalidate a single data item in the .net cache in VB

    - by Craig
    I have the following .NET VB code to set and read objects in cache on a per-user basis (i.e. a bit like session):

        Public Shared Sub CacheSet(ByVal Key As String, ByVal Value As Object)
            Dim userID As String = HttpContext.Current.User.Identity.Name
            HttpContext.Current.Cache(Key & "_" & userID) = Value
        End Sub

        Public Shared Function CacheGet(ByVal Key As Object)
            Dim returnData As Object = Nothing
            Dim userID As String = HttpContext.Current.User.Identity.Name
            returnData = HttpContext.Current.Cache(Key & "_" & userID)
            Return returnData
        End Function

    I use these functions to hold user data that I don't want to fetch from the DB every time. However, when the data is updated, I want the cached item to be removed so it gets created again. How do I make an item I set disappear, or set it to Nothing or null? Craig
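
    Removing the entry is enough — the next CacheGet will miss and the caller can rebuild the value from the DB. A matching helper, sketched in the same style as the two functions above, might look like this:

        Public Shared Sub CacheRemove(ByVal Key As String)
            Dim userID As String = HttpContext.Current.User.Identity.Name
            HttpContext.Current.Cache.Remove(Key & "_" & userID)
        End Sub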

    Read the article

  • Returning to last viewed List page after insert/edit with ASP.NET Dynamic Data

    - by Pat James
    With a pretty standard Dynamic Data site, when the user edits or inserts a new item and saves, the page does a Response.Redirect(table.ListActionPath), which takes the user back to page 1 of the table. If they were editing an item on page 10 (or whatever) and want to edit the next item on that page, they have to remember the page number and navigate back to it. What's the best way to return the user to the list page they last viewed? I can conceive of some solutions using cookies, session state, or query string values to retain this state and making my own Page Template to incorporate it, but I can't help thinking this must be something that was considered when Dynamic Data was created, and there must be something simpler or built-in to the framework that I'm missing here.

    Read the article

  • Adding rows to UItableview and passing data between Viewcontrollers

    - by Jonathan
    I have a list of websites in a plist, and when the app loads this populates a table view (which is inside a navigation controller). I have added an add button to the navigation bar and created another view controller to deal with inputting the new website (like title and URL). It is very similar to how the Contacts app looks: there is a table view, and when you tap add, the add UI slides up. I have got all this working great so far. My problem is what happens when the user taps Done. I can add the website to the plist (each website is a dictionary in the plist with 2 keys, at the moment), but then how do I tell the table view to update? The table view has not been removed from the main window; the add view has just been added on top of the screen. Another way of asking is: when you tap Save on the Add a Contact screen, how does the new contact's data get shown in the table view?
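
    One common pattern, sketched here with made-up property and method names: either have the add controller call back to the list controller through a delegate protocol, or simply let the list controller re-read its data and reload when it reappears after the modal is dismissed.

        // in the list view controller
        - (void)viewWillAppear:(BOOL)animated
        {
            [super viewWillAppear:animated];
            self.websites = [self loadWebsitesFromPlist];   // hypothetical helper that re-reads the plist
            [self.tableView reloadData];
        }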

    Read the article

  • ASP.NET MVC2 Data Access Layer

    - by Paul
    For a small/medium-sized project I'm trying to figure out the 'ideal' way to structure a domain layer and a data access layer. My opinion on coupling tends towards the view that the domain models should not be tightly coupled with the database layer; in other words, the data access layer shouldn't actually know anything about the domain objects. I've been looking at Linq-to-SQL, and it wants to use its own models that it creates, so it ends up VERY tightly coupled. Whilst I love the way you use Linq-to-SQL in code, I really don't like the way it wants to make its own domain objects. What are some alternatives that I should consider? I tried using NHibernate, but I did not like the way I had to query and get different objects. I honestly love the syntax and the way you use LINQ; I just don't want it to be so tightly coupled to the domain objects.
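
    One way to keep the domain ignorant of the data layer is to define the persistence contract in the domain's own terms and let Linq-to-SQL (or anything else) live behind it, mapping the generated classes to domain objects inside the repository. A bare-bones C# sketch, with invented names:

        // Domain layer: plain objects plus a persistence contract, no data-access references.
        public class Customer
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public interface ICustomerRepository
        {
            Customer GetById(int id);
            void Add(Customer customer);
        }

        // Data layer: implements the contract and maps Linq-to-SQL entities to domain objects.
        public class LinqToSqlCustomerRepository : ICustomerRepository
        {
            // the DataContext lives here; callers never see it
            public Customer GetById(int id) { /* query, then map to Customer */ return null; }
            public void Add(Customer customer) { /* map back and SubmitChanges */ }
        }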

    Read the article

  • Core data and special characters (UTF-8)

    - by MW
    I have an iPhone application using Core Data with an SQLite database at the bottom. I'm writing some text content from the database to a file, but special characters such as Å, Ä and Ö are corrupted in the file (they show up just fine in the application). When creating and inserting data, I am not using any special encoding; I'm just taking the NSString (entered by the user in a UITextField) and putting it in my persistent objects. When saving the file, I use the following code: [csvString writeToFile:filePath atomically:YES encoding:NSUTF8StringEncoding error:&error]; I tried adding a BOM to the beginning of the text ("\xef\xbb\xbf") but it is still corrupted. Does anyone have any ideas where the problem might be? Examples of corrupted characters: å becomes ö, ä becomes ä

    Read the article

  • What could be causing a "Cannot access a disposed object" error in WCF?

    - by Nima
    I am using the following code: private WSHttpBinding ws; private EndpointAddress Srv_Login_EndPoint; private ChannelFactory<Srv_Login.Srv_ILogin> Srv_LoginChannelFactory; private Srv_Login.Srv_ILogin LoginService; The Login is my constructor: public Login() { InitializeComponent(); ws = new WSHttpBinding(); Srv_Login_EndPoint = new EndpointAddress("http://localhost:2687/Srv_Login.svc"); Srv_LoginChannelFactory = new ChannelFactory<Srv_Login.Srv_ILogin>(ws, Srv_Login_EndPoint); } And I'm using service this way: private void btnEnter_Click(object sender, EventArgs e) { try { LoginService = Srv_LoginChannelFactory.CreateChannel(); Srv_Login.LoginResult res = new Srv_Login.LoginResult(); res = LoginService.IsAuthenticated(txtUserName.Text.Trim(), txtPassword.Text.Trim()); if (res.Status == true) { int Id = int.Parse(res.Result.ToString()); } else { lblMessage.Text = "Not Enter"; } } catch (Exception ex) { MessageBox.Show(ex.Message); } finally { Srv_LoginChannelFactory.Close(); } } When the user enters a valid username and password, everything is fine. When the user enters a wrong username and password, the first try correctly displays a "Not Enter" message, but on the second try, the user sees this message: {System.ObjectDisposedException: Cannot access a disposed object. Object name: 'System.ServiceModel.ChannelFactory`1[Test_Poosesh.Srv_Login.Srv_ILogin]'. at System.ServiceModel.Channels.CommunicationObject.ThrowIfDisposed() at System.ServiceModel.ChannelFactory.EnsureOpened() at System.ServiceModel.ChannelFactory`1.CreateChannel(EndpointAddress address, Uri via) at System.ServiceModel.ChannelFactory`1.CreateChannel() How can I fix my code to prevent this error from occurring?
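
    The finally block closes the shared ChannelFactory after the first click, so the second call to CreateChannel finds a disposed factory. One common arrangement, sketched below rather than prescribed, is to keep the factory alive for the life of the form and close (or abort) only the per-call channel:

        private void btnEnter_Click(object sender, EventArgs e)
        {
            var channel = Srv_LoginChannelFactory.CreateChannel();
            var client = (IClientChannel)channel;
            try
            {
                var res = channel.IsAuthenticated(txtUserName.Text.Trim(), txtPassword.Text.Trim());
                lblMessage.Text = res.Status ? "Enter" : "Not Enter";
            }
            finally
            {
                // close the channel, not the factory; abort if the channel has faulted
                if (client.State == CommunicationState.Faulted)
                    client.Abort();
                else
                    client.Close();
            }
        }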

    Read the article

  • Data structure for Settlers of Catan map?

    - by templatetypedef
    Hello all- A while back someone asked me if I knew of a nice way to encode the information for the game Settlers of Catan. This would require storing a hexagonal grid in a way where each hex can have data associated with it. More importantly, though, I would need some way of efficiently looking up vertices and edges on the sides of these hexagons, since that's where all the action is. My question is this: is there a good, simple data structure for storing a hexagonal grid while allowing for fast lookup of hexagons, edges between hexagons, and vertices at the intersections of hexagons? I know that general structures like a winged-edge or quad-edge could do this, but that seems like massive overkill. Thanks!

    Read the article

  • Advanced SQL Data Compare throught multiple tables

    - by podosta
    Hello, consider the situation below: two tables (A & B), in two environments (DEV & TEST), with records in those tables. If you look at the content of the tables, you understand that the functional data are identical. I mean that, apart from the PK and FK values, the name Roger is still connected to Fruit & Vegetable.

    In the DEV environment:
    Table A: 1 Roger / 2 Kevin
    Table B (second field is the FK to table A): 1 1 Fruit / 2 1 Vegetable / 3 2 Meat

    In the TEST environment:
    Table A: 4 Roger / 5 Kevin
    Table B (second field is the FK to table A): 7 4 Fruit / 8 4 Vegetable / 9 5 Meat

    I'm looking for a SQL data compare tool which will tell me there is no difference in the above case. Or, if there is a difference, it should generate insert & update scripts in the right order (insert first into A, then into B). Thanks a lot guys, Grégoire
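
    If no off-the-shelf tool fits, the comparison itself can be expressed directly in SQL by joining each child row to its parent's natural key and ignoring the surrogate ids altogether. A sketch, with made-up database and column names, assuming both environments are reachable from one server:

        -- rows that exist functionally in DEV but not in TEST (run the reverse as well)
        SELECT a.name, b.label
        FROM   dev.dbo.TableA a
        JOIN   dev.dbo.TableB b ON b.a_id = a.id
        EXCEPT
        SELECT a.name, b.label
        FROM   test.dbo.TableA a
        JOIN   test.dbo.TableB b ON b.a_id = a.id;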

    Read the article

  • How to find out memory layout of your data structure implementation on Linux 64bit machine

    - by ajay
    In this article, http://cacm.acm.org/magazines/2010/7/95061-youre-doing-it-wrong/fulltext the author talks about the memory layouts of 2 data structures - The Binary Heap and the B-Heap and compares how one has better memory layout than the other. http://deliveryimages.acm.org/10.1145/1790000/1785434/figs/f5.jpg http://deliveryimages.acm.org/10.1145/1790000/1785434/figs/f6.jpg I want to get hands on experience on this. I have an implementation of a N-Ary Tree and I want to find out the memory layout of my data structure. What is the best way to come up with a memory layout like the one in the article? Secondly, I think it is easier to identify the memory layout if it is an array based implementation. If the implementation of a Tree uses pointers then what Tools do we have or what kind of approach is required to map it's memory layout? Thanks!

    Read the article
