Search Results

Search found 23347 results on 934 pages for 'key storage'.


  • Problem with sfRemember cookie / sfGuard Remember me

    - by Tom
    I'm using Symfony 1.4 with Doctrine. Sorry if this is a silly question, but what exactly does one need to build on top of the sfDoctrineGuardPlugin to get the "remember me" functionality working? When I log in a user, the sfRemember cookie is created with the default 15-day lifetime, and the remember key is saved in the plugin's sf_guard_remember_key table. Without any tweaks to the plugin, the sfGuardSecurityUser SignIn() method creates the cookie, but the Signout() method erases it, leaving no cookie unless you're logged in!

        SignIn():  sfContext::getInstance()->getResponse()->setCookie($remember_cookie, $key, time() + $expiration_age);
        Signout(): sfContext::getInstance()->getResponse()->setCookie($remember_cookie, '', time() - $expiration_age);

    I can see that the database table saves the cookie as a relation of sf_guard_user, but that's not much good if the cookie is gone. I'd be grateful if someone could tell me what I'm missing here. Ideally, if I prevent the Signout() method from removing the cookie, do I need to write code to read the cookie myself, or is this automated somewhere/somehow? I've got bog-standard Symfony 1.4 and sfDoctrineGuardPlugin installations. It all just seems totally wrong, and the documentation on this is non-existent. Any help would be appreciated.

    Read the article

  • ML 350 additional SATA RAID controller (mirror only)

    - by Nicholas
    I have a ProLiant ML350 G8 with two SAS RAID arrays currently set up, thereby maxing out the default P420i RAID controller. I need to set up a large video dump space in addition to this existing setup (for non-backed-up, non-critical, temporary storage). I had planned to just add a 2TB SATA disk and plug it into the motherboard. However, it occurred to me that the motherboard might have built-in mirror RAID support, in which case I could use two SATA disks and have some semblance of redundancy. Is this possible? Or would I need to get a cheap RAID card? Any recommendations?

    Read the article

  • python: how to design a container with elements that must reference their container

    - by Luke404
    (The title is admittedly not that great. Please forgive my English, this is the best I could think of.) I'm writing a Python script that will manage email domains and their accounts, and I'm also a newbie at OOP design. My two (related?) issues are: the Domain class must do special work to add and remove accounts, like adding/removing them to the underlying implementation; and how to manage operations on accounts that must go through their container. To solve the former issue I'd add a factory method to the Domain class that builds an Account instance in that domain, and a 'remove' (anti-factory?) method to handle deletions. For the latter, this seems to me "anti-OOP", since what would logically be an operation on an Account (e.g., change password) must always reference the containing Domain. It seems that I must add to the Account a reference back to the Domain and use that to get data (like the domain name) or call methods on the Domain class. Code example (the element uses data from the container) that manages an underlying Vpopmail system:

        class Account:
            def __init__(self, name, password, domain):
                self.name = name
                self.password = password
                self.domain = domain
            def set_password(self, password):
                os.system('vpasswd %s@%s %s' % (self.name, self.domain.name, password))
                self.password = password

        class Domain:
            def __init__(self, domain_name):
                self.name = domain_name
                self.accounts = {}
            def create_account(self, name, password):
                os.system('vadduser %s@%s %s' % (name, self.name, password))
                account = Account(name, password, self)
                self.accounts[name] = account
            def delete_account(self, name):
                os.system('vdeluser %s@%s' % (name, self.name))
                del self.accounts[name]

    Another option would be for Account.set_password to call a Domain method that would do the actual work, which sounds equally ugly to me. Also note the duplication of data (the account name is also the dict key); it sounds logical (account names are the "primary key" inside a domain), but accounts need to know their own name.
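
    As an aside, here is a minimal sketch (not from the question) of the delegation variant mentioned above, where the Account keeps its back-reference but every vpopmail shell call lives in the Domain:

        import os

        class Domain:
            def __init__(self, domain_name):
                self.name = domain_name
                self.accounts = {}

            def set_password(self, account, password):
                # The one place that knows how to talk to vpopmail.
                os.system('vpasswd %s@%s %s' % (account.name, self.name, password))
                account.password = password

        class Account:
            def __init__(self, name, password, domain):
                self.name = name
                self.password = password
                self.domain = domain

            def set_password(self, password):
                # Delegate to the container instead of shelling out here.
                self.domain.set_password(self, password)

    Whether that reads better than the original is a matter of taste; a back-reference to the container is itself a common and legitimate pattern.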

    Read the article

  • rsync to windows (cygwin)

    - by abergmeier
    We have a Windows file storage machine (don't ask) and now I want to rsync with it from Windows, Mac and Linux. So I installed freeSSHd (the login shell is set to C:/cygwin64/bin/sh.exe), set up certificates, and when testing from Linux, test.dat ends up with 0 bytes:

        ssh myuser@winmachinename "C:/cygwin64/bin/true.exe" > test.dat

    Even double-checking with actual output works fine:

        ssh myuser@winmachinename "C:/cygwin64/bin/ls.exe" > test.dat

    Now, when I call rsync:

        rsync --progress -avz -e ssh myuser@winmachinename:/c/Users ~/test

    it fails with:

        protocol version mismatch -- is your shell clean?
        (see the rsync man page for an explanation)
        rsync error: protocol incompatibility (code 2) at compat.c(174) [Receiver=3.1.0]

    From my reading of the docs, this should not happen when the first test is successful. I am by now out of ideas - any recommendations on how to debug this?

    EDIT:

        | OS      | rsync version                            |
        |:--------|:-----------------------------------------|
        | Windows | rsync version 3.0.9  protocol version 30 |
        | Linux   | rsync version 3.1.0  protocol version 31 |

    Read the article

  • NHibernate with nothing but stored procedures

    - by ChrisB2010
    I'd like to have NHibernate call a stored procedure when ISession.Get is called to fetch an entity by its key instead of using dynamic SQL. We have been using NHibernate and allowing it to generate our SQL for queries and inserts/updates/deletes, but now may have to deploy our application to an environment that requires us to use stored procedures for all database access. We can use sql-insert, sql-update, and sql-delete in our .hbm.xml mapping files for inserts/updates/deletes. Our hql and criteria queries will have to be replaced with stored procedure calls. However, I have not figured out how to force NHibernate to use a custom stored procedure to fetch an entity by its key. I still want to be able to call ISession.Get, as in: using (ISession session = MySessionFactory.OpenSession()) { return session.Get<Customer>(customerId); } and also lazy load objects, but I want NHibernate to call my "GetCustomerById" stored procedure instead of generating the dynamic SQL. Can this be done? Perhaps NHibernate is no longer a fit given this new environment we must support.

    Read the article

  • PHP cURL JSON Decode (X-AUTH Header)

    - by TheCyX
        <?php
        // Show profile
        $curl = curl_init();
        curl_setopt($curl, CURLOPT_URL, "https://example/api");
        curl_setopt($curl, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($curl, CURLOPT_HTTPAUTH, CURLAUTH_BASIC);
        curl_setopt($curl, CURLOPT_HTTPHEADER, array('X-AUTH: 123456789'));
        $projects = curl_exec($curl); // This is empty?
        echo $projects;

        // Decode
        $phpArray = json_decode($projects);
        print_r($phpArray);
        foreach ($phpArray as $key => $value) { // Line 17, sure it's empty, but why?
            echo "<p>$key | $value</p>";
        }
        ?>

    Warning: Invalid argument supplied for foreach() in /html/api.php on line 17

    The API needs this authentication:

        $ curl -i -H "X-AUTH: 123456789" https://example/api

    JSON file:

        {"id":"123456","hostId":null,"Nickname":"thecyx","DisplayName":"thecyx","AppDisplayName":"thecyx","Score":"300","Account":"Full"}

    The $projects variable is empty. If I put the API URL in the browser, it works. And, if possible, what's the correct way to get the JSON data, e.g. [Nickname], [Score]?

    Read the article

  • Advanced file compression software for Mac OSX

    - by Steven Roose
    Back when I used Windows, I always used WinRAR for file compression and decompression. It had a fair amount of options, like 'just storage' vs. 'hard compression', password protection and archive type. Now that I use Mac OS X, the only compression possibility I have is the default Finder's Compress to Zip. I downloaded the most popular decompression software, The Unarchiver, but this app can't compress to other archive types either. I went searching, but there seem to be hardly any good advanced compression tools that work nicely on OS X and have the options WinRAR has. (WinRAR works on OS X, but command line only; I'm looking for something with a GUI.) Any ideas? I strongly prefer freeware. I found Archiver and StuffIt, but they are both commercial.

    Read the article

  • How to change default permission for uploaded files in apache with mounted webroot?

    - by faridv
    I have an Ubuntu Server 11.10 box with Apache 2.2.20, PHP 5.3.6 and an installation of the Joomla CMS. I have used an extra hard disk as my web server storage and mounted it at /data/www/ (I hope that's not where my problem is!). I've set the permissions of all files and folders in my web root to 755, and their owner and group are set to [the default Ubuntu user (in my case, radio)]:www-data. In the past days I've had serious problems with Joomla not showing newly uploaded images and other files, and I also can't install any extensions. After hours of searching I found out that uploaded files don't have appropriate permissions (they are -rw-------), so the Joomla application cannot read, copy or move them after upload. I'm wondering how I can set a default permission so that all files I upload use it? PS: I've tested umask but it did nothing. I think it has nothing to do with my problem.
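
    For what it's worth, umask only controls the mode of files at the moment they are created; here is a tiny generic Python demonstration of that mechanism (nothing Apache- or Joomla-specific, file names made up):

        import os, stat

        old = os.umask(0o022)              # typical default: new files come out 0644
        open("demo_022.txt", "w").close()

        os.umask(0o077)                    # restrictive mask: new files come out 0600
        open("demo_077.txt", "w").close()

        os.umask(old)                      # restore the original mask
        for name in ("demo_022.txt", "demo_077.txt"):
            print(name, oct(stat.S_IMODE(os.stat(name).st_mode)))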

    Read the article

  • MS SQL dts to ssis migration error

    - by Manjot
    Hi, I have migrated some DTS packages to SSIS 2005 using the Migration Wizard. When I tried to run one, it failed, saying I need a higher version of SSIS, even though the destination SSIS server is at level 9.0.4211. I then dug into the package using Business Intelligence Development Studio and saw that one of the package subtasks is a "Transform Data Task" (the DTS version), and the package fails to run it. The storage location for this DTS task is set to "Embedded in Task"; I didn't touch it. Why didn't the wizard convert this task to an SSIS Data Flow task? Any help please? Thanks in advance

    Read the article

  • Using the groupby method in Python, example included

    - by randombits
    Trying to work with groupby so that I can group together files that were created on the same day. When I say same day in this case, I mean the dd part in mm/dd/yyyy. So if a file was created on March 1 and April 1, they should be grouped together because the "1" matches. Here's the code I have so far:

        #!/usr/bin/python
        import os
        import datetime
        from itertools import groupby

        def created_ymd(fn):
            ts = os.stat(fn).st_ctime
            dt = datetime.date.fromtimestamp(ts)
            return dt.year, dt.month, dt.day

        def get_files():
            files = []
            for f in os.listdir(os.getcwd()):
                if not os.path.isfile(f):
                    continue
                y, m, d = created_ymd(f)
                files.append((f, d))
            return files

        files = get_files()
        for key, group in groupby(files, lambda x: x[1]):
            for file in group:
                print "file: %s, date: %s" % (file[0], key)
            print " "

    The problem is, I get lots of files that get grouped together based on the day. But then I'll see multiple groups with the same day. Meaning I might have 4 files grouped that were created on the 17th. Later on I'll see another unique set of 2 files that are also created on the 17th. Where am I going wrong?
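
    One detail worth illustrating with a tiny made-up file list: itertools.groupby only merges adjacent items, so the input has to be sorted by the same key that is used for grouping.

        from itertools import groupby

        files = [("a.txt", 17), ("b.txt", 3), ("c.txt", 17), ("d.txt", 3)]

        files.sort(key=lambda x: x[1])               # sort key == group key
        for day, group in groupby(files, key=lambda x: x[1]):
            print(day, [name for name, _ in group])
        # Without the sort, the two day-17 files would end up in separate groups.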

    Read the article

  • How can I load images from a plist into a UITableView?

    - by srikanth rongali
    I have stored the videos and the thumbnail images of the videos in the Documents folder, and I have given the paths in a plist. In the plist I took an array and added dictionaries to the array. In each dictionary I stored the image path /Users/srikanth/Library/Application Support/iPhone Simulator/User/Applications/9E6E8B22-C946-4442-9209-75BB0E924434/Documents/image1 for the key imagePath, and for the video /Users/srikanth/Library/Application Support/iPhone Simulator/User/Applications/9E6E8B22-C946-4442-9209-75BB0E924434/Documents/video1.mp4 for the key filePath. I used the following code but it is not working. I am trying only for images. I need the images to be loaded in the table, in each cell.

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *CellIdentifier = @"Cell";
            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
            if (cell == nil) {
                cell = [[[UITableViewCell alloc] initWithFrame:CGRectZero reuseIdentifier:CellIdentifier] autorelease];
                [cell setAccessoryType:UITableViewCellAccessoryDetailDisclosureButton];
                UIImageView *image2 = [[UIImageView alloc] init];
                image2.frame = CGRectMake(0.0f, 0.0f, 80.0f, 80.0f);
                image2.backgroundColor = [UIColor clearColor];
                //image2.image = [UIImage imageNamed:@"snook.png"];
                image2.tag = tag7;
            }
            NSDictionary *dictOfplist = [cells objectAtIndex:indexPath.row];
            [(UIImageView *)[cell viewWithTag:tag7] setImage:[dictOfplist objectForKey:@"imagePath"]];
            return cell;
        }

        - (void)viewDidLoad {
            [super viewDidLoad];
            self.title = @"Library";
            self.navigationItem.rightBarButtonItem = [[UIBarButtonItem alloc] initWithTitle:@"Close" style:UIBarButtonItemStyleBordered target:self action:@selector(close:)];
            NSString *plistPath = [[NSBundle mainBundle] pathForResource:@"details" ofType:@"plist"];
            contentArray = [NSArray arrayWithContentsOfFile:plistPath];
            cells = [[NSMutableArray alloc] initWithCapacity:[contentArray count]];
            for (dCount = 0; dCount < [contentArray count]; dCount++)
                [cells addObject:[contentArray objectAtIndex:dCount]];
        }

    How can I make this work? Thank you.

    Read the article

  • NFS-Root not working when booting over PXE

    - by Randy
    I am desperately trying to get a diskless client running over PXE boot, using an NFS share as the root file system. I did this some years ago, but for some reason I have been stuck on this for days. The TFTP server itself is running fine and booting a net installer also works fine. The kernel and initrd are loaded as well, but the boot process stops with a kernel panic (screenshot). I'm using the standard Squeeze i386 kernel and I have prepared the initrd with this config:

        MODULES=most
        BUSYBOX=y
        KEYMAP=n
        COMPRESS=gzip
        BOOT=nfs
        DEVICE=
        NFSROOT=auto

    I also tried MODULES=netboot with the same outcome. My PXE configuration looks like this:

        LABEL linux
        KERNEL diskless/debian-default/vmlinuz-2.6.32-5-686
        APPEND root=/dev/nfs initrd=diskless/debian-default/vmlinuz-2.6.32-5-686 nfsroot=192.168.140.2:/storage/nfs-boot-images/default-squeeze ip=dhcp rw

    Furthermore, I have captured the network communication of the client via tcpdump and learned that the client isn't even trying to connect to the NFS share. Does anybody have an idea what is going wrong here?

    Read the article

  • Text area text to be split, with conditions

    - by desmiserables
    I have a text area wherein I have limited the user from entering more than 15 characters in one line, because I want to split the free-flow text into substrings with a maximum of 15 characters each and assign each line an order number. This is what I was doing in my Java class:

        int interval = 15;
        items = new ArrayList();
        TextItem item = null;
        for (int i = 0; i < text.length(); i = i + interval) {
            item = new TextItem();
            item.setOrder(i);
            if (i + interval < text.length()) {
                item.setSubText(text.substring(i, i + interval));
                items.add(item);
            } else {
                item.setSubText(text.substring(i));
                items.add(item);
            }
        }

    Now it works properly unless the user presses the enter key. Whenever the user presses the enter key, I want to make that line a new item having only that part as the subText. I can check whether my text.substring(i, i + interval) contains any "\n" and split till there, but the problem is to get the remaining characters after the "\n" till the next 15 or till the next "\n", and set the proper order and subText.
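
    A language-agnostic sketch of the split order being described, written in Python purely for brevity (names are made up): honour the user-entered line breaks first, then cut each resulting line into 15-character pieces.

        def split_text(text, interval=15):
            items = []
            order = 0
            for line in text.split("\n"):            # user-entered breaks win
                for i in range(0, len(line), interval):
                    items.append((order, line[i:i + interval]))
                    order += 1
            return items

        print(split_text("hello world, this line is long\nshort"))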

    Read the article

  • New Perl user: using a hash of arrays

    - by Zach H
    I'm doing a little data-mining project where a Perl script grabs info from a SQL database and parses it. The data consists of several timestamps. I want to find how many of a particular type of timestamp exist on any particular day. Unfortunately, this is my first Perl script, and the nature of Perl when it comes to hashes and arrays is confusing me quite a bit. Code segment:

        my %values = (); # A hash of the total values of each type of data of each day.
        # The key is the day, and each key stores an array of each of the values I need.
        my @proposal;
        # [drafted timestamp(0), submitted timestamp(1), attny approved timestamp(2),
        #  Organization approved timestamp(3), Other approval timestamp(4), Approved Timestamp(5)]
        while (@proposal = $sqlresults->fetchrow_array()) {
            # TODO: check to make sure proposal is valid
            # Increment the number of timestamps of each type on each particular date
            my $i;
            for ($i = 0; $i <= 5; $i++)
                $values{$proposal[$i]}[$i]++;
            # Update rolling average of daily
            # TODO: To check total load, increment total load on all dates between
            # attorney approve date and accepted date
            for ($i = $proposal[1]; $i <= $proposal[2]; $i++)
                $values{$i}[6]++;
        }

    I keep getting syntax errors inside the for loops incrementing values. Also, considering that I'm using strict and warnings, will Perl auto-create arrays of the right values when I'm accessing them inside the hash, or will I get out-of-bounds errors everywhere? Thanks for any help, Zach

    Read the article

  • Windows 2008 DHCP service fails - "...failed to see a directory server for authorization."

    - by ewwhite
    I have a small environment running Windows 2008 R2 where the DHCP service on the domain controller fails every two weeks. The most visible error is Event ID 1059, and the Event Viewer message is: "The DHCP service failed to see a directory server for authorization." The setup features two domain controllers and the usual services and roles (file, print, Exchange). Restarting the service fails for a variety of reasons. I've had the following messages at different times:

        "Not enough storage is available to complete this operation."
        "Unable to determine the DHCP Server version for the Server 192.168.x.x"
        "The DHCP service has detected that it is running on a DC and has no credentials configured for use with Dynamic DNS registrations initiated by the DHCP service."

    A reboot of the domain controller resolves the issue for ~2 weeks. The systems are virtualized and there are no network connectivity issues. Any ideas what's happening here?

    Read the article

  • WiX: The directory is in the user profile but is not listed in the RemoveFile table

    - by Venkat S. Rao
    I have the following configuration to delete and copy a file from WiX:

        <Directory Id='TARGETDIR' Name='SourceDir'>
          ...
          <Directory Id="AppDataFolder" Name="AppDataFolder">
            <Directory Id="GleasonAppData" Name="Gleason">
              <Directory Id="GleasonStudioAppData" Name="GleasonStudio">
                <Directory Id="DatabaseAppData" Name="Database">
                  <Directory Id="UserSandboxesAppData" Name="UserSandboxes" />
                </Directory>
              </Directory>
            </Directory>
          </Directory>
        </Directory>

        <DirectoryRef Id="UserSandboxesAppData">
          <Component Id="comp_deleteBackup" Guid="1f159f49-3029-4f46-b194-e42aabd40844">
            <RemoveFile Id="RemoveBackup" Directory="UserSandboxesAppData" Name="DevelopmentBackUp.FDB" On="install" />
            <RegistryKey Root="HKCU" Key="Software\Gleason\Database\RemoveBackup">
              <RegistryValue Value="Removed" Type="string" KeyPath="yes" />
            </RegistryKey>
          </Component>
          <Component Id="comp_createBackup" Guid="557badef-6d77-4c4e-aa5f-8d88cb5ef735">
            <CopyFile Id="DBBackup" DestinationDirectory="UserSandboxesAppData" DestinationName="DevelopmentBackUp.FDB" SourceDirectory="UserSandboxesAppData" SourceName="Development.FDB" />
            <RegistryKey Root="HKCU" Key="Software\Gleason\Database\CopyBackup">
              <RegistryValue Value="Copied" Type="string" KeyPath="yes" />
            </RegistryKey>
          </Component>
        </DirectoryRef>

    I get 4 errors related to ICE64 ("The directory 'xxx' is in the user profile but is not listed in the RemoveFile table"), where xxx is one of {UserSandboxesAppData, DatabaseAppData, GleasonStudioAppData, GleasonAppData}. Someone else had a very similar problem here: "Directory xx is in the user profile but is not listed in the RemoveFile table". But that solution did not help me. What do I need to change? Thank you, Venkat Rao

    Read the article

  • How do I know if my SSD Drive supports TRIM?

    - by Omar Shahine
    Windows 7 has support for the TRIM command, which should help ensure that the performance of an SSD drive remains good throughout its life. How can you tell if a given SSD drive supports TRIM? See here for a description of TRIM. Also, the following is from a Microsoft presentation:

        Microsoft's implementation of the "Trim" feature is supported in Windows 7.
        NTFS will send down a delete notification to a device supporting "trim" for:
          - File system operations: Format, Delete, Truncate, Compression
          - OS internal processes: e.g., Snapshot, Volume Manager
        Three optimization opportunities for the device:
          - Enhancing device wear leveling by eliminating merge operations for all deleted data blocks
          - Making early garbage collection possible for fast writes
          - Keeping the device's unused storage area as large as possible; more room for device wear leveling

    Read the article

  • C# .NET: Descending comparison of a SortedDictionary?

    - by Rosarch
    I want an IDictionary<float, foo> that returns the largest values of the key first.

        private IDictionary<float, foo> layers =
            new SortedDictionary<float, foo>(new DescendingComparer<float>());

        class DescendingComparer<T> : IComparer<T> where T : IComparable<T>
        {
            public int Compare(T x, T y)
            {
                return -y.CompareTo(x);
            }
        }

    However, this returns values in order of the smallest first. I feel like I'm making a stupid mistake here. Just to see what would happen, I removed the minus sign from the comparer:

        public int Compare(T x, T y)
        {
            return y.CompareTo(x);
        }

    But I got the same result. This reinforces my intuition that I'm making a stupid error. This is the code that accesses the dictionary:

        foreach (KeyValuePair<float, foo> kv in sortedLayers)
        {
            // ...
        }

    UPDATE: This works, but is too slow to call as frequently as I need to call this method:

        IOrderedEnumerable<KeyValuePair<float, foo>> sortedLayers =
            layers.OrderByDescending(kv => kv.Key);
        foreach (KeyValuePair<float, ICollection<IGameObjectController>> kv in sortedLayers)
        {
            // ...
        }

    UPDATE: I put a breakpoint in the comparer that never gets hit as I add and remove kv pairs from the dictionary. What could this mean?

    Read the article

  • Alternative Python standard library reference

    - by Ender
    I love Python; I absolutely despise its official documentation. Tutorials do not count as library references, but that appears to be what they're attempting. What I really want is the ability to find a class in the standard library and view documentation for all of its properties and methods. ActionScript, MSDN, and Java all do this just fine (although each with their odd quirks). Where is this for Python? For example, I wanted to sort a list: mylist.sort(). Awesome. But what if I wanted it sorted in descending order? The official documentation is not much help. Or what if I wanted to specify a key function? That's also supported - mylist.sort(key=lambda item: item.customVar) - but documented... where? I understand that Python's approach to OOP may not be equivalent to Java et al. Maybe list isn't actually a class - maybe it's just a function that returns an iterable when the tachyon beams are set to glorious and the unboxed hyper enumeration is quantized, but... I don't care. I just want to know how to sort lists. (Apologies for the angst - too much caffeine today.)
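
    For what it's worth, both of the cases complained about above turn out to be plain keyword arguments of list.sort() and sorted(), as a quick sketch shows:

        words = ["pear", "fig", "banana"]

        words.sort(reverse=True)                 # descending order, in place
        by_length = sorted(words, key=len)       # custom key function, new list
        print(words, by_length)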

    Read the article

  • Is it practical to program with your feet?

    - by bmm
    Has anyone tried using foot pedals in addition to the traditional keyboard and mouse combo to improve your effectiveness in the editor? Any actual experiences out there? Does it work, or is it just for carpal tunnel relief? I found one blog entry from a programmer who actually tried it:

        So now I can type using my feet for most of the modifier keys. I am using the pedals as I type this. I am still getting used to them, but the burning in my left wrist has definitely reduced. I think I can also type a little faster, but I am too lazy to do the speed tests with and without the pedals to verify this. On the negative side:
        - Working out where to put your feet when you aren't typing can be a little awkward.
        - The pedals tend to move around the carpet, despite being metal and quite heavy. Some small spikes might have helped.
        - Although the travel on the pedals is small, they are surprisingly stiff.

    Another programmer's experience:

        Anybody with hand pain must get foot pedals, since they can remove a tremendous load from your hands. I have two foot pedals, and use one for the SHIFT key, and the other for the CONTROL key. (I still type META by hand.) I have found that in the process of using the Emacs text editor to compose computer programs, I tend to use the SHIFT, CONTROL and META keys constantly, and it is easy to remove most of this load from one's hands.

    Some foot switch products:
        - Savant Elite Triple Foot Switch
        - FragPedal
        - Bilbo Step On It!

    Read the article

  • How to use SQLAlchemy to dump an SQL file from query expressions to bulk-insert into a DBMS?

    - by Mahmoud Abdelkader
    Please bear with me as I explain the problem and how I tried to solve it; my question on how to improve it is at the end. I have a 100,000-line CSV file from an offline batch job and I needed to insert it into the database as its proper models. Ordinarily, if this is a fairly straightforward load, it can be trivially handled by just munging the CSV file to fit a schema, but I had to do some external processing that requires querying, and it's just much more convenient to use SQLAlchemy to generate the data I want. The data I want here is 3 models that represent 3 pre-existing tables in the database, and each subsequent model depends on the previous model. For example:

        Model C --> Foreign Key --> Model B --> Foreign Key --> Model A

    So the models must be inserted in the order A, B, and C. I came up with a producer/consumer approach:

        - instantiate a multiprocessing.Process which contains a thread pool of 50 persister threads that have a thread-local connection to the database
        - read a line from the file using the csv DictReader
        - enqueue the dictionary to the process, where each thread creates the appropriate models by querying the right values and persists them in the appropriate order

    This was faster than a non-threaded read/persist, but it is way slower than bulk-loading a file into the database. The job finished persisting after about 45 minutes. For fun, I decided to write it in SQL statements, and it took 5 minutes. Writing the SQL statements took me a couple of hours, though. So my question is, could I have used a faster method to insert rows using SQLAlchemy? As I understand it, SQLAlchemy is not designed for bulk insert operations, so this is less than ideal. This leads to my second question: is there a way to generate the SQL statements using SQLAlchemy, throw them in a file, and then just bulk-load that file into the database? I know about str(model_object), but it does not show the interpolated values. I would appreciate any guidance on how to do this faster. Thanks!
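
    Not part of the original post, but a minimal sketch of the Core-level, executemany-style insert that is usually suggested for jobs like this (table and column names are made up):

        from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String

        engine = create_engine("sqlite:///batch.db")
        metadata = MetaData()
        model_a = Table(
            "model_a", metadata,
            Column("id", Integer, primary_key=True),
            Column("name", String(50)),
        )
        metadata.create_all(engine)

        rows = [{"name": "row-%d" % i} for i in range(100000)]

        # Passing a list of dicts makes the DBAPI use executemany(), which avoids
        # the per-object overhead of flushing individual ORM instances.
        with engine.begin() as conn:
            conn.execute(model_a.insert(), rows)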

    Read the article

  • Getting error while connecting to facebook

    - by Bakhtiyor
    I have downloaded the PHP SDK for using Facebook from PHP. I also created a Facebook application on the Facebook web page in order to get an API key and secret key. When I run example.php and try to connect to Facebook, it shows me the following error:

        Configuration errors: To fix this error, please set your Connect URL in the application settings editor. Once it has been set, users will be redirected to that URL instead of this page after logging in.

    I am using this PHP SDK on localhost. I tried to assign localhost as the Connect URL, but Facebook doesn't accept it. I have read about the Cross-Domain Communication Channel and think this is what I need, but I don't know how to use it. Can anyone help me solve this problem, please?

    Update: Actually, what I need is the following. I have a web application and need to connect to Facebook and search for users on Facebook who have specific (the user will specify this) Likes and Interests. Any idea about solving my problem?

    Read the article

  • Force Oracle error on fetch

    - by Dan
    I am trying to debug a strange behavior in my application. In order to do so, I need to reproduce a scenario where a SQL SELECT query will throw an error, but only while actually fetching from the cursor, not while executing the query itself. Can this be done? Any error will do, but ORA-01722: invalid number seems like the obvious one to try. I created a table with the following:

        KEYCOL   INTEGER PRIMARY KEY
        OTHERCOL VARCHAR2(100)

    I then created a few hundred rows with unique values for the primary key and the value 'l' for OTHERCOL. I then ran a SELECT * query, picked a row somewhere in the middle, and updated it to the string 'abcd'. I ran the query

        SELECT KEYCOL, TO_NUMBER(OTHERCOL) FROM SOMETABLE

    hoping to get some rows of good data and then an error later. But I keep getting ORA-01722: invalid number on the execute step itself. I have gotten this behavior programmatically using ADO (with a server-side cursor) and JDBC, as well as from PL/SQL Developer. How can I get the result I'm looking for? Thanks. Edit - meant to add: when using ADO, I am only calling Command.Execute; I am not creating or opening a Recordset.

    Read the article

  • How can I configure a Linksys EA4500 + usb printer for network printing (without connect cloud)

    - by Larry Kyrala
    The documentation and classic firmware (2.0.37) for Cisco's Linksys EA4500 are a bit sparse on setup details. They say I can connect a USB printer, but then go on to try to sell the "Connect Cloud" remote management software. I don't want that. I just want to know how to set this up with the existing advanced firmware. Is it possible? AFAIK, to set up an IPP or LPD printer, there is usually some kind of queue configuration on the server (i.e. the EA4500 in this case), but I can't find it in the firmware. I have also been unable to find any existing protocols from Win7 or Mac OS X (Windows network share, IPP/LPD, etc.). I'm curious whether I need to have the "Storage" accounts active and connect to my router either via the local IP or the router name. There are a lot of unknowns here; it would help to know how this particular router actually works.

    Read the article

  • How Can I Generate Equivalent Output Using the CryptoAPI and the .NET Encryption (TripleDESCryptoServiceProvider)?

    - by S. Butts
    I have some C#/.NET code that encrypts and decrypts data using TripleDES encryption. It sticks to the sample code provided at MSDN pretty closely. The encryption piece looks like the following:

        TripleDESCryptoServiceProvider _desProvider = new TripleDESCryptoServiceProvider();

        // bytes for key and initialization vector
        // keyBytes is 24 bytes of stuff, vectorBytes is 8 bytes of stuff
        byte[] keyBytes;
        byte[] vectorBytes;

        FileStream fStream = File.Open(locationOfFile, FileMode.Create, FileAccess.Write);
        CryptoStream cStream = new CryptoStream(fStream,
            _desProvider.CreateEncryptor(keyBytes, vectorBytes),
            CryptoStreamMode.Write);
        BinaryWriter bWriter = new BinaryWriter(cStream);

        // write out encrypted data
        // rawData is a few bytes of binary information
        byte[] rawData;
        bWriter.Write(rawData);

    With encrypting and decrypting in C#, this all works like a charm. The problem is that I need to write a small Win32 utility that will duplicate the encryption above. I have tried several methods using the CryptoAPI, and I simply do not get output that the .NET piece can decrypt, no matter what I do. Can someone please tell me what the equivalent C++ code is that will produce the same output? I am not certain just what methods of the CryptoAPI the .NET functions use to encrypt the data. What options are used, and what method of generating the key is used? Before someone suggests that I just write it in C# anyway, or create some common library bridge for them, those options are unfortunately off the table. It really has to work in Win32 with .NET and without using a DLL. I have some leeway in changing the C# code. I apologize in advance if this is bone-headed, as I am new to encryption.

    Read the article
