Search Results

Search found 10433 results on 418 pages for 'session replication'.


  • Cisco 7206VXR: reducing CPU load

    - by naimson
    I have a 7206VXR (NPE-G2). At a rate of 140 kpps it reaches 80% CPU, so I am looking for ways to reduce the load. I could turn off NetFlow (though I don't want to: monitoring is highly important for me), and even that would only save 10-20%. At this moment, with 84 kpps, CPU sits at 58%. "show processes cpu sorted" gives me this:

      PID  Runtime(ms)  Invoked    uSecs  5Sec    1Min    5Min    TTY  Process
      109  163534600    537236763    304  35.38%  32.83%  16.85%    0  IP Input
       67     829396        52280  15864   0.15%   0.01%   0.00%    0  Per-minute Jobs
       68    5542736      3053476   1815   0.15%   0.18%   0.16%    0  Per-Second Jobs
       51     635852      1116315    569   0.07%   0.03%   0.02%    0  Net Background
      329     120396      4607274     26   0.07%   0.00%   0.00%    0  EIGRP-IPv4 Hello
      105      50508     95032488      0   0.07%   0.05%   0.05%    0  IPAM Manager
        6    4068580       476916   8531   0.00%   0.07%   0.05%    0  Check heaps
        7       7768         3634   2137   0.00%   0.00%   0.00%    0  Pool Manager
        8          0            1      0   0.00%   0.00%   0.00%    0  DiscardQ Backgro
       10          8          708     11   0.00%   0.00%   0.00%    0  WATCH_AFS
        5          0            1      0   0.00%   0.00%   0.00%    0  RO Notify Timers
       12          0            2      0   0.00%   0.00%   0.00%    0  ATM VC Auto Crea
        9          0            2      0   0.00%   0.00%   0.00%    0  Timers
       11          0            2      0   0.00%   0.00%   0.00%    0  ATM AutoVC Perio
       13        296       610532      0   0.00%   0.00%   0.00%    0  IPC Event Notifi
       16          0            1      0   0.00%   0.00%   0.00%    0  IPC Zone Manager
       17       3584      2980311      1   0.00%   0.00%   0.00%    0  IPC Periodic Tim
        4          0            1      0   0.00%   0.00%   0.00%    0  EDDRI_MAIN
       19          0            1      0   0.00%   0.00%   0.00%    0  IPC Process leve
       20          0            1      0   0.00%   0.00%   0.00%    0  IPC Seat Manager
       21         96       174453      0   0.00%   0.00%   0.00%    0  IPC Check Queue
       14          4        50890      0   0.00%   0.00%   0.00%    0  IPC Dynamic Cach
        3          0            1      0   0.00%   0.00%   0.00%    0  cpf_process_tpQ
       24        756       305371      2   0.00%   0.00%   0.00%    0  IPC Keep Alive M
       25       2340       610561      3   0.00%   0.00%   0.00%    0  IPC Loadometer
       22          0            1      0   0.00%   0.00%   0.00%    0  IPC Seat RX Cont
       15          0            1      0   0.00%   0.00%   0.00%    0  IPC Session Serv
       18       1620      2980310      0   0.00%   0.00%   0.00%    0  IPC Deferred Por
       29          0            1      0   0.00%   0.00%   0.00%    0  Exception contro

    sh run (grepped): http://pastie.org/5483194

    Hardware:

      c7200p-adventerprisek9-mz.151-4.M1.bin
      Cisco 7206VXR (NPE-G2) processor (revision A) with 917504K/65536K bytes of memory.
      Processor board ID 2xxxxxxx
      MPC7448 CPU at 1666Mhz, Implementation 0, Rev 2.2
      6 slot VXR midplane, Version 2.1
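
    Since almost all of the load is in the process-switched "IP Input" path, two common mitigations are confirming CEF switching is enabled and sampling NetFlow instead of accounting every packet. A hedged sketch of the relevant IOS configuration; the interface name and sampling rate are placeholder assumptions, not taken from the post:

      ! make sure packets are CEF-switched instead of punted to the IP Input process
      ip cef

      ! export flows from a 1-in-100 random sample rather than from every packet
      flow-sampler-map SAMPLE-100
       mode random one-out-of 100
      !
      interface GigabitEthernet0/1
       flow-sampler SAMPLE-100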


  • Cannot Install Windows 7 SP1 (64-bit)

    - by Clever Human
    I have tried every way I know to get Windows 7 SP1 to install, and it fails every time. Below is what looks like the relevant content of the CBS.log file; if further details or more information would help, I will gather them.

      2011-08-15 10:32:52, Info CBS Startup: Package: Package_for_KB976902~31bf3856ad364e35~amd64~~6.1.1.17514 completed startup processing, new state: Installed, original: Installed, targeted: Installed. hr = 0x80070490
      2011-08-15 10:32:52, Info CBS WER: Generating failure report for package: Package_for_KB976932~31bf3856ad364e35~amd64~~6.1.1.17514, status: 0x80070490, failure source: CBS Other, start state: Partially Installed, target state: Installed, client id: SP Coordinater Engine
      2011-08-15 10:32:52, Info CBS Failed to query DisableWerReporting flag. Assuming not set... [HRESULT = 0x80070002 - ERROR_FILE_NOT_FOUND]
      2011-08-15 10:32:52, Info CBS Failed to add %windir%\winsxs\pending.xml to WER report because it is missing. Continuing without it...
      2011-08-15 10:32:52, Info CBS Failed to add %windir%\winsxs\pending.xml.bad to WER report because it is missing. Continuing without it...
      2011-08-15 10:32:52, Info CBS SQM: Reporting package change completion for package: Package_for_KB976932~31bf3856ad364e35~amd64~~6.1.1.17514, current: Partially Installed, original: Partially Installed, target: Installed, status: 0x80070490, failure source: CBS Other, failure details: "(null)", client id: SP Coordinater Engine, initiated offline: False, execution sequence: 517, first merged sequence: 517
      2011-08-15 10:32:52, Info CBS SQM: Upload requested for report: PackageChangeEnd_Package_for_KB976932~31bf3856ad364e35~amd64~~6.1.1.17514, session id: 101457924, sample type: Standard
      2011-08-15 10:32:52, Info CBS SQM: Ignoring upload request because the sample type is not enabled: Standard

    I have downloaded the service pack and run it from the EXE, I have installed it from Windows Update, and I have run every troubleshooter I could find. Nothing has worked so far. Any advice would be appreciated.
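
    For CBS servicing-store errors like 0x80070490 (ERROR_NOT_FOUND), the usual first steps are the System File Checker and Microsoft's System Update Readiness Tool (KB947821), which repairs the component store that SP1 setup reads. A sketch of the sequence from an elevated command prompt; this is general guidance, not from the post:

      REM scan and repair protected system files, then review the CBS log again
      sfc /scannow

      REM next, install the System Update Readiness Tool (KB947821) and retry SP1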


  • Unusual Apache round-robin load balancing when one of the Tomcat servers is brought down

    - by srayker
    We are facing unusual behavior with round-robin load balancing on Apache when one of the Tomcat servers is brought down. Our setup: we have two Apache web servers on the front end using the mod_jk module for load balancing, with round robin for load distribution; we have also enabled session stickiness. These are followed by four Tomcat servers on which the applications are running. Sometimes under heavy load, if there is slowness in our DB tier, we find that one of the Tomcats eventually goes into a hung state and needs a restart. The moment we bounce that Tomcat, we see a spike in requests on one of the remaining servers, which then also goes into a hung state and needs a restart. Eventually all the servers reach the same condition. What beats me is why Apache transfers the whole load to one server instead of distributing it. We are now trying worker.balancer.method=B to see if this helps resolve our issue. Any thoughts on resolving the issue are greatly appreciated. Regards, srayker
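
    One plausible explanation for the pile-on is that when a sticky worker dies, all of its bound sessions fail over together rather than being spread out. For reference, a sticky, busyness-based balancer in mod_jk is configured entirely in workers.properties; a minimal sketch, where worker names, hosts, and ports are placeholder assumptions:

      worker.list=balancer

      worker.balancer.type=lb
      worker.balancer.balance_workers=tomcat1,tomcat2,tomcat3,tomcat4
      worker.balancer.sticky_session=1
      # method=B routes each new session to the worker with the fewest
      # requests currently in flight ("busyness") instead of pure round robin
      worker.balancer.method=B

      worker.tomcat1.type=ajp13
      worker.tomcat1.host=tomcat1.example.com
      worker.tomcat1.port=8009
      # tomcat2 through tomcat4 are defined the same way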


  • (squid): failed to find or read error text file.

    - by adam
    There is something in our ERR_NO_RELAY file that causes this error to be logged and the squid process to fail on startup. I can't show you the exact content of the file, but I can tell you it has several lines of JavaScript. When we remove the JavaScript, the problem goes away. The same file does not cause any issues on the other three instances of squid that we run internally. All instances of squid came from the same VM image, so they should be identical. We are unable to reproduce this issue except on the one box, and we are unable to debug more on that box right now because it is running in production. I know these files are interpreted so squid can substitute certain values available in the session, so a syntax error may have caused this issue; that does not explain why we cannot reproduce it on other (virtually identical) images. One difference is that the instance of squid that has the issue was under load when the issue occurred. Any suggestions/insight would be appreciated. Thanks!
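
    One detail that may matter: squid expands %-tokens (such as %U for the requested URL) inside custom error pages, so a literal percent sign in inline JavaScript can be misinterpreted, and the usual workaround is to double it. A sketch of the idea; the script content here is invented for illustration:

      <!-- inside ERR_NO_RELAY: %U is expanded by squid, literal percents are doubled -->
      <script type="text/javascript">
          // render "100%" without squid treating the % as a substitution token
          document.write("Loading: 100%%");
      </script>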


  • TGT validation fails, but only for one user

    - by wzzrd
    I'm seeing the weirdest thing here. I have a couple of RHEL 3, 4, and 5 machines that validate user credentials through Kerberos with an Active Directory domain controller as their KDC. This works for all of my users save one. That one account is unable to log into the RHEL 3 Linux machines and generates the following errors there:

      May 31 13:53:19 mybox sshd(pam_unix)[7186]: authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.0.0.1 user=user
      May 31 13:53:20 mybox sshd[7186]: pam_krb5: TGT verification failed for `user'
      May 31 13:53:20 mybox sshd[7186]: pam_krb5: authentication fails for `user'

    Other accounts, like my own, are fine:

      May 31 17:25:30 mybox sshd(pam_unix)[12913]: authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=10.0.0.1 user=myuser
      May 31 17:25:31 mybox sshd[12913]: pam_krb5: TGT for myuser successfully verified
      May 31 17:25:31 mybox sshd[12913]: pam_krb5: authentication succeeds for `myuser'
      May 31 17:25:31 mybox sshd(pam_unix)[12915]: session opened for user myuser by (uid=0)

    As you can see, TGT validation fails, and this happens only for this specific account, not for any other. The failing account's password has been reset, and I have inspected both user objects in Active Directory, but I see nothing out of the ordinary. If I have the failing account log into a RHEL 4 or 5 box there is no problem, so it must be RHEL 3 specific; but the fact that only one account suffers from this eludes me. Maybe someone has seen this before?
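
    TGT verification works by requesting a service ticket for the host's own principal and checking it against the local keytab, so the keytab on the RHEL 3 boxes is worth auditing even though only one account fails. A sketch of the usual checks; the host principal name is a placeholder assumption:

      # list the keys pam_krb5 can verify a TGT against
      klist -k /etc/krb5.keytab

      # confirm the host principal's key version number matches the KDC
      kvno host/mybox.example.com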


  • PPTP VPN on Server 2008 Enterprise

    - by Mike K
    I asked this question on Server Fault and was told that was not allowed, so I'm moving it here. I am running Windows Server 2008 Enterprise in my HOME network inside of VMware Workstation, in order to set up a PPTP VPN connection at home. I have correctly set up everything I needed to make it work, including opening all the ports, 1723 and 43 (GRE). I am able to connect just fine, but when I connect I don't have internet unless I uncheck "use remote gateway". The thing is, I want to use the remote gateway to route all my traffic through that connection. Can someone tell me why this isn't working and how to get it to work? When I have remote gateway checked and I run ipconfig, I don't get a default gateway for the VPN connection; it shows 0.0.0.0, whereas I'd assume a properly connected session should show 192.168.1.254 (my AT&T home router). Also, if I can't get the remote-gateway issue to work and I have to uncheck that box to get internet, does this mean my VPN session is no longer encrypted? I am fully aware that PPTP is the weakest VPN encryption out there, but still, having that extra layer of security when I'm on an unsecured wifi connection makes me feel a bit better. Thank you for all your help in advance. Someone told me I need to set up a gateway or router configured on the server. If that's the case, how do I go about telling the remote co
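
    Note that unchecking "use remote gateway" changes only routing, not encryption: traffic addressed to the VPN network still travels through the encrypted tunnel, while everything else leaves through the local gateway in the clear. If split tunneling is acceptable, a static route on the client can steer just the remote network into the tunnel. A sketch; the addresses are placeholder assumptions:

      REM send only the remote 192.168.1.0/24 network through the VPN
      REM (10.0.0.1 stands in for the address assigned to the PPP adapter)
      route add 192.168.1.0 mask 255.255.255.0 10.0.0.1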


  • Files gone or altered after MySQL [HY000][2002] error

    - by Psyberion
    I'm working on a rather small project, and today I got an SQLSTATE[HY000] [2002]: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' error. After a bit of googling and a few attempts to restart the mysqld service, I gave up and tried rebooting the computer. This did the trick: MySQL was running fine again. I did, however, get a more serious issue: some files were missing, others were altered, and a few rows in the MySQL database were gone. It's really strange; it's as if the whole project has been rolled back two or three days, and I have no clue why. Some additional details:

    - I save my files after every line of code; I'm religious about this, so I haven't lost the files that way.
    - I was accessing the server via SSH when the error occurred, so I did the programming and the reboot over SSH.
    - The server is a Raspberry Pi, model B, running Raspbian, on which I run Apache2.
    - I was viewing the site and had an active session when I rebooted the system.
    - The pages I lost worked just before this all happened.
    - The MySQL fault occurred when I tried to add a TEXT NOT NULL column to a table which had entries.

    Now, the amount of lost work isn't really that much, so I'm not really looking for help recovering the files. The reason I'm posting this is that I wonder how this happened, and why.
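
    On a Raspberry Pi, the usual suspect for this kind of rollback is SD-card corruption: recent writes that were still in the page cache or the card's controller never reached flash, and the filesystem journal replay after the reboot restored an older consistent state. A hedged first check, run with the filesystem unmounted or from another machine (the device name is an assumption; /dev/mmcblk0p2 is the common Raspbian root partition):

      # force a full consistency check of the root partition
      sudo fsck -f /dev/mmcblk0p2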


  • Hibernate exception: what's wrong? (org.hibernate.InvalidMappingException)

    - by user195970
    I use NetBeans 6.7.1 to write a Hibernate "hello world", but I get some errors. Please help me; thank you very much. My exception:

      init:
      deps-module-jar:
      deps-ear-jar:
      deps-jar:
      Copying 1 file to F:\Documents and Settings\My Dropbox\DropboxNetBeanProjects\loginspring\build\web\WEB-INF\classes
      compile-single:
      run-main:
      Oct 25, 2009 2:44:05 AM org.hibernate.cfg.Environment <clinit>
      INFO: Hibernate 3.2.5
      Oct 25, 2009 2:44:05 AM org.hibernate.cfg.Environment <clinit>
      INFO: hibernate.properties not found
      Oct 25, 2009 2:44:05 AM org.hibernate.cfg.Environment buildBytecodeProvider
      INFO: Bytecode provider name : cglib
      Oct 25, 2009 2:44:05 AM org.hibernate.cfg.Environment <clinit>
      INFO: using JDK 1.4 java.sql.Timestamp handling
      Oct 25, 2009 2:44:05 AM org.hibernate.cfg.Configuration configure
      INFO: configuring from resource: /hibernate.cfg.xml
      Oct 25, 2009 2:44:05 AM org.hibernate.cfg.Configuration getConfigurationInputStream
      INFO: Configuration resource: /hibernate.cfg.xml
      Oct 25, 2009 2:44:06 AM org.hibernate.cfg.Configuration addResource
      INFO: Reading mappings from resource : hibernate/Tbluser.hbm.xml
      Oct 25, 2009 2:44:06 AM org.hibernate.util.XMLHelper$ErrorLogger error
      SEVERE: Error parsing XML: XML InputStream(1) Document is invalid: no grammar found.
      Oct 25, 2009 2:44:06 AM org.hibernate.util.XMLHelper$ErrorLogger error
      SEVERE: Error parsing XML: XML InputStream(1) Document root element "hibernate-mapping", must match DOCTYPE root "null".
      Exception in thread "main" org.hibernate.InvalidMappingException: Could not parse mapping document from resource hibernate/Tbluser.hbm.xml
          at org.hibernate.cfg.Configuration.addResource(Configuration.java:569)
          at org.hibernate.cfg.Configuration.parseMappingElement(Configuration.java:1587)
          at org.hibernate.cfg.Configuration.parseSessionFactory(Configuration.java:1555)
          at org.hibernate.cfg.Configuration.doConfigure(Configuration.java:1534)
          at org.hibernate.cfg.Configuration.doConfigure(Configuration.java:1508)
          at org.hibernate.cfg.Configuration.configure(Configuration.java:1428)
          at org.hibernate.cfg.Configuration.configure(Configuration.java:1414)
          at hibernate.CreateTest.main(CreateTest.java:22)
      Caused by: org.hibernate.InvalidMappingException: Could not parse mapping document from invalid mapping
          at org.hibernate.cfg.Configuration.addInputStream(Configuration.java:502)
          at org.hibernate.cfg.Configuration.addResource(Configuration.java:566)
          ... 7 more
      Caused by: org.xml.sax.SAXParseException: Document is invalid: no grammar found.
          at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.createSAXParseException(ErrorHandlerWrapper.java:195)
          at com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper.error(ErrorHandlerWrapper.java:131)
          at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:384)
          at com.sun.org.apache.xerces.internal.impl.XMLErrorReporter.reportError(XMLErrorReporter.java:318)
          at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.scanStartElement(XMLNSDocumentScannerImpl.java:250)
          at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl$NSContentDriver.scanRootElementHook(XMLNSDocumentScannerImpl.java:626)
          at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl$FragmentContentDriver.next(XMLDocumentFragmentScannerImpl.java:3095)
          at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$PrologDriver.next(XMLDocumentScannerImpl.java:921)
          at com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl.next(XMLDocumentScannerImpl.java:648)
          at com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl.next(XMLNSDocumentScannerImpl.java:140)
          at com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl.scanDocument(XMLDocumentFragmentScannerImpl.java:510)
          at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:807)
          at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:737)
          at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:107)
          at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1205)
          at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:522)
          at org.dom4j.io.SAXReader.read(SAXReader.java:465)
          at org.hibernate.cfg.Configuration.addInputStream(Configuration.java:499)
          ... 8 more
      Java Result: 1
      BUILD SUCCESSFUL (total time: 1 second)

    hibernate.cfg.xml:

      <?xml version="1.0" encoding="UTF-8"?>
      <!DOCTYPE hibernate-configuration PUBLIC
          "-//Hibernate/Hibernate Configuration DTD 3.0//EN"
          "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">
      <hibernate-configuration>
        <session-factory>
          <property name="hibernate.dialect">org.hibernate.dialect.MySQLDialect</property>
          <property name="hibernate.connection.driver_class">com.mysql.jdbc.Driver</property>
          <property name="hibernate.connection.url">jdbc:mysql://localhost:3306/hibernate</property>
          <property name="hibernate.connection.username">root</property>
        </session-factory>
      </hibernate-configuration>

    Tbluser.hbm.xml:

      <?xml version="1.0"?>
      <!DOCTYPE hibernate-mapping PUBLIC
          "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
          "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
      <!-- Generated Oct 25, 2009 2:37:30 AM by Hibernate Tools 3.2.1.GA -->
      <hibernate-mapping>
        <class name="hibernate.Tbluser" table="tbluser" catalog="hibernate">
          <id name="userId" type="java.lang.Integer">
            <column name="userID" />
            <generator class="identity" />
          </id>
          <property name="username" type="string">
            <column name="username" length="50" />
          </property>
          <property name="password" type="string">
            <column name="password" length="50" />
          </property>
          <property name="email" type="string">
            <column name="email" length="50" />
          </property>
          <property name="phone" type="string">
            <column name="phone" length="50" />
          </property>
          <property name="groupId" type="java.lang.Integer">
            <column name="groupID" />
          </property>
        </class>
      </hibernate-mapping>

    Tbluser.java:

      package hibernate;
      // Generated Oct 25, 2009 2:37:30 AM by Hibernate Tools 3.2.1.GA

      /**
       * Tbluser generated by hbm2java
       */
      public class Tbluser implements java.io.Serializable {

          private Integer userId;
          private String username;
          private String password;
          private String email;
          private String phone;
          private Integer groupId;

          public Tbluser() {
          }

          public Tbluser(String username, String password, String email, String phone, Integer groupId) {
              this.username = username;
              this.password = password;
              this.email = email;
              this.phone = phone;
              this.groupId = groupId;
          }

          public Integer getUserId() { return this.userId; }
          public void setUserId(Integer userId) { this.userId = userId; }
          public String getUsername() { return this.username; }
          public void setUsername(String username) { this.username = username; }
          public String getPassword() { return this.password; }
          public void setPassword(String password) { this.password = password; }
          public String getEmail() { return this.email; }
          public void setEmail(String email) { this.email = email; }
          public String getPhone() { return this.phone; }
          public void setPhone(String phone) { this.phone = phone; }
          public Integer getGroupId() { return this.groupId; }
          public void setGroupId(Integer groupId) { this.groupId = groupId; }
      }
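
    The two SEVERE lines ("no grammar found", root element must match DOCTYPE root "null") mean the parser received a mapping document with no DOCTYPE at all, which usually indicates that the copy of Tbluser.hbm.xml actually loaded from the runtime classpath (for example under build/web/WEB-INF/classes) is stale or was copied without its header. Whatever copy is deployed must begin exactly with the declaration already shown in the source above:

      <?xml version="1.0"?>
      <!DOCTYPE hibernate-mapping PUBLIC
          "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
          "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">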


  • iPhone: Help with AudioToolbox leak (stack trace and code included)

    - by editor guy
    Part of this app is a "Scream" button that plays random screams from cast members of a TV show. I have to bang on the app quite a while to see a memory leak in Instruments, but it's there, showing up occasionally (every 45 seconds to 2 minutes). The leak is 3.50 KB when it occurs. I haven't been able to crack it for several hours; any help is appreciated. Instruments says this is the offending line:

      [appSoundPlayer play];

    which is reached from line 9 of the stack trace below:

      0   libSystem.B.dylib  malloc
      1   libSystem.B.dylib  pthread_create
      2   AudioToolbox       CAPThread::Start()
      3   AudioToolbox       GenericRunLoopThread::Start()
      4   AudioToolbox       AudioQueueNew(bool, AudioStreamBasicDescription const*, TCACallback const&, CACallbackTarget const&, unsigned long, OpaqueAudioQueue*)
      5   AudioToolbox       AudioQueueNewOutput
      6   AVFoundation       allocAudioQueue(AVAudioPlayer, AudioPlayerImpl*)
      7   AVFoundation       prepareToPlayQueue(AVAudioPlayer*, AudioPlayerImpl*)
      8   AVFoundation       -[AVAudioPlayer prepareToPlay]
      9   Scream Queens      -[ScreamViewController scream:] /Users/laptop2/Desktop/ScreamQueens Versions/ScreamQueens25/Scream Queens/Classes/../ScreamViewController.m:210
      10  CoreFoundation     -[NSObject performSelector:withObject:withObject:]
      11  UIKit              -[UIApplication sendAction:to:from:forEvent:]
      12  UIKit              -[UIApplication sendAction:toTarget:fromSender:forEvent:]
      13  UIKit              -[UIControl sendAction:to:forEvent:]
      14  UIKit              -[UIControl(Internal) _sendActionsForEvents:withEvent:]
      15  UIKit              -[UIControl touchesEnded:withEvent:]
      16  UIKit              -[UIWindow _sendTouchesForEvent:]
      17  UIKit              -[UIWindow sendEvent:]
      18  UIKit              -[UIApplication sendEvent:]
      19  UIKit              _UIApplicationHandleEvent
      20  GraphicsServices   PurpleEventCallback
      21  CoreFoundation     CFRunLoopRunSpecific
      22  CoreFoundation     CFRunLoopRunInMode
      23  GraphicsServices   GSEventRunModal
      24  UIKit              -[UIApplication _run]
      25  UIKit              UIApplicationMain
      26  Scream Queens      main /Users/laptop2/Desktop/ScreamQueens Versions/ScreamQueens25/Scream Queens/main.m:14
      27  Scream Queens      start

    Here's the .h:

      #import <UIKit/UIKit.h>
      #import <AVFoundation/AVFoundation.h>
      #import <MediaPlayer/MediaPlayer.h>
      #import <AudioToolbox/AudioToolbox.h>
      #import <MessageUI/MessageUI.h>
      #import <MessageUI/MFMailComposeViewController.h>

      @interface ScreamViewController : UIViewController <UIApplicationDelegate, AVAudioPlayerDelegate, MFMailComposeViewControllerDelegate> {
          // AudioPlayer related
          AVAudioPlayer *appSoundPlayer;
          NSURL *soundFileURL;
          BOOL interruptedOnPlayback;
          BOOL playing;

          // Scream button related
          IBOutlet UIButton *screamButton;
          int currentScreamIndex;
          NSString *currentScream;
          NSMutableArray *screams;
          NSMutableArray *personScreaming;
          NSMutableArray *photoArray;
          int currentSayingsIndex;
          NSString *currentButtonSaying;
          NSMutableArray *funnyButtonSayings;
          IBOutlet UILabel *funnyButtonSayingsLabel;
          IBOutlet UILabel *personScreamingField;
          IBOutlet UIImageView *personScreamingImage;

          // Mailing the scream related
          IBOutlet UILabel *mailStatusMessage;
          IBOutlet UIButton *shareButton;
      }

      // AudioPlayer related
      @property (nonatomic, retain) AVAudioPlayer *appSoundPlayer;
      @property (nonatomic, retain) NSURL *soundFileURL;
      @property (readwrite) BOOL interruptedOnPlayback;
      @property (readwrite) BOOL playing;

      // Scream button related
      @property (nonatomic, retain) UIButton *screamButton;
      @property (nonatomic, retain) NSMutableArray *screams;
      @property (nonatomic, retain) NSMutableArray *personScreaming;
      @property (nonatomic, retain) NSMutableArray *photoArray;
      @property (nonatomic, retain) UILabel *personScreamingField;
      @property (nonatomic, retain) UIImageView *personScreamingImage;
      @property (nonatomic, retain) NSMutableArray *funnyButtonSayings;
      @property (nonatomic, retain) UILabel *funnyButtonSayingsLabel;

      // Mailing the scream related
      @property (nonatomic, retain) IBOutlet UILabel *mailStatusMessage;
      @property (nonatomic, retain) IBOutlet UIButton *shareButton;

      // Scream button
      - (IBAction) scream: (id) sender;

      // Mail the scream
      - (IBAction) showPicker: (id)sender;
      - (void)displayComposerSheet;
      - (void)launchMailAppOnDevice;

      @end

    Here's the top of the .m:

      #import "ScreamViewController.h"

      // top of code has the audio session callback function for responding to
      // audio route changes (from Apple's code), then my code continues...

      @implementation ScreamViewController

      @synthesize appSoundPlayer;          // AVAudioPlayer object for playing the selected scream
      @synthesize soundFileURL;            // Path to the scream
      @synthesize interruptedOnPlayback;   // Was application interrupted during audio playback
      @synthesize playing;                 // Track playing/not playing state
      @synthesize screamButton;            // Press this button, girls scream.
      @synthesize screams;                 // Mutable array holding strings pointing to sound files of screams.
      @synthesize personScreaming;         // Mutable array tracking the person doing the screaming
      @synthesize photoArray;              // Mutable array holding strings pointing to photos of screaming girls
      @synthesize personScreamingField;    // Field updates to announce which girl is screaming.
      @synthesize personScreamingImage;    // Updates to show image of the screamer.
      @synthesize funnyButtonSayings;      // Mutable array holding the sayings
      @synthesize funnyButtonSayingsLabel; // Label that updates with the funnyButtonSayings
      @synthesize mailStatusMessage;       // did the email go out
      @synthesize shareButton;             // share scream via email

    The next block contains the offending line:

      - (IBAction) scream: (id) sender {
          // Play a click sound effect
          SystemSoundID soundID;
          NSString *sfxPath = [[NSBundle mainBundle] pathForResource:@"aClick" ofType:@"caf"];
          AudioServicesCreateSystemSoundID((CFURLRef)[NSURL fileURLWithPath:sfxPath], &soundID);
          AudioServicesPlaySystemSound (soundID);

          // Because someone may slam the scream button over and over,
          // must stop current sound, then begin next
          if ([self appSoundPlayer] != nil) {
              [[self appSoundPlayer] setDelegate:nil];
              [[self appSoundPlayer] stop];
              [self setAppSoundPlayer: nil];
          }

          // After selecting a random index in the array (did that in viewDidLoad),
          // we move to the next scream on each click.
          // First check: are we past the end of the array?
          if (currentScreamIndex == [screams count]) {
              currentScreamIndex = 0;
          }

          // Get the string at the index in the screams array
          currentScream = [screams objectAtIndex: currentScreamIndex];

          // Get the string at the index in the personScreaming array
          NSString *screamer = [personScreaming objectAtIndex:currentScreamIndex];

          // Log the string to the console
          NSLog (@"playing scream: %@", screamer);

          // Display the string in the personScreamingField field
          NSString *listScreamer = [NSString stringWithFormat:@"scream by: %@", screamer];
          [personScreamingField setText:listScreamer];

          // Gets the file system path to the scream to play.
          NSString *soundFilePath = [[NSBundle mainBundle] pathForResource: currentScream ofType: @"caf"];

          // Converts the sound's file path to an NSURL object
          NSURL *newURL = [[NSURL alloc] initFileURLWithPath: soundFilePath];
          self.soundFileURL = newURL;
          [newURL release];

          [[AVAudioSession sharedInstance] setDelegate: self];
          [[AVAudioSession sharedInstance] setCategory: AVAudioSessionCategoryPlayback error: nil];

          // Registers the audio route change listener callback function
          AudioSessionAddPropertyListener (kAudioSessionProperty_AudioRouteChange,
                                           audioRouteChangeListenerCallback,
                                           self);

          // Activates the audio session.
          NSError *activationError = nil;
          [[AVAudioSession sharedInstance] setActive: YES error: &activationError];

          // Instantiates the AVAudioPlayer object, initializing it with the sound
          AVAudioPlayer *newPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL: soundFileURL error: nil];

          // Error check and continue
          if (newPlayer != nil) {
              self.appSoundPlayer = newPlayer;
              [newPlayer release];
              [appSoundPlayer prepareToPlay];
              [appSoundPlayer setVolume: 1.0];
              [appSoundPlayer setDelegate:self];

              // NEXT LINE IS FLAGGED BY INSTRUMENTS AS LEAKY
              [appSoundPlayer play];
              playing = YES;

              // Get the string at the index in the photoArray array
              NSString *screamerPic = [photoArray objectAtIndex:currentScreamIndex];

              // Log the string to the console
              NSLog (@"displaying photo: %@", screamerPic);

              // Display the image of the person screaming
              personScreamingImage.image = [UIImage imageNamed:screamerPic];

              // Show the share button
              shareButton.hidden = NO;
              mailStatusMessage.hidden = NO;
              mailStatusMessage.text = @"share!";

              // Get the string at the index in the funnySayings array
              currentSayingsIndex = random() % [funnyButtonSayings count];
              currentButtonSaying = [funnyButtonSayings objectAtIndex: currentSayingsIndex];
              NSString *theSaying = [funnyButtonSayings objectAtIndex:currentSayingsIndex];
              [funnyButtonSayingsLabel setText: theSaying];

              currentScreamIndex++;
          }
      }

    Here's my dealloc:

      - (void)dealloc {
          [appSoundPlayer stop];
          [appSoundPlayer release], appSoundPlayer = nil;
          [screamButton release], screamButton = nil;
          [mailStatusMessage release], mailStatusMessage = nil;
          [personScreamingField release], personScreamingField = nil;
          [personScreamingImage release], personScreamingImage = nil;
          [funnyButtonSayings release], funnyButtonSayings = nil;
          [funnyButtonSayingsLabel release], funnyButtonSayingsLabel = nil;
          [screams release], screams = nil;
          [personScreaming release], personScreaming = nil;
          [soundFileURL release];
          [super dealloc];
      }

      @end

    Thanks so much for reading this far! Any input appreciated.
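
    Separately from the flagged line, note that scream: calls AudioServicesCreateSystemSoundID on every tap and never calls AudioServicesDisposeSystemSoundID, so the click sound's toolbox resources accumulate. A hedged sketch of one fix, creating the click sound once and reusing it (the static cache is illustrative, not from the original code):

      // create the click SystemSoundID once and reuse it on every tap
      static SystemSoundID clickSoundID = 0;
      if (clickSoundID == 0) {
          NSString *sfxPath = [[NSBundle mainBundle] pathForResource:@"aClick" ofType:@"caf"];
          AudioServicesCreateSystemSoundID((CFURLRef)[NSURL fileURLWithPath:sfxPath], &clickSoundID);
      }
      AudioServicesPlaySystemSound(clickSoundID);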


  • OpenLDAP and SSL

    - by Stormshadow
    I am having trouble trying to connect to a secure OpenLDAP server which I have set up. On running my LDAP client code with java -Djavax.net.debug=ssl LDAPConnector I get the following exception trace (Java version 1.6.0_17):

      trigger seeding of SecureRandom
      done seeding SecureRandom
      %% No cached client session
      *** ClientHello, TLSv1
      RandomCookie: GMT: 1256110124 bytes = { 224, 19, 193, 148, 45, 205, 108, 37, 101, 247, 112, 24, 157, 39, 111, 177, 43, 53, 206, 224, 68, 165, 55, 185, 54, 203, 43, 91 }
      Session ID: {}
      Cipher Suites: [SSL_RSA_WITH_RC4_128_MD5, SSL_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_DSS_WITH_AES_128_CBC_SHA, SSL_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA, SSL_DHE_DSS_WITH_3DES_EDE_CBC_SHA, SSL_RSA_WITH_DES_CBC_SHA, SSL_DHE_RSA_WITH_DES_CBC_SHA, SSL_DHE_DSS_WITH_DES_CBC_SHA, SSL_RSA_EXPORT_WITH_RC4_40_MD5, SSL_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_RSA_EXPORT_WITH_DES40_CBC_SHA, SSL_DHE_DSS_EXPORT_WITH_DES40_CBC_SHA]
      Compression Methods: { 0 }
      ***
      Thread-0, WRITE: TLSv1 Handshake, length = 73
      Thread-0, WRITE: SSLv2 client hello message, length = 98
      Thread-0, received EOFException: error
      Thread-0, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
      Thread-0, SEND TLSv1 ALERT: fatal, description = handshake_failure
      Thread-0, WRITE: TLSv1 Alert, length = 2
      Thread-0, called closeSocket()
      main, handling exception: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
      javax.naming.CommunicationException: simple bind failed: ldap.natraj.com:636 [Root exception is javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake]
          at com.sun.jndi.ldap.LdapClient.authenticate(Unknown Source)
          at com.sun.jndi.ldap.LdapCtx.connect(Unknown Source)
          at com.sun.jndi.ldap.LdapCtx.<init>(Unknown Source)
          at com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(Unknown Source)
          at com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs(Unknown Source)
          at com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(Unknown Source)
          at com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(Unknown Source)
          at javax.naming.spi.NamingManager.getInitialContext(Unknown Source)
          at javax.naming.InitialContext.getDefaultInitCtx(Unknown Source)
          at javax.naming.InitialContext.init(Unknown Source)
          at javax.naming.InitialContext.<init>(Unknown Source)
          at javax.naming.directory.InitialDirContext.<init>(Unknown Source)
          at LDAPConnector.CallSecureLDAPServer(LDAPConnector.java:43)
          at LDAPConnector.main(LDAPConnector.java:237)
      Caused by: javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake
          at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(Unknown Source)
          at com.sun.net.ssl.internal.ssl.SSLSocketImpl.performInitialHandshake(Unknown Source)
          at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readDataRecord(Unknown Source)
          at com.sun.net.ssl.internal.ssl.AppInputStream.read(Unknown Source)
          at java.io.BufferedInputStream.fill(Unknown Source)
          at java.io.BufferedInputStream.read1(Unknown Source)
          at java.io.BufferedInputStream.read(Unknown Source)
          at com.sun.jndi.ldap.Connection.run(Unknown Source)
          at java.lang.Thread.run(Unknown Source)
      Caused by: java.io.EOFException: SSL peer shut down incorrectly
          at com.sun.net.ssl.internal.ssl.InputRecord.read(Unknown Source)
          ... 9 more

    However, I am able to connect to the same secure LDAP server if I use another version of Java (1.6.0_14). I have created and installed the server certificates in the cacerts of both JREs, as described in this guide: OpenLDAP with SSL. When I run ldapsearch -x on the server I get:

      # extended LDIF
      #
      # LDAPv3
      # base <dc=localdomain> (default) with scope subtree
      # filter: (objectclass=*)
      # requesting: ALL
      #

      # localdomain
      dn: dc=localdomain
      objectClass: top
      objectClass: dcObject
      objectClass: organization
      o: localdomain
      dc: localdomain

      # admin, localdomain
      dn: cn=admin,dc=localdomain
      objectClass: simpleSecurityObject
      objectClass: organizationalRole
      cn: admin
      description: LDAP administrator

      # search result
      search: 2
      result: 0 Success

      # numResponses: 3
      # numEntries: 2

    On running openssl s_client -connect ldap.natraj.com:636 -showcerts, I obtain the self-signed certificate. My slapd.conf file is as follows:

      #######################################################################
      # Global Directives:

      # Features to permit
      #allow bind_v2

      # Schema and objectClass definitions
      include         /etc/ldap/schema/core.schema
      include         /etc/ldap/schema/cosine.schema
      include         /etc/ldap/schema/nis.schema
      include         /etc/ldap/schema/inetorgperson.schema

      # Where the pid file is put. The init.d script
      # will not stop the server if you change this.
      pidfile         /var/run/slapd/slapd.pid

      # List of arguments that were passed to the server
      argsfile        /var/run/slapd/slapd.args

      # Read slapd.conf(5) for possible values
      loglevel        none

      # Where the dynamically loaded modules are stored
      modulepath      /usr/lib/ldap
      moduleload      back_hdb

      # The maximum number of entries that is returned for a search operation
      sizelimit       500

      # The tool-threads parameter sets the actual amount of cpu's that is used
      # for indexing.
      tool-threads    1

      #######################################################################
      # Specific Backend Directives for hdb:
      # Backend specific directives apply to this backend until another
      # 'backend' directive occurs
      backend         hdb

      #######################################################################
      # Specific Backend Directives for 'other':
      # Backend specific directives apply to this backend until another
      # 'backend' directive occurs
      #backend        <other>

      #######################################################################
      # Specific Directives for database #1, of type hdb:
      # Database specific directives apply to this databasse until another
      # 'database' directive occurs
      database        hdb

      # The base of your directory in database #1
      suffix          "dc=localdomain"

      # rootdn directive for specifying a superuser on the database. This is needed
      # for syncrepl.
      rootdn          "cn=admin,dc=localdomain"

      # Where the database file are physically stored for database #1
      directory       "/var/lib/ldap"

      # The dbconfig settings are used to generate a DB_CONFIG file the first
      # time slapd starts. They do NOT override existing an existing DB_CONFIG
      # file. You should therefore change these settings in DB_CONFIG directly
      # or remove DB_CONFIG and restart slapd for changes to take effect.

      # For the Debian package we use 2MB as default but be sure to update this
      # value if you have plenty of RAM
      dbconfig set_cachesize 0 2097152 0

      # Sven Hartge reported that he had to set this value incredibly high
      # to get slapd running at all. See http://bugs.debian.org/303057 for more
      # information.

      # Number of objects that can be locked at the same time.
      dbconfig set_lk_max_objects 1500
      # Number of locks (both requested and granted)
      dbconfig set_lk_max_locks 1500
      # Number of lockers
      dbconfig set_lk_max_lockers 1500

      # Indexing options for database #1
      index           objectClass eq

      # Save the time that the entry gets modified, for database #1
      lastmod         on

      # Checkpoint the BerkeleyDB database periodically in case of system
      # failure and to speed slapd shutdown.
      checkpoint      512 30

      # Where to store the replica logs for database #1
      # replogfile    /var/lib/ldap/replog

      # The userPassword by default can be changed
      # by the entry owning it if they are authenticated.
      # Others should not be able to see it, except the
      # admin entry below
      # These access lines apply to database #1 only
      access to attrs=userPassword,shadowLastChange
              by dn="cn=admin,dc=localdomain" write
              by anonymous auth
              by self write
              by * none

      # Ensure read access to the base for things like
      # supportedSASLMechanisms. Without this you may
      # have problems with SASL not knowing what
      # mechanisms are available and the like.
      # Note that this is covered by the 'access to *'
      # ACL below too but if you change that as people
      # are wont to do you'll still need this if you
      # want SASL (and possible other things) to work
      # happily.
      access to dn.base="" by * read

      # The admin dn has full write access, everyone else
      # can read everything.
      access to *
              by dn="cn=admin,dc=localdomain" write
              by * read

      # For Netscape Roaming support, each user gets a roaming
      # profile for which they have write access to
      #access to dn=".*,ou=Roaming,o=morsnet"
      #        by dn="cn=admin,dc=localdomain" write
      #        by dnattr=owner write

      #######################################################################
      # Specific Directives for database #2, of type 'other' (can be hdb too):
      # Database specific directives apply to this databasse until another
      # 'database' directive occurs
      #database        <other>

      # The base of your directory for database #2
      #suffix         "dc=debian,dc=org"

      #######################################################################
      # SSL:
      # Uncomment the following lines to enable SSL and use the default
      # snakeoil certificates.
      #TLSCertificateFile /etc/ssl/certs/ssl-cert-snakeoil.pem
      #TLSCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
      TLSCipherSuite TLS_RSA_AES_256_CBC_SHA
      TLSCACertificateFile /etc/ldap/ssl/server.pem
      TLSCertificateFile /etc/ldap/ssl/server.pem
      TLSCertificateKeyFile /etc/ldap/ssl/server.pem

    My ldap.conf file is:

      #
      # LDAP Defaults
      #
      # See ldap.conf(5) for details
      # This file should be world readable but not world writable.

      HOST    ldap.natraj.com
      PORT    636
      BASE    dc=localdomain
      URI     ldaps://ldap.natraj.com

      TLS_CACERT  /etc/ldap/ssl/server.pem
      TLS_REQCERT allow

      #SIZELIMIT  12
      #TIMELIMIT  15
      #DEREF      never
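
    One thing worth checking given the trace above: the ClientHello offers TLS_RSA_WITH_AES_128_CBC_SHA but no AES-256 suite (a stock JRE without the unlimited-strength JCE policy files cannot negotiate AES-256), while slapd is pinned to the single suite TLS_RSA_AES_256_CBC_SHA. With no cipher in common, the server drops the connection during the handshake, exactly as logged. A hedged sketch of a less restrictive server setting (GnuTLS-style priority string, as used by Debian builds of OpenLDAP; verify against the TLS library your slapd links):

      # allow the default cipher set instead of pinning a single AES-256 suite
      TLSCipherSuite NORMAL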


  • Connection drop problem with Hibernate, MySQL, and c3p0

    - by user344788
    Hi all, this is an issue which I have seen all across the web; I will bring it up again because I still don't have a fix for it. I am using Hibernate 3, MySQL 5, and the latest c3p0 jar, and I am getting a broken pipe exception. Following is my hibernate.cfg file:

      <property name="connection.driver_class">com.mysql.jdbc.Driver</property>
      <property name="dialect">org.hibernate.dialect.MySQLDialect</property>
      <property name="hibernate.show_sql">true</property>
      <property name="hibernate.use_sql_comments">true</property>
      <property name="hibernate.current_session_context_class">thread</property>
      <property name="connection.autoReconnect">true</property>
      <property name="connection.autoReconnectForPools">true</property>
      <property name="connection.is-connection-validation-required">true</property>
      <!--<property name="c3p0.min_size">5</property>
      <property name="c3p0.max_size">20</property>
      <property name="c3p0.timeout">1800</property>
      <property name="c3p0.max_statements">50</property>
      --><property name="hibernate.connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
      <property name="hibernate.c3p0.acquireRetryAttempts">30</property>
      <property name="hibernate.c3p0.acquireIncrement">5</property>
      <property name="hibernate.c3p0.automaticTestTable">C3P0TestTable</property>
      <property name="hibernate.c3p0.idleConnectionTestPeriod">36000</property>
      <property name="hibernate.c3p0.initialPoolSize">20</property>
      <property name="hibernate.c3p0.maxPoolSize">100</property>
      <property name="hibernate.c3p0.maxIdleTime">1200</property>
      <property name="hibernate.c3p0.maxStatements">50</property>
      <property name="hibernate.c3p0.minPoolSize">10</property>-->

    My connection pooling is working fine during the day, but once the application sits idle overnight, the next day I find it giving me a broken-connection error. Below is my code for the session factory:

      public class HibernateUtil {

          private static Logger log = Logger.getLogger(HibernateUtil.class);
          //private static Log log = LogFactory.getLog(HibernateUtil.class);

          private static Configuration configuration;
          private static SessionFactory sessionFactory;

          static {
              // Create the initial SessionFactory from the default configuration files
              try {
                  log.debug("Initializing Hibernate");

                  // Read hibernate.properties, if present
                  configuration = new Configuration();

                  // Use annotations: configuration = new AnnotationConfiguration();

                  // Read hibernate.cfg.xml (has to be present)
                  configuration.configure();

                  // Build and store (either in JNDI or static variable)
                  rebuildSessionFactory(configuration);

                  log.debug("Hibernate initialized, call HibernateUtil.getSessionFactory()");
              } catch (Throwable ex) {
                  // We have to catch Throwable, otherwise we will miss
                  // NoClassDefFoundError and other subclasses of Error
                  log.error("Building SessionFactory failed.", ex);
                  throw new ExceptionInInitializerError(ex);
              }
          }

          /**
           * Returns the Hibernate configuration that was used to build the SessionFactory.
           *
           * @return Configuration
           */
          public static Configuration getConfiguration() {
              return configuration;
          }

          /**
           * Returns the global SessionFactory either from a static variable or a JNDI lookup.
           *
           * @return SessionFactory
           */
          public static SessionFactory getSessionFactory() {
              String sfName = configuration.getProperty(Environment.SESSION_FACTORY_NAME);
              System.out.println("Current s name is " + sfName);
              if (sfName != null) {
                  System.out.println("Looking up SessionFactory in JNDI");
                  log.debug("Looking up SessionFactory in JNDI");
                  try {
                      System.out.println("Returning new sssion factory");
                      return (SessionFactory) new InitialContext().lookup(sfName);
                  } catch (NamingException ex) {
                      throw new RuntimeException(ex);
                  }
              } else if (sessionFactory == null) {
                  System.out.println("calling rebuild session factory now");
                  rebuildSessionFactory();
              }
              return sessionFactory;
          }

          /**
           * Closes the current SessionFactory and releases all resources.
           * <p>
           * The only other method that can be called on HibernateUtil
           * after this one is rebuildSessionFactory(Configuration).
           */
          public static void shutdown() {
              log.debug("Shutting down Hibernate");
              // Close caches and connection pools
              getSessionFactory().close();
              // Clear static variables
              sessionFactory = null;
          }

          /**
           * Rebuild the SessionFactory with the static Configuration.
           * <p>
           * Note that this method should only be used with static SessionFactory
           * management, not with JNDI or any other external registry. This method also closes
           * the old static variable SessionFactory before, if it is still open.
           */
          public static void rebuildSessionFactory() {
              log.debug("Using current Configuration to rebuild SessionFactory");
              rebuildSessionFactory(configuration);
          }

          /**
           * Rebuild the SessionFactory with the given Hibernate Configuration.
           * <p>
           * HibernateUtil does not configure() the given Configuration object,
           * it directly calls buildSessionFactory(). This method also closes
           * the old static variable SessionFactory before, if it is still open.
           *
           * @param cfg
           */
          public static void rebuildSessionFactory(Configuration cfg) {
              log.debug("Rebuilding the SessionFactory from given Configuration");
              if (sessionFactory != null && !sessionFactory.isClosed())
                  sessionFactory.close();
              if (cfg.getProperty(Environment.SESSION_FACTORY_NAME) != null) {
                  log.debug("Managing SessionFactory in JNDI");
                  cfg.buildSessionFactory();
              } else {
                  log.debug("Holding SessionFactory in static variable");
                  sessionFactory = cfg.buildSessionFactory();
              }
              configuration = cfg;
          }
      }

    I have only select operations, and below is the method used most often to execute my select queries. One tricky thing I do not understand: in my findById method I call getSession().beginTransaction(), without which I get an error saying this cannot happen without a transaction. But nowhere am I closing this transaction, and there is no way to close a transaction apart from commit or rollback (as far as I know), which are not applicable to select statements.

      public T findById(ID id, boolean lock) throws HibernateException, DAOException {
          log.debug("findById invoked with ID = " + id + " and lock = " + lock);
          T entity;
          getSession().beginTransaction();
          if (lock)
              entity = (T) getSession().load(getPersistentClass(), id, LockMode.UPGRADE);
          else
              entity = (T) getSession().load(getPersistentClass(), id);
          return entity;
      }

    Can anyone please suggest what I can do? I have tried almost every solution available via googling, on Stack Overflow, or on the Hibernate forums, to no avail. (And increasing wait_timeout on MySQL is not a valid option in my case.)
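
    One mismatch stands out in the configuration above: idleConnectionTestPeriod is 36000 seconds (ten hours), while MySQL's default wait_timeout is 28800 seconds (eight hours), so idle pooled connections are killed by the server before they are ever tested, and the first morning request hits a dead socket (the "broken pipe"). A hedged sketch of settings that test connections well inside the timeout; the property names follow the hibernate.c3p0.* pass-through convention and should be verified against your c3p0 version:

      <!-- test idle connections every 5 minutes, far inside wait_timeout -->
      <property name="hibernate.c3p0.idle_test_period">300</property>
      <!-- also validate each connection as it is handed to Hibernate -->
      <property name="hibernate.c3p0.testConnectionOnCheckout">true</property>
      <property name="hibernate.c3p0.preferredTestQuery">SELECT 1</property>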


  • CodeIgniter: Using URIs with forms

    - by Kevin Brown
    I'm using URIs to direct a function in a library:

      $id = $this->CI->session->userdata('id');
      $URI = $this->CI->uri->uri_string();
      $new = "new";
      if (strpos($URI, $new) === FALSE) {
          $method = "update";
      } elseif (strpos($URI, $new) !== FALSE) {
          $method = "create";
      }

    So I have two if statements directing what information to load:

      if ($method === 'update') {
          // Modify form, first load
          $this->CI->db->from('be_survey');
          $this->CI->db->where('user_id', $id);
          $survey = $this->CI->db->get();
          $user = array_merge($user->row_array(), $survey->row_array());
          $this->CI->validation->set_default_value($user);

          // Display page
          $data['user'] = $user;
      }

      $this->CI->validation->set_rules($rules);
      if ($this->CI->validation->run() === FALSE) {
          // Output any errors
          $this->CI->validation->output_errors();
      } else {
          // Submit form
          $this->_submit($method);
      }

    Submit function:

      function _submit($method) {
          // Submit and update for the current user
          $id = $this->CI->session->userdata('id');

          $this->CI->db->select('users.id, users.username, users.email, profiles.firstname, profiles.manager_id');
          $this->CI->db->from('be_users' . " users");
          $this->CI->db->join('be_user_profiles' . " profiles", 'users.id=profiles.user_id');
          $this->CI->db->having('id', $id);
          $email_data['user'] = $this->CI->db->get();
          $email_data['user'] = $email_data['user']->row();
          $manager_id = $email_data['user']->manager_id;

          $this->CI->db->select('firstname', 'email')->from('be_user_profiles')->where('user_id', $manager_id);
          $email_data['manager'] = $this->CI->db->get();
          $email_data['manager'] = $email_data['manager']->row();

          // Fetch what they entered in the form
          for ($i = 1; $i < 18; $i++) {
              $survey["a_" . $i] = $this->CI->input->post('a_' . $i);
          }
          for ($i = 1; $i < 15; $i++) {
              $survey["b_" . $i] = $this->CI->input->post('b_' . $i);
          }
          for ($i = 1; $i < 12; $i++) {
              $survey["c_" . $i] = $this->CI->input->post('c_' . $i);
          }

          $profile['firstname'] = $this->CI->input->post('firstname');
          $profile['lastname'] = $this->CI->input->post('lastname');
          $profile['test_date'] = date("Y-m-d H:i:s");
          $profile['company_name'] = $this->CI->input->post('company_name');
          $profile['company_address'] = $this->CI->input->post('company_address');
          $profile['company_city'] = $this->CI->input->post('company_city');
          $profile['company_phone'] = $this->CI->input->post('company_phone');
          $profile['company_state'] = $this->CI->input->post('company_state');
          $profile['company_zip'] = $this->CI->input->post('company_zip');
          $profile['job_title'] = $this->CI->input->post('job_title');
          $profile['job_type'] = $this->CI->input->post('job_type');
          $profile['job_time'] = $this->CI->input->post('job_time');
          $profile['department'] = $this->CI->input->post('department');
          $profile['vision'] = $this->CI->input->post('vision');
          $profile['height'] = $this->CI->input->post('height');
          $profile['weight'] = $this->CI->input->post('weight');
          $profile['hand_dominance'] = $this->CI->input->post('hand_dominance');
          $profile['areas_of_fatigue'] = $this->CI->input->post('areas_of_fatigue');
          $profile['job_description'] = $this->CI->input->post('job_description');
          $profile['injury_review'] = $this->CI->input->post('injury_review');
          $profile['job_positive'] = $this->CI->input->post('job_positive');
          $profile['risk_factors'] = $this->CI->input->post('risk_factors');
          $profile['job_improvement_short'] = $this->CI->input->post('job_improvement_short');
          $profile['job_improvement_long'] = $this->CI->input->post('job_improvement_long');

          if ($method == "update") {
              // Begin db transmission
              $this->CI->db->trans_begin();
              $this->CI->home_model->update('Survey', $survey, array('user_id' => $id));
              $this->CI->db->update('be_user_profiles', $profile, array('user_id' => $id));

              if ($this->CI->db->trans_status() === FALSE) {
                  flashMsg('error', 'There was a problem entering your test! Please contact an administrator.');
                  redirect('survey', 'location');
              } else {
                  // Get credits of user and subtract 1
                  $this->CI->db->set('credits', 'credits -1', FALSE);
                  $this->CI->db->update('be_user_profiles', $profile, array('user_id' => $manager_id));

                  // Mark the form completed.
                  $this->CI->db->set('test_complete', '1');
                  $this->CI->db->where('user_id', $id)->update('be_user_profiles');

                  // Stuff worked...
                  $this->CI->db->trans_commit();

                  // Get manager information
                  $this->CI->db->select('users.id, users.username, users.email, profiles.firstname');
                  $this->CI->db->from('be_users' . " users");
                  $this->CI->db->join('be_user_profiles' . " profiles", 'users.id=profiles.user_id');
                  $this->CI->db->having('id', $email_data['user']->manager_id);
                  $email_data['manager'] = $this->CI->db->get();
                  $email_data['manager'] = $email_data['manager']->row();

                  // Email user
                  $this->CI->load->library('User_email');
                  $data_user = array(
                      'firstname' => $email_data['user']->firstname,
                      'email' => $email_data['user']->email,
                      'user_completed' => $email_data['user']->firstname,
                      'site_name' => $this->CI->preference->item('site_name'),
                      'site_url' => base_url()
                  );

                  // Email manager
                  $data_manager = array(
                      'firstname' => $email_data['manager']->firstname,
                      'email' => $email_data['manager']->email,
                      'user_completed' => $email_data['user']->firstname,
                      'site_name' => $this->CI->preference->item('site_name'),
                      'site_url' => base_url()
                  );

                  $this->CI->user_email->send($email_data['manager']->email, 'Completed the Assessment Tool', 'public/email_manager_complete', $data_manager);
                  $this->CI->user_email->send($email_data['user']->email, 'Completed the Assessment Tool', 'public/email_user_complete', $data_user);

                  flashMsg('success', 'You finished the assessment successfully!');
                  redirect('home', 'location');
              }
          }
          // Create new user
          elseif ($method == "create") {
              // Build
              $profile['user_id'] = $id;
              $profile['manager_id'] = $manager_id;
              $profile['test_complete'] = '1';
              $survey['user_id'] = $id;

              $this->CI->db->trans_begin();

              // Add user_profile details to DB
              $this->CI->db->insert('be_user_profiles', $profile);
              $this->CI->db->insert('be_survey', $survey);

              if ($this->CI->db->trans_status() === FALSE) {
                  // Registration failed
                  $this->CI->db->trans_rollback();
                  flashMsg('error', $this->CI->lang->line('userlib_registration_failed'));
                  redirect('auth/register', 'location');
              } else {
                  // User registered
                  $this->CI->db->trans_commit();
                  flashMsg('success', $this->CI->lang->line('userlib_registration_success'));
                  redirect($this->CI->config->item('userlib_action_register'), 'location');
              }
          }
      }

    The submit function updates the db when $method == "update" and inserts when $method == "create". The problem is that when the form is submitted, it posts to the function "survey", which passes data to the library function; the submitted URL never contains "new", so things are always updated, never created. How can I pass $method to the _submit() function correctly?
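
    Because the POSTed request never goes through a URL containing "new", the URI check above always yields "update". One fix is to decide the mode when the form page is rendered and carry it through the form itself as a hidden field. A hedged sketch using CodeIgniter's form helper; the field and action names are illustrative:

      // in the view: embed the mode that was determined when the form was built
      echo form_open('survey/submit');
      echo form_hidden('method', $method); // "create" or "update"

      // in the library: trust the posted field instead of inspecting the URI
      $method = ($this->CI->input->post('method') === 'create') ? 'create' : 'update';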


  • Please Critique this PHP Login Script

    - by NightMICU
    Greetings, a site I developed was recently compromised, most likely by a brute-force or rainbow-table attack. The original log-in script did not have a salt, and passwords were stored as MD5. Below is an updated script, complete with salt and IP-address banning. In addition, it will send a mayday email and SMS and disable the account should the same IP address or account make four failed log-ins. Please look it over and let me know what could be improved, what is missing, and what is just plain strange. Many thanks!

      <?php
      // Start session
      session_start();

      // Include DB config
      include $_SERVER['DOCUMENT_ROOT'] . '/includes/pdo_conn.inc.php';

      // Error message array
      $errmsg_arr = array();
      $errflag = false;

      // Function to sanitize values received from the form. Prevents SQL injection
      function clean($str) {
          $str = @trim($str);
          if (get_magic_quotes_gpc()) {
              $str = stripslashes($str);
          }
          return $str;
      }

      // Define a SALT, the one here is for demo
      define('SALT', '63Yf5QNA');

      // Sanitize the POST values
      $login = clean($_POST['login']);
      $password = clean($_POST['password']);

      // Encrypt password
      $encryptedPassword = md5(SALT . $password);

      // Input validations
      // Obtain IP address and check for past failed attempts
      $ip_address = $_SERVER['REMOTE_ADDR'];
      $checkIPBan = $db->prepare("SELECT COUNT(*) FROM ip_ban WHERE ipAddr = ? OR login = ?");
      $checkIPBan->execute(array($ip_address, $login));
      $numAttempts = $checkIPBan->fetchColumn();

      // If there are 4 failed attempts, send back to login and temporarily ban IP address
      if ($numAttempts == 1) {
          $getTotalAttempts = $db->prepare("SELECT attempts FROM ip_ban WHERE ipAddr = ? OR login = ?");
          $getTotalAttempts->execute(array($ip_address, $login));
          $totalAttempts = $getTotalAttempts->fetch();
          $totalAttempts = $totalAttempts['attempts'];
          if ($totalAttempts >= 4) {
              // Send mayday SMS
              $to = "[email protected]";
              $subject = "Banned Account - $login";
              $mailheaders = 'From: [email protected]' . "\r\n";
              $mailheaders .= 'Reply-To: [email protected]' . "\r\n";
              $mailheaders .= 'MIME-Version: 1.0' . "\r\n";
              $mailheaders .= 'Content-type: text/html; charset=iso-8859-1' . "\r\n";
              $msg = "<p>IP Address - " . $ip_address . ", Username - " . $login . "</p>";
              mail($to, $subject, $msg, $mailheaders);

              $setAccountBan = $db->query("UPDATE ip_ban SET isBanned = 1 WHERE ipAddr = '$ip_address'");
              $setAccountBan->execute();
              $errmsg_arr[] = 'Too Many Login Attempts';
              $errflag = true;
          }
      }

      if ($login == '') {
          $errmsg_arr[] = 'Login ID missing';
          $errflag = true;
      }
      if ($password == '') {
          $errmsg_arr[] = 'Password missing';
          $errflag = true;
      }

      // If there are input validations, redirect back to the login form
      if ($errflag) {
          $_SESSION['ERRMSG_ARR'] = $errmsg_arr;
          session_write_close();
          header('Location: http://somewhere.com/login.php');
          exit();
      }

      // Query database
      $loginSQL = $db->prepare("SELECT password FROM user_control WHERE username = ?");
      $loginSQL->execute(array($login));
      $loginResult = $loginSQL->fetch();

      // Compare passwords
      if ($loginResult['password'] == $encryptedPassword) {
          // Login successful
          session_regenerate_id();

          // Collect details about user and assign session details
          $getMemDetails = $db->prepare("SELECT * FROM user_control WHERE username = ?");
          $getMemDetails->execute(array($login));
          $member = $getMemDetails->fetch();
          $_SESSION['SESS_MEMBER_ID'] = $member['user_id'];
          $_SESSION['SESS_USERNAME'] = $member['username'];
          $_SESSION['SESS_FIRST_NAME'] = $member['name_f'];
          $_SESSION['SESS_LAST_NAME'] = $member['name_l'];
          $_SESSION['SESS_STATUS'] = $member['status'];
          $_SESSION['SESS_LEVEL'] = $member['level'];

          // Get last login
          $_SESSION['SESS_LAST_LOGIN'] = $member['lastLogin'];

          // Set last login info
          $updateLog = $db->prepare("UPDATE user_control SET lastLogin = DATE_ADD(NOW(), INTERVAL 1 HOUR), ip_addr = ? WHERE user_id = ?");
          $updateLog->execute(array($ip_address, $member['user_id']));
          session_write_close();

          // If there are past failed log-in attempts, delete old entries
          if ($numAttempts > 0) {
              // Past failed log-ins from this IP address. Delete old entries
              $deleteIPBan = $db->prepare("DELETE FROM ip_ban WHERE ipAddr = ?");
              $deleteIPBan->execute(array($ip_address));
          }

          if ($member['level'] != "3" || $member['status'] == "Suspended") {
              header("location: http://somewhere.com");
          } else {
              header('Location: http://somewhere.com');
          }
          exit();
      } else {
          // Login failed. Add IP address and other details to ban table
          if ($numAttempts < 1) {
              // Add a new entry to IP ban table
              $addBanEntry = $db->prepare("INSERT INTO ip_ban (ipAddr, login, attempts) VALUES (?,?,?)");
              $addBanEntry->execute(array($ip_address, $login, 1));
          } else {
              // Increment attempts count
              $updateBanEntry = $db->prepare("UPDATE ip_ban SET ipAddr = ?, login = ?, attempts = attempts+1 WHERE ipAddr = ? OR login = ?");
              $updateBanEntry->execute(array($ip_address, $login, $ip_address, $login));
          }
          header('Location: http://somewhere.com/login.php');
          exit();
      }
      ?>


  • MediaWiki authentication replacement showing "Login Required" instead of signing user into wiki

    - by arcdegree
    I'm fairly to MediaWiki and needed a way to automatically log users in after they authenticated to a central server (which creates a session and cookie for applications to use). I wrote a custom authentication extension based off of the LDAP Authentication extension and a few others. The extension simply needs to read some session data to create or update a user and then log them in automatically. All the authentication is handled externally. A user would not be able to even access the wiki website without logging in externally. This extension was placed into production which replaced the old standard MediaWiki authentication system. I also merged user accounts to prepare for the change. By default, a user must be logged in to view, edit, or otherwise do anything in the wiki. My problem is that I found if a user had previously used the built-in MediaWiki authentication system and returned to the wiki, my extension would attempt to auto-login the user, however, they would see a "Login Required" page instead of the page they requested like they were an anonymous user. If the user then refreshed the page, they would be able to navigate, edit, etc. From what I can tell, this issue resolves itself after the UserID cookie is reset or created fresh (but has been known to strangely come up sometimes). To replicate, if there is an older User ID in the "USERID" cookie, the user is shown the "Login Required" page which is a poor user experience. Another way of showing this page is by removing the user account from the database and refreshing the wiki page. As a result, the user will again see the "Login Required" page. Does anyone know how I can use debugging to find out why MediaWiki thinks the user is not signed in when the cookies are set properly and all it takes is a page refresh? 
Here is my extension (simplified a little for this post):

<?php
$wgExtensionCredits['parserhook'][] = array (
    'name'   => 'MyExtension',
    'author' => '',
);

if (!class_exists('AuthPlugin')) {
    require_once('AuthPlugin.php');
}

class MyExtensionPlugin extends AuthPlugin {
    function userExists($username) {
        return true;
    }

    function authenticate($username, $password) {
        $id = $_SESSION['id'];
        // Note: this must be a comparison (==); the original had an assignment (=)
        if ($username == $id) {
            return true;
        } else {
            return false;
        }
    }

    function updateUser(&$user) {
        $name = $user->getName();
        $user->load();
        $user->mPassword = '';
        $user->mNewpassword = '';
        $user->mNewpassTime = null;
        $user->setRealName($_SESSION['name']);
        $user->setEmail($_SESSION['email']);
        $user->mEmailAuthenticated = wfTimestampNow();
        $user->saveSettings();
        return true;
    }

    function modifyUITemplate(&$template) {
        $template->set('useemail', false);
        $template->set('remember', false);
        $template->set('create', false);
        $template->set('domain', false);
        $template->set('usedomain', false);
    }

    function autoCreate() { return true; }

    function disallowPrefsEditByUser() {
        return array (
            'wpRealName'  => true,
            'wpUserEmail' => true,
            'wpNick'      => true
        );
    }

    function allowPasswordChange() { return false; }
    function setPassword($user, $password) { return false; }
    function strict() { return true; }
    function initUser(&$user) { }
    function updateExternalDB($user) { return false; }
    function canCreateAccounts() { return false; }
    function addUser($user, $password) { return false; }
    function getCanonicalName($username) { return $username; }
}

function SetupAuthMyExtension() {
    global $wgHooks;
    global $wgAuth;
    $wgHooks['UserLoadFromSession'][] = 'Auth_MyExtension_autologin_hook';
    $wgHooks['UserLogoutComplete'][]  = 'Auth_MyExtension_UserLogoutComplete';
    $wgHooks['PersonalUrls'][]        = 'Auth_MyExtension_personalURL_hook';
    $wgAuth = new MyExtensionPlugin();
}

function Auth_MyExtension_autologin_hook($user, &$return_user) {
    global $wgUser;
    global $wgAuth;
    global $wgContLang;

    wfSetupSession();

    // Give us a user; see if we're around
    $tmpuser = new User();
    $rc = $tmpuser->newFromSession();
    $rc = $tmpuser->load();
    if ($rc && $rc->isLoggedIn()) {
        if ($rc->authenticate($rc->getName(), '')) {
            return true;
        } else {
            $rc->logout();
        }
    }

    $id   = trim($_SESSION['id']);
    $name = ucfirst(trim($_SESSION['name']));
    if (empty($id)) {          // the original tested $dsid, an undefined variable
        $result = false;       // deny access
        return true;
    }

    $user = User::newFromName($id);   // the original passed $dsid here
    if (0 == $user->getID()) {
        // We have a new user to add...
        $user->setName($id);
        $user->addToDatabase();
        $user->setToken();
        $user->saveSettings();
        $ssUpdate = new SiteStatsUpdate(0, 0, 0, 0, 1);
        $ssUpdate->doUpdate();
    } else {
        $user->saveToCache();
    }

    // Update email, real name, etc.
    $wgAuth->updateUser($user);
    $result = true;

    // Go ahead and log 'em in
    $user->setToken();
    $user->saveSettings();
    $user->setupSession();
    $user->setCookies();
    return true;
}

function Auth_MyExtension_personalURL_hook(&$personal_urls, &$title) {
    global $wgUser;
    unset($personal_urls['mytalk']);
    unset($personal_urls['Userlogin']);
    $personal_urls['userpage']['text'] = $wgUser->getRealName();
    foreach (array('login', 'anonlogin') as $k) {
        if (array_key_exists($k, $personal_urls)) {
            unset($personal_urls[$k]);
        }
    }
    return true;
}

function Auth_MyExtension_UserLogoutComplete(&$user, &$inject_html, $old_name) {
    setcookie($GLOBALS['wgCookiePrefix'] . '_session', '', time() - 3600, $GLOBALS['wgCookiePath']);
    setcookie($GLOBALS['wgCookiePrefix'] . 'UserName', '', time() - 3600, $GLOBALS['wgCookiePath']);
    setcookie($GLOBALS['wgCookiePrefix'] . 'UserID',   '', time() - 3600, $GLOBALS['wgCookiePath']);
    setcookie($GLOBALS['wgCookiePrefix'] . 'Token',    '', time() - 3600, $GLOBALS['wgCookiePath']);
    return true;
}
?>

Here is part of my LocalSettings.php file:

#############################
# Disallow Anonymous Access
#############################
$wgGroupPermissions['*']['read'] = false;
$wgGroupPermissions['*']['edit'] = false;
$wgGroupPermissions['*']['createpage'] = false;
$wgGroupPermissions['*']['createtalk'] = false;
$wgGroupPermissions['*']['createaccount'] = false;
$wgShowIPinHeader = false; # For non-logged in users

#############################
# Extension: MyExtension
#############################
require_once("$IP/extensions/MyExtension.php");
$wgAutoLogin = true;
SetupAuthMyExtension();
$wgDisableCookieCheck = true;

    Read the article

  • Strategy for using snapshots to back up Ubuntu Linux server?

    - by MountainX
    I need some backup advice for my home file server. Here are the mount points, volume groups, logical volumes and used/total space of all the volumes on my Ubuntu 8.10 home file server:

/         vgA/lvRoot                   [7.5G/50G]
/tmp      vgB/lvTmp                    [195M/30G]
/var      vgB/lvVar                    [780M/30G]
swap      vgB/lvSwap                   [16.00 GB]
/media1   vgC/lvMedia1                 [400G/975G]
/media2   vgC/lvMedia2                 [75G/295G]
/boot     partition (no volume group)  [95M/200M]
/video    partition (no volume group)  [450G/950G]
/backups  vgD/lvBackupTarget           [800G/925G]
/home     vgE/lvHome                   [85G/200G]

I have just added a 2.0 TB external USB drive that I would like to use to back up everything. (It will be a close fit to get it all on one 2.0 TB drive. I actually have a 2nd external USB drive if needed.) I'd like to back up "/", /var, /media1, /media2 and /home. I'll deal with /boot and /video separately since they are not logical volumes. For all the logical volumes I'm anticipating taking snapshots and then copying those snapshots to the 2.0 TB external USB drive. I have never done a task like that before. If I do that, I could use the tutorial I found here: http://www.howtoforge.com/linux_lvm_snapshots

My questions are:
1. What is the best overall strategy? Is it LVM snapshots, as I'm assuming?
2. How should I prepare, subdivide and mount the 2.0 TB external USB drive?
   2.a. Should I create one or more regular partitions, or should I create a physical volume with one or more logical volumes?
   2.b. Would it be advisable to exactly mirror the source pv/lv layout on the external drive, and if so, is this a good strategy?
3. What's the best way to get the snapshots onto the external drive? dd?

Even though this is a strategy question, feedback with actual commands is appreciated. I need step-by-step cookbook-style help because I don't do much server admin work. (Background: This is a home file server that I have rarely had to touch in about 2 years. It has done its job without much intervention. The really old PC that I used to back everything up recently failed, so I'm replacing that with the external USB drive(s) and I'd like to upgrade my backup strategy at the same time. Previously, I just copied stuff from /backups over to the other computer, and that would not have made things very easy in a real restore situation. The /backups mount point contains backup copies of "most" of the important data on a file-by-file basis, but it does not contain copies of /boot, etc. BTW, the actual internal HDD that holds /backups is separate from the other storage devices.)

EDIT: I'll propose a strategy... The idea came from a comment here: LVM mirroring VS RAID1. "LVM mirrors are for replication of a logical volume to a different physical volume. It's essentially meant to 'move the data to a different disk'. The mirror is then broken..." That would fit my requirements well. Here is an ideal situation:
1. establish the LV mirror on the external drive
2. break the link with the mirror
3. create a (persistent) snapshot on the mirror
4. after a week, resync the mirror with the original source and update the mirror
5. break the link and create another snapshot on the mirror

Obviously, the mirror will act like a weekly full backup, and the snapshots on the mirror will represent earlier points in time. If this would work and if it would be time efficient, it would give a nice full & differential type backup on the external drive based on LVM. I have not heard of a strategy like this before. Will it work? Could it be scripted? Thoughts?
EDIT 2: Creating Portable DiskSafes With LoopbackFS And LVM Snapshots This article seems intriguing: http://www.howtoforge.com/creating-portable-disksafes-with-loopbackfs-and-lvm-snapshots Unfortunately, I don't understand exactly how to map those ideas to the strategy I'm proposing above. I'm going to ask this last bit as a separate question. I will leave my original question in place because I still desire feedback on the overall best strategy. At this moment I'm assuming it is LVM mirroring in the style of "Creating Portable DiskSafes with LVM Snapshots" but that might be wrong.

    Read the article

  • Active Directory: trouble adding new DC

    - by ethrbunny
    I have a domain with 3 DCs. One is starting to fail so I brought up a new one. All are running Win 2003. Problem: there appear to be replication issues between the 4 machines, but I can't figure out what's causing this. All are registered with the DNS as identically as I can make them. How do I know there is a problem? Nagios is telling me that the other 3 DCs are having KCCEvent errors and the new machine is reporting "failed connectivity" errors. Doing dcdiag on the new machine reports: "the host could not be resolved to an IP address". This seems crazy, as I log into it using the DNS name. I can ping it from the other three machines using this DNS name as well. repadmin /showreps from the new machine says it's seeing the other 3 machines. Doing the same from one of the older machines doesn't show the new machine. I've tried netdiag /repair numerous times. No luck. There are no firewalls running on any of the machines. If I look at domain info via MMC (on the new machine) it appears that all the information is current: users, computers, DCs... it's all there. I'm puzzled as to what step(s) I've missed in adding this new machine. Suggestions?

EDIT: dcdiag from the non-working machine:

C:\Documents and Settings\Administrator.BME>dcdiag

Domain Controller Diagnosis

Performing initial setup:
   Done gathering initial info.

Doing initial required tests
   Testing server: Default-First-Site-Name\YELLOW
      Starting test: Connectivity
         The host 312ce6ea-7909-4e15-aff6-45c3d1d9a0d9._msdcs.server.edu could not
         be resolved to an IP address. Check the DNS server, DHCP, server name, etc.
         Although the Guid DNS name (312ce6ea-7909-4e15-aff6-45c3d1d9a0d9._msdcs.server.edu)
         couldn't be resolved, the server name (yellow.server.edu) resolved to the
         IP address (10.127.24.79) and was pingable. Check that the IP address is
         registered correctly with the DNS server.
         ......................... YELLOW failed test Connectivity

Doing primary tests
   Testing server: Default-First-Site-Name\YELLOW
      Skipping all tests, because server YELLOW is not responding to directory service requests

   Running partition tests on : Schema
      Starting test: CrossRefValidation
         ......................... Schema passed test CrossRefValidation
      Starting test: CheckSDRefDom
         ......................... Schema passed test CheckSDRefDom

   Running partition tests on : Configuration
      Starting test: CrossRefValidation
         ......................... Configuration passed test CrossRefValidation
      Starting test: CheckSDRefDom
         ......................... Configuration passed test CheckSDRefDom

   Running partition tests on : bme
      Starting test: CrossRefValidation
         ......................... bme passed test CrossRefValidation
      Starting test: CheckSDRefDom
         ......................... bme passed test CheckSDRefDom

   Running enterprise tests on : server.edu
      Starting test: Intersite
         ......................... server.edu passed test Intersite
      Starting test: FsmoCheck
         ......................... server.edu passed test FsmoCheck

dcdiag from a working machine:

P:\>dcdiag

Domain Controller Diagnosis

Performing initial setup:
   Done gathering initial info.

Doing initial required tests
   Testing server: Default-First-Site-Name\AD1
      Starting test: Connectivity
         ......................... AD1 passed test Connectivity

Doing primary tests
   Testing server: Default-First-Site-Name\AD1
      Starting test: Replications
         ......................... AD1 passed test Replications
      Starting test: NCSecDesc
         ......................... AD1 passed test NCSecDesc
      Starting test: NetLogons
         ......................... AD1 passed test NetLogons
      Starting test: Advertising
         ......................... AD1 passed test Advertising
      Starting test: KnowsOfRoleHolders
         ......................... AD1 passed test KnowsOfRoleHolders
      Starting test: RidManager
         ......................... AD1 passed test RidManager
      Starting test: MachineAccount
         ......................... AD1 passed test MachineAccount
      Starting test: Services
         ......................... AD1 passed test Services
      Starting test: ObjectsReplicated
         ......................... AD1 passed test ObjectsReplicated
      Starting test: frssysvol
         ......................... AD1 passed test frssysvol
      Starting test: frsevent
         ......................... AD1 passed test frsevent
      Starting test: kccevent
         ......................... AD1 passed test kccevent
      Starting test: systemlog
         ......................... AD1 passed test systemlog
      Starting test: VerifyReferences
         ......................... AD1 passed test VerifyReferences

   Running partition tests on : Schema
      Starting test: CrossRefValidation
         ......................... Schema passed test CrossRefValidation
      Starting test: CheckSDRefDom
         ......................... Schema passed test CheckSDRefDom

   Running partition tests on : Configuration
      Starting test: CrossRefValidation
         ......................... Configuration passed test CrossRefValidation
      Starting test: CheckSDRefDom
         ......................... Configuration passed test CheckSDRefDom

   Running partition tests on : bme
      Starting test: CrossRefValidation
         ......................... bme passed test CrossRefValidation
      Starting test: CheckSDRefDom
         ......................... bme passed test CheckSDRefDom

   Running enterprise tests on : server.edu
      Starting test: Intersite
         ......................... server.edu passed test Intersite
      Starting test: FsmoCheck
         ......................... server.edu passed test FsmoCheck

P:\>

    Read the article

  • Capturing and Transforming ASP.NET Output with Response.Filter

    - by Rick Strahl
    During one of my Handlers and Modules sessions at DevConnections this week one of the attendees asked a question that I didn't have an immediate answer for. Basically he wanted to capture response output completely and then apply some filtering to the output - effectively injecting some additional content into the page AFTER the page had completely rendered. Specifically the output should be captured from anywhere - not just a page - and have this code injected into the page. Some time ago I posted some code that allows you to capture ASP.NET Page output by overriding the Render() method, capturing the HtmlTextWriter() and reading its content, modifying the rendered data as text and then writing it back out. I've actually used this approach on a few occasions and it works fine for ASP.NET pages. But this obviously won't work outside of the Page class environment and it's not really generic - you have to create a custom page class in order to handle the output capture. [updated 11/16/2009 - updated ResponseFilterStream implementation and a few additional notes based on comments]

Enter Response.Filter
However, ASP.NET includes a Response.Filter which can be used - well, to filter output. Basically Response.Filter is a stream through which the OutputStream is piped back to the Web Server (indirectly). As content is written into the Response object, the filter stream receives the appropriate Stream commands like Write, Flush and Close, as well as read operations, although for a Response.Filter those are uncommon to be hit. The Response.Filter can be programmatically replaced at runtime, which allows you to effectively intercept all output generation that runs through ASP.NET.

A common Example: Dynamic GZip Encoding
A rather common use of Response.Filter is hooking up code-based, dynamic GZip compression for requests, which is dead simple by applying a GZipStream (or DeflateStream) to Response.Filter. The following generic routines can be used very easily to detect GZip capability of the client and compress response output with a single line of code and a couple of library helper routines:

WebUtils.GZipEncodePage();

which is handled with a few lines of reusable code and a couple of static helper methods:

/// <summary>
/// Sets up the current page or handler to use GZip through a Response.Filter.
/// IMPORTANT: You have to call this method before any output is generated!
/// </summary>
public static void GZipEncodePage()
{
    HttpResponse Response = HttpContext.Current.Response;

    if (IsGZipSupported())
    {
        string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];

        if (AcceptEncoding.Contains("deflate"))
        {
            Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter,
                                        System.IO.Compression.CompressionMode.Compress);
            Response.AppendHeader("Content-Encoding", "deflate");
        }
        else
        {
            Response.Filter = new System.IO.Compression.GZipStream(Response.Filter,
                                       System.IO.Compression.CompressionMode.Compress);
            Response.AppendHeader("Content-Encoding", "gzip");
        }
    }

    // Allow proxy servers to cache encoded and unencoded versions separately
    Response.AppendHeader("Vary", "Content-Encoding");
}

/// <summary>
/// Determines if GZip is supported
/// </summary>
public static bool IsGZipSupported()
{
    string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"];
    if (!string.IsNullOrEmpty(AcceptEncoding) &&
        (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate")))
        return true;
    return false;
}

GZipStream and DeflateStream are streams that are assigned to Response.Filter and by doing so apply the appropriate compression on the active Response.

Response.Filter content is chunked
So implementing a Response.Filter effectively requires only that you implement a custom stream and handle the Write() method to capture Response output as it's written. At first blush this seems very simple - you capture the output in Write, transform it and write out the transformed content in one pass. And that indeed works for small amounts of content. But the problem is that output is written in small buffer chunks (a little less than 16k it appears) rather than in a single Write() statement into the stream, which makes perfect sense for ASP.NET: streaming data back to IIS in smaller chunks minimizes memory usage en route. Unfortunately this also makes it more difficult to implement any filtering routines, since you don't directly get access to all of the response content - which is problematic especially if those filtering routines require you to look at the ENTIRE response in order to transform or capture the output, as is needed for the solution the gentleman in my session asked for.

So in order to address this a slightly different approach is required: one that captures all the Write() buffers into a cached stream and then makes the stream available only when it's complete and ready to be flushed. As I was thinking about the implementation I also started thinking about the few instances when I've used Response.Filter implementations. Each time I had to create a new Stream subclass and build my custom functionality, but in the end each implementation did the same thing - capturing output and transforming it. I thought there should be an easier way to do this by creating a re-usable Stream class that can handle stream transformations that are common to Response.Filter implementations.

Creating a semi-generic Response Filter Stream Class
What I ended up with is a ResponseFilterStream class that provides a handful of Events that allow you to capture and/or transform Response content.
The class implements a subclass of Stream and then overrides Write() and Flush() to handle capturing and transformation operations. By exposing events it's easy to hook up capture or transformation operations via single focused methods. ResponseFilterStream exposes the following events:

CaptureStream, CaptureString
Capture the output only and provide either a MemoryStream or String with the final page output. Capture is hooked to the Flush() operation of the stream.

TransformStream, TransformString
Allow you to transform the complete response output with events that receive a MemoryStream or String respectively; you can modify the output and return it back as a return value. The transformed output is then written back out in a single chunk to the response output stream. These events capture all output internally first, then write the entire buffer into the response.

TransformWrite, TransformWriteString
Allow you to transform the Response data as it is written, in its original chunk size, in the Stream's Write() method. Unlike TransformStream/TransformString, which operate on the complete output, these events only see the current chunk of data written. This is more efficient as there's no caching involved, but can cause problems due to searched content splitting over multiple chunks.

Using this implementation, creating a custom Response.Filter transformation becomes as simple as the following code. To hook up the Response.Filter using the MemoryStream version event:

ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
filter.TransformStream += filter_TransformStream;
Response.Filter = filter;

and the event handler to do the transformation:

MemoryStream filter_TransformStream(MemoryStream ms)
{
    Encoding encoding = HttpContext.Current.Response.ContentEncoding;
    string output = encoding.GetString(ms.ToArray());

    output = FixPaths(output);

    ms = new MemoryStream(output.Length);
    byte[] buffer = encoding.GetBytes(output);
    ms.Write(buffer, 0, buffer.Length);

    return ms;
}

private string FixPaths(string output)
{
    string path = HttpContext.Current.Request.ApplicationPath;

    // override root path wonkiness
    if (path == "/")
        path = "";

    output = output.Replace("\"~/", "\"" + path + "/")
                   .Replace("'~/", "'" + path + "/");

    return output;
}

The idea of the event handler is that you can do whatever you want to the stream and return back a stream - either the same one that's been modified or a brand new one - which is then sent back as the final response. The above code can be simplified even more by using the string version events, which handle the stream-to-string conversions for you:

ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
filter.TransformString += filter_TransformString;
Response.Filter = filter;

and the event handler to do the transformation, calling the same FixPaths method shown above:

string filter_TransformString(string output)
{
    return FixPaths(output);
}

The events for capturing output and capturing and transforming chunks work in a very similar way. By using events to handle the transformations, ResponseFilterStream becomes a reusable component and we don't have to create a new stream class or subclass an existing Stream-based class.
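To round out the event list, here's a minimal sketch (my own, not from the article) of the capture-only path: it logs the final page output without modifying it. LogOutput is a hypothetical placeholder for whatever logging you use:

public partial class SomePage : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // Hook up capture only. Since no Transform* events are attached,
        // output is not delayed - chunks still stream straight through to IIS.
        ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
        filter.CaptureString += filter_CaptureString;
        Response.Filter = filter;
    }

    void filter_CaptureString(string output)
    {
        // Fires once from Flush() with the complete rendered response text
        LogOutput(output);  // hypothetical logging helper
    }
}

Because only CaptureString is hooked up, the stream's IsOutputDelayed property stays false and the capture adds no extra write at the end - it simply observes the cached output.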
By the way, the example used here is kind of a cool trick: it transforms "~/" expressions inside of the final generated HTML output - even in plain HTML markup, not just server controls - into the appropriate application-relative path, in the same way that ResolveUrl would do. So you can write plain old HTML like this:

<a href="~/default.aspx">Home</a>

and have it turned into:

<a href="/myVirtual/default.aspx">Home</a>

without having to use an ASP.NET control like Hyperlink or Image, or having to constantly write:

<img src="<%= ResolveUrl("~/images/home.gif") %>" />

in MVC applications (which frankly is one of the most annoying things about MVC, especially given the path hell that extension-less and endpoint-less URLs impose). I can't take credit for this idea: while discussing the Response.Filter issues on Twitter I got a hint from Dylan Beattie, who pointed me at one of his examples which does something similar. I thought the idea was cool enough to use as an example for future demos of Response.Filter functionality in ASP.NET next time I do the Modules and Handlers talk (which was great fun BTW). How practical this is is debatable however, since there's definitely some overhead to using a Response.Filter in general, and especially to one that caches the output and then re-writes it later. Make sure to test for performance anytime you use a Response.Filter hookup, and make sure it doesn't end up killing perf on you. You've been warned :-}.

How does ResponseFilterStream work?
The big win of this implementation IMHO is that it's a reusable component - for implementation there's no new class, no subclassing; you simply attach to an event and implement an event handler method with a straightforward signature to retrieve the stream or string you're interested in. The implementation is based on a subclass of Stream, as is required in order to handle the Response.Filter requirements. What's different from other implementations I've seen in various places is that it supports capturing output as a whole, to allow retrieving the full response output for capture or modification. The exceptions are the TransformWrite and TransformWriteString events, which operate only on the active chunk of data written by the Response.

For captured output, the Write() method captures output into an internal MemoryStream that is cached until writing is complete. So Write() is called when ASP.NET writes to the Response stream, but the filter doesn't immediately pass the Write on to the underlying stream. The data is cached, and only when the Flush() method is called to finalize the Stream's output do we actually send the cached stream off for transformation (if the events are hooked up) and THEN finally write out the returned content in one big chunk.

Here's the implementation of ResponseFilterStream:
/// <summary>
/// A semi-generic Stream implementation for Response.Filter with
/// an event interface for handling Content transformations via
/// Stream or String.
/// <remarks>
/// Use with care for large output as this implementation copies
/// the output into a memory stream and so increases memory usage.
/// </remarks>
/// </summary>
public class ResponseFilterStream : Stream
{
    /// <summary>
    /// The original stream
    /// </summary>
    Stream _stream;

    /// <summary>
    /// Current position in the original stream
    /// </summary>
    long _position;

    /// <summary>
    /// Stream that original content is read into
    /// and then passed to the TransformStream function
    /// </summary>
    MemoryStream _cacheStream = new MemoryStream(5000);

    /// <summary>
    /// Internal pointer that keeps track of the size of the cacheStream
    /// </summary>
    int _cachePointer = 0;

    public ResponseFilterStream(Stream responseStream)
    {
        _stream = responseStream;
    }

    /// <summary>
    /// Determines whether the stream is captured
    /// </summary>
    private bool IsCaptured
    {
        get
        {
            if (CaptureStream != null || CaptureString != null ||
                TransformStream != null || TransformString != null)
                return true;
            return false;
        }
    }

    /// <summary>
    /// Determines whether the Write method is outputting data immediately
    /// or delaying output until Flush() is fired.
    /// </summary>
    private bool IsOutputDelayed
    {
        get
        {
            if (TransformStream != null || TransformString != null)
                return true;
            return false;
        }
    }

    /// <summary>
    /// Event that captures Response output and makes it available
    /// as a MemoryStream instance. Output is captured but won't
    /// affect Response output.
    /// </summary>
    public event Action<MemoryStream> CaptureStream;

    /// <summary>
    /// Event that captures Response output and makes it available
    /// as a string. Output is captured but won't affect Response output.
    /// </summary>
    public event Action<string> CaptureString;

    /// <summary>
    /// Event that allows you to transform the stream as each chunk of
    /// the output is written in the Write() operation of the stream.
    /// This means it's possible/likely that the input buffer will not
    /// contain the full response output but only one of potentially
    /// many chunks. This event is called as part of the filter stream's
    /// Write() operation.
    /// </summary>
    public event Func<byte[], byte[]> TransformWrite;

    /// <summary>
    /// Event that allows you to transform the response stream as each
    /// chunk of string output is written during the stream's Write
    /// operation. This means it's possible/likely that the string
    /// passed to the handler only contains a portion of the full
    /// output. Typical buffer chunks are around 16k a piece.
    /// </summary>
    public event Func<string, string> TransformWriteString;

    /// <summary>
    /// This event allows capturing and transformation of the entire
    /// output stream by caching all write operations and delaying final
    /// response output until Flush() is called on the stream.
    /// </summary>
    public event Func<MemoryStream, MemoryStream> TransformStream;

    /// <summary>
    /// Event that can be hooked up to handle Response.Filter
    /// transformation. Passed a string that you can modify and
    /// return back as a return value. The modified content
    /// will become the final output.
    /// </summary>
    public event Func<string, string> TransformString;

    protected virtual void OnCaptureStream(MemoryStream ms)
    {
        if (CaptureStream != null)
            CaptureStream(ms);
    }

    private void OnCaptureStringInternal(MemoryStream ms)
    {
        if (CaptureString != null)
        {
            string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray());
            OnCaptureString(content);
        }
    }

    protected virtual void OnCaptureString(string output)
    {
        if (CaptureString != null)
            CaptureString(output);
    }

    protected virtual byte[] OnTransformWrite(byte[] buffer)
    {
        if (TransformWrite != null)
            return TransformWrite(buffer);
        return buffer;
    }

    private byte[] OnTransformWriteStringInternal(byte[] buffer)
    {
        Encoding encoding = HttpContext.Current.Response.ContentEncoding;
        string output = OnTransformWriteString(encoding.GetString(buffer));
        return encoding.GetBytes(output);
    }

    private string OnTransformWriteString(string value)
    {
        if (TransformWriteString != null)
            return TransformWriteString(value);
        return value;
    }

    protected virtual MemoryStream OnTransformCompleteStream(MemoryStream ms)
    {
        if (TransformStream != null)
            return TransformStream(ms);
        return ms;
    }

    /// <summary>
    /// Allows transforming of strings. Note this handler is internal
    /// and not meant to be overridden, as the TransformString event has
    /// to be hooked up in order for this handler to even fire, to avoid
    /// the overhead of string conversion on every pass through.
    /// </summary>
    private string OnTransformCompleteString(string responseText)
    {
        if (TransformString != null)
            responseText = TransformString(responseText);  // assign the result (the original post dropped the assignment)
        return responseText;
    }

    /// <summary>
    /// Wrapper method for OnTransformString that handles
    /// stream-to-string and vice versa conversions
    /// </summary>
    internal MemoryStream OnTransformCompleteStringInternal(MemoryStream ms)
    {
        if (TransformString == null)
            return ms;

        string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray());
        content = TransformString(content);
        byte[] buffer = HttpContext.Current.Response.ContentEncoding.GetBytes(content);

        ms = new MemoryStream();
        ms.Write(buffer, 0, buffer.Length);

        return ms;
    }

    public override bool CanRead  { get { return true; } }
    public override bool CanSeek  { get { return true; } }
    public override bool CanWrite { get { return true; } }

    public override long Length { get { return 0; } }

    public override long Position
    {
        get { return _position; }
        set { _position = value; }
    }

    public override long Seek(long offset, System.IO.SeekOrigin direction)
    {
        return _stream.Seek(offset, direction);
    }

    public override void SetLength(long length)
    {
        _stream.SetLength(length);
    }

    public override void Close()
    {
        _stream.Close();
    }

    /// <summary>
    /// Override Flush by writing out the cached stream data
    /// </summary>
    public override void Flush()
    {
        if (IsCaptured && _cacheStream.Length > 0)
        {
            // Check for transform implementations
            _cacheStream = OnTransformCompleteStream(_cacheStream);
            _cacheStream = OnTransformCompleteStringInternal(_cacheStream);

            OnCaptureStream(_cacheStream);
            OnCaptureStringInternal(_cacheStream);

            // Write the stream back out if output was delayed
            if (IsOutputDelayed)
                _stream.Write(_cacheStream.ToArray(), 0, (int)_cacheStream.Length);

            // Clear the cache once we've written it out
            _cacheStream.SetLength(0);
        }

        // Default flush behavior
        _stream.Flush();
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        return _stream.Read(buffer, offset, count);
    }

    /// <summary>
    /// Overridden to capture output written by ASP.NET and cache it
    /// in a stream that is written out later when Flush() is called.
    /// </summary>
    public override void Write(byte[] buffer, int offset, int count)
    {
        if (IsCaptured)
        {
            // Copy to the holding buffer only - we'll write out later
            _cacheStream.Write(buffer, 0, count);
            _cachePointer += count;
        }

        // Just transform this chunk
        if (TransformWrite != null)
            buffer = OnTransformWrite(buffer);
        if (TransformWriteString != null)
            buffer = OnTransformWriteStringInternal(buffer);

        if (!IsOutputDelayed)
            _stream.Write(buffer, offset, buffer.Length);
    }
}

The key features are the events and corresponding OnXXX methods that handle the event hookups, and the Write() and Flush() methods of the stream implementation. All the rest of the members tend to be plain-jane passthrough stream implementation code without much consequence. I do love the way Action<T> and Func<T> make it so easy to create the event signatures for the various events - sweet.

A few Things to consider

Performance
Response.Filter is not great for performance in general as it adds another layer of indirection to the ASP.NET output pipeline, and this implementation in particular adds a memory hit as it basically duplicates the response output into the cached memory stream - which is necessary since you may have to look at the entire response. If you have large pages in particular, this can cause potentially serious memory pressure in your server application. So be careful of wholesale adoption of this (or other) Response.Filters. Make sure to do some performance testing to ensure it's not killing your app's performance.

Response.Filter works everywhere
A few questions came up in comments and discussion as to capturing ALL output hitting the site - and yes, you can definitely do that by assigning a Response.Filter inside of a module. If you do this however, you'll want to be very careful and decide which content you actually want to capture, especially in IIS 7, which passes ALL content - including static images/CSS etc. - through the ASP.NET pipeline. So it is important to filter only on what you're looking for, like the page extension or, maybe more effectively, the Response.ContentType.

Response.Filter Chaining
Originally I thought that filter chaining doesn't work at all due to a bug in the stream implementation code. But it's quite possible to assign multiple filters to the Response.Filter property.
So the following actually works to both compress the output and apply the transformed content:

WebUtils.GZipEncodePage();

ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
filter.TransformString += filter_TransformString;
Response.Filter = filter;

However, the following does not work, resulting in invalid content encoding errors:

ResponseFilterStream filter = new ResponseFilterStream(Response.Filter);
filter.TransformString += filter_TransformString;
Response.Filter = filter;

WebUtils.GZipEncodePage();

In other words, multiple Response filters can work together, but it depends entirely on the implementation whether they can be chained, or in which order they can be chained. In this case the GZip/Deflate stream filters apparently rely on the original content length of the output and choke when the content is modified. But if you attach the compression first it works fine, as unintuitive as that may seem.

Resources

Download example code
Capture Output from ASP.NET Pages

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in ASP.NET

    Read the article

  • CodePlex Daily Summary for Saturday, February 20, 2010

    CodePlex Daily Summary for Saturday, February 20, 2010

New Projects
BerkeliumDotNet: BerkeliumDotNet is a .NET wrapper for the Berkelium library written in C++/CLI. Berkelium provides off-screen browser rendering via Google's Chromi...
BoxBinary Descriptive WebCacheManager Framework: Allows you to take a simple, different, and more effective method of caching items in ASP.NET. The developer defines descriptive categories to deco...
CHS Extranet: CHS Extranet Project, working with the RM Community to build the new RM Easylink type application
DbModeller: Generates one class having properties with the basic C# types, aimed to serve as a business object, by using reflection from the passed objects insta...
Dice.NET: Dice.NET is a simple dice roller for those cases where you just kinda forgot your real dice. It's very simple in setup/use.
EBI App with the SQL Server CE and websync: EBI App with SQL Server CE and SQL Server DEV with Merge Replication (Web Synchronization). Here we are trying to develop an application which yo...
Family Tree Analyzer: This project will be a C# project which aims to allow users to import their GEDCOM files and produce various data analysis reports, such as a list o...
Go! Embedded Device Builder: Go! is a complete software engineering environment for the creation of embedded Linux devices. It enables you to engineer, design, develop, build, ...
HiddenWordsReadingPlan: HiddenWordsReadingPlan
Html to OpenXml: A library to convert simple or advanced HTML to a plain OpenXml document.
Jeffrey Palermo's shared source: This project contains multiple samples with various snippets and projects from blog posts, user group talks, and conference sessions.
Krypton Palette Selectors: A small C# control library that allows for simplified palette selection and management. It makes use of and relies on Component Factory's excellen...
OCInject: A DI container on a diet. This is a basic DI container that lives in your project, not an external assembly, with support for auto-generated delegat...
Photo Organiser: A small utility to sort photos into a new file structure based on date held in their XMP or EXIF tags (YYYY/MM/DD/hhmmss.jpg). Developed in C# / WPF.
QPAPrintLib: Print every document with its recommended program
Reusable Library: A collection of reusable abstractions for the enterprise application developer.
Runtime Intelligence Data Visualizer: WPF application used to visualize Runtime Intelligence data using the Data Query Service from the Runtime Intelligence Endpoint Starter Kit.
ScreenRec: ScreenRec is a program for recording your desktop and saving it to images, or saving a single picture.
Silverlight Internet Desktop Application Guidance: SLIDA (Silverlight Internet Desktop Applications) provides process guidance for developers creating Silverlight applications that run exclusively o...
WSUS Web Administration: Web application to remotely manage WSUS

New Releases
7zbackup - PowerShell Script to Backup Files with 7zip: 7zBackup v. 1.7.0 Stable: Bug solved: Test-Path-Writable fails on root of system drive on Windows 7. Therefore the function now accepts an optional parameter to specify if ...
aqq: sec 1.02: SEC Project - Comercial Economic System - in Visual FoxPro 9.0, open source, i.e., free and with full sources, GNU/GPL license; for more information ...
ASP.NET MVC Attribute Based Route Mapper: Attribute Based Routes v0.2: ASP.NET MVC Attribute Based Route Mapper
BoxBinary Descriptive WebCacheManager Framework: Initial release: Initial assembly release for anyone wanting the files referenced in my talk at Umbraco's 5th Birthday London meetup 16/Feb/2010. The code is fairly...
Build Version Increment Add-In Visual Studio: Build Version Increment v2.2 Beta: 2.2.10050.1548. Added support for custom increment schemes via plugins
Build Version Increment Add-In Visual Studio: BuildVersionIncrement v2.1: 2.1.10050.1458. Fix: Localization issues. Feature: Unmanaged C support. Feature: Multi-Monitor support. Feature: Global/Default settings. Fix: De...
CHS Extranet: Beta 2.3: Beta 2.3 Release Change Log: Fixed the "update my details" not updating the department/form. Tried to fix the issue when the ampersand (&) is in t...
Cover Creator: CoverCreator 1.2.2: Resolved bug: if there is only one CD entry in freedb.org, the application does nothing. Installation instructions: just unzip CoverCreator and run CoverCr...
Employee Scheduler: Employee Scheduler 2.3: Extract the files to a directory and run Lab Hours.exe. Add an employee. Double click an employee to modify their times. Please contact me through ...
EnOceanNet: EnOceanNet v1.11: Recompiled for .NET Framework 4 RC
Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts 3.0.3 beta 4 Released: Hi, this release contains fixes for the following bugs: DataBinding was not working as expected with RIA services. DataSeries visual wa...
Html to OpenXml: HtmlToOpenXml 0.1 Beta: This is a beta version for testing purposes.
Jeffrey Palermo's shared source: Web Forms front controller: This code goes along with my blog post about adding code that executes before your web form http://jeffreypalermo.com/blog/add-post-backs-to-mvc-nd...
Krypton Palette Selectors: Initial Release: The initial release. Contains only the KryptonPaletteDropButton control.
LaunchMeNot: LaunchMeNot 1.10: Lots has been added in this release. Feel free, however, to suggest features you'd like on the forums. Changes in LaunchMeNot 1.10 (19th February ...
Magellan: Magellan 1.1.36820.4796 Stable: This is a stable release. It contains a fix for a bug whereby the content of zones in a shared layout couldn't use element bindings (due to name sc...
Magellan: Magellan 1.1.36883.4800 Stable: This release includes a couple of major changes: A new Forms object model (see this page for details). Magellan objects are now part of the defau...
MAISGestão: LayerDiagram: LayerDiagram
Matrix3DEx: Matrix3DEx 1.0.2.0: Fixes the SwapHandedness method. This release includes factory methods for all common transformation matrices like rotation, scaling, translation, ...
MDownloader: MDownloader-0.15.1.55880: Fixed bugs.
NewLineReplacer: QPANewLineReplacer 1.1: Replace characters quickly and easily in large text files
OAuthLib: OAuthLib (1.5.0.1): The only difference from 1.5.0.0 is the version number.
OCInject: First Release: First Release
Photo Organiser: Installer alpha release: First release - contains known bugs, but works for the most part.
Pinger: Pinger-1.0.0.0 Source: The latest and first source code
Pinger: Pinger-1.0.0.2 Binary: Hi, this version can work on versions of Windows older than 7, but I haven't tested it! tnx
Pinger: Pinger-1.0.0.2 Source: Hi, it's the raw source!
Reusable Library: v1.0: A collection of reusable abstractions for the enterprise application developer.
ScreenRec: Version 1: One version of this program
Sense/Net Enterprise Portal & ECMS: SenseNet 6.0 Beta 5: Sense/Net 6.0 Beta 5 Release Notes. We are proud to finally present you Sense/Net 6.0 Beta 5, a major feature overhaul over beta43, and hopefully th...
Silverlight Internet Desktop Application Guidance: v1: Project templates for Silverlight IDA and Silverlight Navigation IDA.
SLAM! SharePoint List Association Manager: SLAM v1.3: The SharePoint List Association Manager is a platform for managing lists in SharePoint in a relational manner. The SLAM Hierarchy Extension works ...
SQL Server PowerShell Extensions: 2.0.2 Production: Release 2.0.1 re-implements SQLPSX as PowerShell version 2.0 modules. SQLPSX consists of 8 modules with 133 advanced functions, 2 cmdlets and 7 sc...
StoryQ: StoryQ 2.0.2 Library and Converter UI: Fixes: 16086. This release includes the following files: StoryQ.dll - the actual StoryQ library for you to reference; StoryQ.xml - the xmldoc for ...
Text to HTML: 0.4.0 beta: Version changes: Added Spanish, English and French languages. Added a configuration window. Dynamic loading of variab...
thor: Version 1.1: What's New in Version 1.1: Specify whether or not to display the subject of appointments on a calendar; specify whether or not to use a booking agen...
TweeVo: Tweet What Your TiVo Is Recording: TweeVo v1.0: TweeVo v1.0
VCC: Latest build, v2.1.30219.0: Automatic drop of latest build
VFPnfe: Manually generated XML file: Here is a file that generates the NF-e XML manually. I am working on version 1.1 of this project - please wait, and in the meantime download the other project...
Windows Double Explorer: WDE v0.3.7: -optimization -locked tabs can be reset to locked directory (single & multi) -folder drag drop to tabcontrol creates new tab -splash screen -direcl...
WPF ShaderEffect Generator: WPF ShaderEffect Generator 1.5: Visual Studio 2008 and RC 2010 are now supported. Different profiles can now be used to compile the shader with. Changes: Visual Studio RC 2010 is ...

Most Popular Projects
Rawr
WBFS Manager
AJAX Control Toolkit
Microsoft SQL Server Product Samples: Database
Silverlight Toolkit
Windows Presentation Foundation (WPF)
Image Resizer Powertoy Clone for Windows
ASP.NET
DotNetNuke® Community Edition
Microsoft SQL Server Community & Samples

Most Active Projects
DinnerNow.net
Rawr
Sharpy
BlogEngine.NET
SharePoint Contrib
jQuery Library for SharePoint Web Services
NB_Store - Free DotNetNuke Ecommerce Catalog Module
patterns & practices - Enterprise Library
PHPExcel
Fluent Ribbon Control Suite

    Read the article

  • Meet the new Windows Azure

    - by Leniel Macaferi
    Today we're releasing a major set of improvements to Windows Azure. Below is a short summary of just a few of these improvements:

New Administration Portal and Command-Line Tools
Today's release comes with a new Windows Azure portal that lets you manage all of the resources and services offered on Windows Azure in a seamlessly integrated way. The portal is very fast and fluid, supports filtering and sorting of data (which makes it very easy to use for large deployments), works in all browsers, and offers a lot of great new features - including built-in support for VMs (virtual machines), Web Sites, Storage, and monitoring of cloud-hosted Services. The new portal is built on top of a REST-based management API within Windows Azure - and everything you can do through the portal can also be done programmatically through this Web API. Today we're also releasing command-line tools (which, like the portal, call the REST Management APIs) to make it even easier to script and automate your administration tasks. We're offering a toolset download for PowerShell (Windows) and Bash (Mac and Linux). Like our SDKs, the source code of these tools is hosted on GitHub under an Apache 2 license.

Virtual Machines (VMs)
Windows Azure now supports the ability to deploy and run durable/persistent VMs in the cloud. You can easily create these VMs using a new Image Gallery built into the new Windows Azure Portal, or, alternatively, you can upload and run your own custom VHD images. Virtual machines are durable (meaning anything you install inside them persists across reboots) and you can run any operating system in them. Our built-in image gallery includes Windows Server images (including the new Windows Server 2012 RC) as well as Linux images (including Ubuntu, CentOS, and SUSE distributions). Once you create a VM instance, you can easily use Terminal Server or SSH to access it in order to configure and customize the virtual machine however you want (and, optionally, capture a snapshot of the current image to use when creating new VM instances). This gives you the flexibility to run practically any workload within the Windows Azure platform.

The new Windows Azure Portal provides a rich set of features for managing Virtual Machines - including the ability to monitor and control resource utilization within them. Our new Virtual Machine support also makes it easy to attach multiple disks to VMs (which you can then mount and format as drives). Optionally, you can enable geo-replication support for these disks - which will cause Windows Azure to continuously replicate your storage to a secondary data center (creating a backup) located at least 640 kilometers away from your primary data center.
We use the same VHD format that is supported with Windows virtualization today (which we released as an open specification), enabling you to easily migrate existing workloads you have already virtualized into Windows Azure. We also make it easy to download VHDs from Windows Azure, which gives you the flexibility to easily migrate cloud-based VM workloads to an on-premises environment. All you need to do is download the VHD file and boot it locally - no import/export steps are required.

Web Sites
Windows Azure now supports the ability to quickly and easily deploy ASP.NET, Node.js and PHP web sites to a highly scalable cloud environment that allows you to start small (and for free) and then scale your application as your traffic grows. You can create a new web site in Azure and have it ready for deployment in less than 10 seconds:

The new Windows Azure Portal offers integrated support for administering Web Sites, including the ability to monitor and track resource utilization in real time:

You can deploy to web sites in seconds using FTP, Git, TFS and Web Deploy. We're also releasing updates to the Visual Studio and Web Matrix tools that let developers easily deploy ASP.NET applications to this new offering. The VS and Web Matrix publishing support includes the ability to deploy SQL databases as part of the site deployment - as well as the ability to incrementally update the database schema with a later deployment. You can integrate web application publishing with source control by selecting the "Set up TFS publishing" or "Set up Git publishing" links on a web site's dashboard: doing so will enable integration with our new online TFS service (which enables a full TFS workflow - including elastic build and test support), or you can create a Git repository and reference it as a remote to perform automatic deployments. Once you perform a deployment using TFS or Git, the deployments tab will track the deployments you make and let you select an older (or newer) deployment so you can quickly roll your site back to an earlier state of your code. This provides a very powerful workflow experience.

Windows Azure now lets you deploy up to 10 web sites into a free, shared/multi-tenant hosting environment with shared databases (where a site you deploy will be one of several sites running on a shared set of server resources). This gives you an easy way to start developing projects at no cost.
You can optionally upgrade your sites to run in a "reserved mode" that isolates them, so that you are the only customer within a virtual machine:

And you can elastically scale the amount of resources your sites use - letting you, for example, increase your reserved instance capacity as your traffic grows:

Windows Azure automatically handles load balancing traffic across VM instances, and you get the same super-fast deployment options (FTP, Git, TFS and Web Deploy) regardless of how many reserved instances you use. With Windows Azure you pay for compute capacity by the hour - which lets you scale your resources up and down to match only what you need.

Cloud Services and Distributed Caching
Windows Azure also supports the ability to build cloud services that support rich multi-tier architectures and automated application management, and that can scale to extremely large deployments. Previously we referred to this capability as "hosted services" - with this week's release we are now renaming the capability to "cloud services". We are also enabling a bunch of new features with them.

Distributed Cache
One of the really cool new features being enabled with cloud services is a new distributed caching capability that lets you use and configure a low-latency, in-memory distributed cache within your applications. This cache is isolated for use only by your applications, and it has no throttling limits. The cache can dynamically and elastically grow and shrink (without you having to redeploy your application or make code changes), and it supports the full richness of the AppFabric Cache Server API (including regions, high availability, notifications, local cache, and much more). In addition to supporting the AppFabric Cache Server API, this new caching capability can now also speak the Memcached protocol - which lets you point code written against Memcached at the distributed cache (with no code changes required).

The new distributed cache can be configured to run in one of two ways:

1) Using a co-located cache approach. In this option you allocate a percentage of memory from your existing web and worker roles to be used by the cache, and the cache then joins that memory into one large distributed cache. Any data put into the cache by one role instance can be accessed by other role instances in your application - regardless of whether the cached data is stored in this role or another one. The big benefit of the "co-located" cache option is that it is free (you don't have to pay anything to enable it) and it lets you use what might otherwise be unused memory within your application's VMs.

2) Alternatively, you can add "cache worker roles" to your cloud service that are used solely for caching. These are also joined into one large distributed cache ring that other roles within your application can access. You can use these roles to cache tens or hundreds of GBs of data in memory extremely effectively - and the cache can be elastically grown or shrunk at runtime within your application:
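As a rough illustration of the cache API mentioned above, here is a minimal C# sketch of what client code against the AppFabric-style caching API typically looks like. It assumes the Windows Azure Caching client assemblies are referenced and a dataCacheClient section is configured in app.config/web.config; the key and value names are made up for the example:

using System;
using Microsoft.ApplicationServer.Caching;   // AppFabric/Windows Azure Caching client

class CacheDemo
{
    static void Main()
    {
        // Reads the dataCacheClient configuration from app.config/web.config
        DataCacheFactory factory = new DataCacheFactory();
        DataCache cache = factory.GetDefaultCache();

        // Put an item into the distributed cache...
        cache.Put("product:42", "Contoso Widget");

        // ...and read it back (possibly from a different role instance)
        string product = (string)cache.Get("product:42");
        Console.WriteLine(product);
    }
}

Because the cache is distributed, a Put from one role instance is visible to Get calls from any other instance in the same application, which is exactly the co-located/worker-role behavior described above.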
New SDKs and Tools Support
We've updated all of the Windows Azure SDKs (software development kits) with today's release to include new features and capabilities. Our SDKs are now available for multiple languages, and all of their source code is published under an Apache 2 license and maintained in GitHub repositories. The .NET SDK for Azure in particular has a bunch of great improvements with today's release, and now includes tooling support for both VS 2010 and VS 2012 RC. We are also now shipping SDK downloads for Windows, Mac and Linux in the languages offered on all of those systems - enabling developers to build Windows Azure applications using any operating system during development.

Much, Much More
The summary above is just a small list of some of the improvements being delivered in either preview or final form today - there is much more included in today's release. Among these improvements are new Virtual Private Networking capabilities, a new Service Bus runtime and its tooling support, the public preview of the new Azure Media Services, new data centers, a significant upgrade to the storage and networking hardware, SQL Reporting Services, new identity features, support for more than 40 new countries and territories, and much, much more.

You can learn more about Windows Azure and sign up to try it for free at http://windowsazure.com. You can also watch a live presentation I'll be giving at 1pm PDT today, June 7th, where I'll walk through all of the new features. We'll be opening up the new features I referred to above for public use a few hours after the presentation ends. We are really excited to see the great applications you will build with these new features.

Hope this helps,

- Scott

(Translated from the original post by Leniel Macaferi.)

    Read the article

  • Faceted search with Solr on Windows

    - by Dr.NETjes
With over 10 million hits a day, funda.nl is probably the largest ASP.NET website which uses Solr on a Windows platform. While all our data (i.e. real estate properties) is stored in SQL Server, we're using Solr 1.4.1 to return the faceted search results as fast as we can. And yes, Solr is very fast. We did some heavy stress testing on our Solr service, which allowed us to do over 1,000 req/sec on a single 64-bit Solr instance; and that's including converting search URLs to Solr HTTP queries and deserializing Solr's result XML back to .NET objects! Let me tell you about faceted search and how to integrate Solr in a .NET/Windows environment. I'll bet it's easier than you think :-)

What is faceted search?

Faceted search is the clustering of search results into categories, allowing users to drill into search results. By showing the number of hits for each facet category, users can easily see how many results match that category. If you're still a bit confused, this example from CNET explains it all:

The SQL solution for faceted search

Our ("pre-Solr") solution for faceted search was done by adding a lot of redundant columns to our SQL tables and doing a COUNT(...) for each of those columns. So if a user was searching for real estate properties in the city 'Amsterdam', our facet query would be something like:

SELECT COUNT(hasGarden), COUNT(has2Bathrooms), COUNT(has3Bathrooms), COUNT(etc...) FROM Houses WHERE city = 'Amsterdam'

While this solution worked fine for a couple of years, it wasn't very easy for developers to add new facets. And also, performing COUNTs on all matched rows only performs well if you have a limited number of rows in a table (i.e. less than a million).

Enter Solr

"Solr is an open source enterprise search server based on the Lucene Java search library, with XML/HTTP and JSON APIs, hit highlighting, faceted search, caching, replication, and a web administration interface." (quoted from Wikipedia's page on Solr)

Solr isn't a database, it's more like a big index. Every time you upload data to Solr, it will analyze the data and create an inverted index from it (like the index pages of a book). This way Solr can look up data very quickly. Explaining the inner workings of Solr is beyond the scope of this post, but if you want to learn more, please visit the Solr Wiki pages.

Getting faceted search results from Solr is very easy; first let me show you how to send an HTTP query to Solr:

   http://localhost:8983/solr/select?q=city:Amsterdam

This will return an XML document containing the search results (in this example only three houses in the city of Amsterdam):

   <response>
      <result name="response" numFound="3" start="0">
         <doc>
            <long name="id">3203</long>
            <str name="city">Amsterdam</str>
            <str name="steet">Keizersgracht</str>
            <int name="numberOfBathrooms">2</int>
         </doc>
         <doc>
            <long name="id">3205</long>
            <str name="city">Amsterdam</str>
            <str name="steet">Vondelstraat</str>
            <int name="numberOfBathrooms">3</int>
         </doc>
         <doc>
            <long name="id">4293</long>
            <str name="city">Amsterdam</str>
            <str name="steet">Wibautstraat</str>
            <int name="numberOfBathrooms">2</int>
         </doc>
      </result>
   </response>

By adding a facet query part for the field "numberOfBathrooms", Solr will return the facets for this particular field.
We will see that there's one house in Amsterdam with three bathrooms and two houses with two bathrooms:

   http://localhost:8983/solr/select?q=city:Amsterdam&facet=true&facet.field=numberOfBathrooms

The complete XML response from Solr now looks like:

   <response>
      <result name="response" numFound="3" start="0">
         <doc>
            <long name="id">3203</long>
            <str name="city">Amsterdam</str>
            <str name="steet">Keizersgracht</str>
            <int name="numberOfBathrooms">2</int>
         </doc>
         <doc>
            <long name="id">3205</long>
            <str name="city">Amsterdam</str>
            <str name="steet">Vondelstraat</str>
            <int name="numberOfBathrooms">3</int>
         </doc>
         <doc>
            <long name="id">4293</long>
            <str name="city">Amsterdam</str>
            <str name="steet">Wibautstraat</str>
            <int name="numberOfBathrooms">2</int>
         </doc>
      </result>
      <lst name="facet_fields">
         <lst name="numberOfBathrooms">
            <int name="2">2</int>
            <int name="3">1</int>
         </lst>
      </lst>
   </response>

Trying Solr for yourself

To run Solr on your local machine and experiment with it, you should read the Solr tutorial. This tutorial really only takes 1 hour, in which you install Solr, upload sample data and get some query results. And yes, it works on Windows without a problem. Note that in the Solr tutorial, you're using Jetty as a Java Servlet Container (that's why you must start it using "java -jar start.jar"). In our environment we prefer to use Apache Tomcat to host Solr, which installs like a Windows service and works more like .NET developers expect. See the SolrTomcat page.

Some best practices for running Solr on Windows:

- Use the 64-bit version of Tomcat. In our tests, this doubled the req/sec we were able to handle!
- Use a .NET XmlReader to convert Solr's XML output stream to .NET objects. Don't use XPath; it won't scale well (a sketch follows below).
- Use filter queries (the "fq" parameter) instead of the normal "q" parameter where possible. Filter queries are cached by Solr and will speed up Solr's response time (see FilterQueryGuidance).

In my next post I'll talk about how to keep Solr's indexed data in sync with the data in your SQL tables. Timestamps / rowversions will help you out here!
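To make the XmlReader advice concrete, here is a minimal sketch of streaming the response format shown above into .NET objects; the House class and the field mapping are illustrative, not funda.nl's actual code:

using System;
using System.Collections.Generic;
using System.Net;
using System.Xml;

class House
{
    public long Id;
    public string City;
    public string Street;
    public int NumberOfBathrooms;
}

static class SolrClient
{
    // Streams Solr's response XML forward-only and materializes House
    // objects - no DOM, no XPath, so memory use stays flat under load.
    public static List<House> Query(string url)
    {
        var houses = new List<House>();
        var request = WebRequest.Create(url);
        using (var response = request.GetResponse())
        using (var reader = XmlReader.Create(response.GetResponseStream()))
        {
            House current = null;
            string field = null;
            while (reader.Read())
            {
                switch (reader.NodeType)
                {
                    case XmlNodeType.Element:
                        if (reader.Name == "doc") { current = new House(); houses.Add(current); }
                        else if (current != null) field = reader.GetAttribute("name");
                        break;
                    case XmlNodeType.Text:
                        if (current != null && field != null)
                        {
                            switch (field)
                            {
                                case "id": current.Id = long.Parse(reader.Value); break;
                                case "city": current.City = reader.Value; break;
                                case "steet": current.Street = reader.Value; break; // field name as in the sample schema
                                case "numberOfBathrooms": current.NumberOfBathrooms = int.Parse(reader.Value); break;
                            }
                            field = null;
                        }
                        break;
                    case XmlNodeType.EndElement:
                        if (reader.Name == "doc") current = null;
                        break;
                }
            }
        }
        return houses;
    }

    static void Main()
    {
        foreach (var h in Query("http://localhost:8983/solr/select?q=city:Amsterdam"))
            Console.WriteLine("{0}, {1} ({2} bathrooms)", h.Street, h.City, h.NumberOfBathrooms);
    }
}

Because the reader only moves forward over the stream, memory use stays constant no matter how many documents Solr returns - which is what lets this approach hold up at high request rates.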

    Read the article

  • T-SQL Tuesday #33: Trick Shots: Undocumented, Underdocumented, and Unknown Conspiracies!

    - by Most Valuable Yak (Rob Volk)
    Mike Fal (b | t) is hosting this month's T-SQL Tuesday on Trick Shots.  I love this choice because I've been preoccupied with sneaky/tricky/evil SQL Server stuff for a long time and have been presenting on it for the past year.  Mike's directives were "Show us a cool trick or process you developed…It doesn’t have to be useful", which most of my blogging definitely fits, and "Tell us what you learned from this trick…tell us how it gave you insight in to how SQL Server works", which is definitely a new concept.  I've done a lot of reading and watching on SQL Server Internals and even attended training, but sometimes I need to go explore on my own, using my own tools and techniques.  It's an itch I get every few months, and, well, it sure beats workin'. I've found some people to be intimidated by SQL Server's internals, and I'll admit there are A LOT of internals to keep track of, but there are tons of excellent resources that clearly document most of them, and show how knowing even the basics of internals can dramatically improve your database's performance.  It may seem like rocket science, or even brain surgery, but you don't have to be a genius to understand it. Although being an "evil genius" can help you learn some things they haven't told you about. ;) This blog post isn't a traditional "deep dive" into internals, it's more of an approach to find out how a program works.  It utilizes an extremely handy tool from an even more extremely handy suite of tools, Sysinternals.  I'm not the only one who finds Sysinternals useful for SQL Server: Argenis Fernandez (b | t), Microsoft employee and former T-SQL Tuesday host, has an excellent presentation on how to troubleshoot SQL Server using Sysinternals, and I highly recommend it.  Argenis didn't cover the Strings.exe utility, but I'll be using it to "hack" the SQL Server executable (DLL and EXE) files. Please note that I'm not promoting software piracy or applying these techniques to attack SQL Server via internal knowledge. This is strictly educational and doesn't reveal any proprietary Microsoft information.  And since Argenis works for Microsoft and demonstrated Sysinternals with SQL Server, I'll just let him take the blame for it. :P (The truth is I've used Strings.exe on SQL Server before I ever met Argenis.) Once you download and install Strings.exe you can run it from the command line.  For our purposes we'll want to run this in the Binn folder of your SQL Server instance (I'm referencing SQL Server 2012 RTM): cd "C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\Binn" C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\Binn> strings *sql*.dll > sqldll.txt C:\Program Files\Microsoft SQL Server\MSSQL11\MSSQL\Binn> strings *sql*.exe > sqlexe.txt   I've limited myself to DLLs and EXEs that have "sql" in their names.  There are quite a few more but I haven't examined them in any detail. (Homework assignment for you!) If you run this yourself you'll get 2 text files, one with all the extracted strings from every SQL DLL file, and the other with the SQL EXE strings.  You can open these in Notepad, but you're better off using Notepad++, EditPad, Emacs, Vim or another more powerful text editor, as these will be several megabytes in size. And when you do open it…you'll find…a TON of gibberish.  (If you think that's bad, just try opening the raw DLL or EXE file in Notepad.  And by the way, don't do this in production, or even on a running instance of SQL Server.)  
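As an aside, the core of what Strings.exe does is simple enough to sketch in a few lines of C# - this is a rough ASCII-only illustration, not a substitute for the real tool (which also finds Unicode strings and has more options):

using System;
using System.IO;
using System.Text;

class MiniStrings
{
    // Rough equivalent of Sysinternals Strings.exe for ASCII text:
    // scan a binary file and print every run of printable characters
    // that is at least minLength characters long.
    static void Main(string[] args)
    {
        if (args.Length != 1) { Console.WriteLine("usage: MiniStrings <file>"); return; }

        const int minLength = 3;
        var current = new StringBuilder();

        foreach (byte b in File.ReadAllBytes(args[0]))
        {
            if (b >= 0x20 && b <= 0x7E)      // printable ASCII range
            {
                current.Append((char)b);
            }
            else
            {
                if (current.Length >= minLength)
                    Console.WriteLine(current.ToString());
                current.Clear();             // non-printable byte ends the run
            }
        }
        if (current.Length >= minLength)
            Console.WriteLine(current.ToString());
    }
}

Compile it and run MiniStrings sqlmin.dll > sqlmin.txt to get output roughly comparable to the ASCII portion of what Strings.exe produces.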
Even if you don't clean up the file, you can still use your editor's search function to find a keyword like "SELECT" or some other item you expect to be there.  As dumb as this sounds, I sometimes spend my lunch break just scanning the raw text for anything interesting.  I'm boring like that. Sometimes though, having these files available can lead to some incredible learning experiences.  For me the most recent time was after reading Joe Sack's post on non-parallel plan reasons.  He mentions a new SQL Server 2012 execution plan element called NonParallelPlanReason, and demonstrates a query that generates "MaxDOPSetToOne".  Joe (formerly on the Microsoft SQL Server product team, so he knows this stuff) mentioned that this new element was not currently documented and tried a few more examples to see what other reasons could be generated. Since I'd already run Strings.exe on the SQL Server DLLs and EXE files, it was easy to run grep/find/findstr for MaxDOPSetToOne on those extracts.  Once I found which files it belonged to (sqlmin.dll) I opened the text to see if the other reasons were listed.  As you can see in my comment on Joe's blog, there were about 20 additional non-parallel reasons.  And while it's not "documentation" of this underdocumented feature, the names are pretty self-explanatory about what can prevent parallel processing. I especially like the ones about cursors – more ammo! - and am curious about the PDW compilation and Cloud DB replication reasons. One reason completely stumped me: NoParallelHekatonPlan.  What the heck is a hekaton?  Google and Wikipedia were vague, and the top results were not in English.  I found one reference to Greek, stating "hekaton" can be translated as "hundredfold"; with a little more Wikipedia-ing this leads to hecto, the prefix for "one hundred" as a unit of measure.  I'm not sure why Microsoft chose hekaton for such a plan name, but having already learned some Greek I figured I might as well dig some more in the DLL text for hekaton.  Here's what I found: hekaton_slow_param_passing Occurs when a Hekaton procedure call dispatch goes to slow parameter passing code path The reason why Hekaton parameter passing code took the slow code path hekaton_slow_param_pass_reason sp_deploy_hekaton_database sp_undeploy_hekaton_database sp_drop_hekaton_database sp_checkpoint_hekaton_database sp_restore_hekaton_database e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\hkproc.cpp e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\matgen.cpp e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\matquery.cpp e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\sqlmeta.cpp e:\sql11_main_t\sql\ntdbms\hekaton\sqlhost\sqllang\resultset.cpp Interesting!  The first 4 entries (in red) mention parameters and "slow code".  Could this be the foundation of the mythical DBCC RUNFASTER command?  Have I been passing my parameters the slow way all this time? And what about those sp_xxxx_hekaton_database procedures (in blue)? Could THEY be the secret to a faster SQL Server? Could they promise a "hundredfold" improvement in performance?  Are these special, super-undocumented DIB (databases in black)? 
I decided to look in the SQL Server system views for any objects with hekaton in the name, or references to them, in hopes of discovering some new code that would answer all my questions: SELECT name FROM sys.all_objects WHERE name LIKE '%hekaton%' SELECT name FROM sys.all_objects WHERE object_definition(OBJECT_ID) LIKE '%hekaton%' Which revealed: name ------------------------ (0 row(s) affected) name ------------------------ sp_createstats sp_recompile sp_updatestats (3 row(s) affected)   Hmm.  Well that didn't find much.  Looks like these procedures are seriously undocumented, unknown, perhaps forbidden knowledge. Maybe a part of some unspeakable evil? (No, I'm not paranoid, I just like mysteries and thought that punching this up with that kind of thing might keep you reading.  I know I'd fall asleep without it.) OK, so let's check out those 3 procedures and see what they reveal when I search for "Hekaton": sp_createstats: -- filter out local temp tables, Hekaton tables, and tables for which current user has no permissions -- Note that OBJECTPROPERTY returns NULL on type="IT" tables, thus we only call it on type='U' tables   OK, that's interesting, let's go looking down a little further: ((@table_type<>'U') or (0 = OBJECTPROPERTY(@table_id, 'TableIsInMemory'))) and -- Hekaton table   Wellllll, that tells us a few new things: There's such a thing as Hekaton tables (UPDATE: I'm not the only one to have found them!) They are not standard user tables and probably not in memory UPDATE: I misinterpreted this because I didn't read all the code when I wrote this blog post. The OBJECTPROPERTY function has an undocumented TableIsInMemory option Let's check out sp_recompile: -- (3) Must not be a Hekaton procedure.   And once again go a little further: if (ObjectProperty(@objid, 'IsExecuted') <> 0 AND ObjectProperty(@objid, 'IsInlineFunction') = 0 AND ObjectProperty(@objid, 'IsView') = 0 AND -- Hekaton procedure cannot be recompiled -- Make them go through schema version bumping branch, which will fail ObjectProperty(@objid, 'ExecIsCompiledProc') = 0)   And now we learn that hekaton procedures also exist, they can't be recompiled, there's a "schema version bumping branch" somewhere, and OBJECTPROPERTY has another undocumented option, ExecIsCompiledProc.  (If you experiment with this you'll find this option returns null, I think it only works when called from a system object.) This is neat! Sadly sp_updatestats doesn't reveal anything new, the comments about hekaton are the same as sp_createstats.  But we've ALSO discovered undocumented features for the OBJECTPROPERTY function, which we can now search for: SELECT name, object_definition(OBJECT_ID) FROM sys.all_objects WHERE object_definition(OBJECT_ID) LIKE '%OBJECTPROPERTY(%'   I'll leave that to you as more homework.  I should add that searching the system procedures was recommended long ago by the late, great Ken Henderson, in his Guru's Guide books, as a great way to find undocumented features.  That seems to be really good advice! Now if you're a programmer/hacker, you've probably been drooling over the last 5 entries for hekaton (in green), because these are the names of source code files for SQL Server!  Does this mean we can access the source code for SQL Server?  As The Oracle suggested to Neo, can we return to The Source??? Actually, no. Well, maybe a little bit.  
While you won't get the actual source code from the compiled DLL and EXE files, you'll get references to source files, debugging symbols, variables and module names, error messages, and even the startup flags for SQL Server.  And if you search for "DBCC" or "CHECKDB" you'll find a really nice section listing all the DBCC commands, including the undocumented ones.  Granted those are pretty easy to find online, but you may be surprised what those web sites DIDN'T tell you! (And neither will I, go look for yourself!)  And as we saw earlier, you'll also find execution plan elements, query processing rules, and who knows what else.  It's also instructive to see how Microsoft organizes their source directories, how various components (storage engine, query processor, Full Text, AlwaysOn/HADR) are split into smaller modules. There are over 2000 source file references, go do some exploring! So what did we learn?  We can pull strings out of executable files, search them for known items, browse them for unknown items, and use the results to examine internal code to learn even more things about SQL Server.  We've even learned how to use command-line utilities!  We are now 1337 h4X0rz!  (Not really.  I hate that leetspeak crap.) Although, I must confess I might've gone too far with the "conspiracy" part of this post.  I apologize for that, it's just my overactive imagination.  There's really no hidden agenda or conspiracy regarding SQL Server internals.  It's not The Matrix.  It's not like you'd find anything like that in there: Attach Matrix Database DM_MATRIX_COMM_PIPELINES MATRIXXACTPARTICIPANTS dm_matrix_agents   Alright, enough of this paranoid ranting!  Microsoft are not really evil!  It's not like they're The Borg from Star Trek: ALTER FEDERATION DROP ALTER FEDERATION SPLIT DROP FEDERATION   #tsql2sday

    Read the article

  • Clustering Basics and Challenges

    - by Karoly Vegh
For upcoming posts it seemed to be a good idea to dedicate some time to cluster basic concepts and theory. This post misses a lot of details that would explode the article's size; should you have questions, do not hesitate to ask them in the comments. The goal here is to get some concepts straight. I can't promise to give you complete definitions of cluster, cluster agent, quorum, voting, fencing, split brain condition, so the following is more of an explanation. Here we go. -------- Cluster, HA, failover, switchover, scalability -------- An attempted definition of a cluster: a cluster is a set (2+) of server nodes dedicated to keeping application services alive, communicating with each other through the cluster software/framework, testing and probing the health status of server nodes/services, and - with quorum-based decisions and switchover/failover techniques - keeping the application services running on them available. That is, should a node that runs a service unexpectedly lose functionality or connectivity, the other ones take over and run the services, so that availability is guaranteed. To provide availability while strictly sticking to a consistent cluster configuration is the main goal of a cluster.  At this point we have to add that this defines an HA cluster, a High-Availability cluster, where the cluster nodes are planned to run the services in an active-standby, or failover, fashion. An example could be a single-instance database. Some applications can be run in a distributed or scalable fashion. In the latter case instances of the application run actively on separate cluster nodes, serving service requests simultaneously. An example of this version could be a webserver that forwards connection requests to many backend servers in a round-robin way. Or a database running in an active-active RAC setup.  -------- Cluster architecture, interconnect, topologies -------- Now, what is a cluster made of? Servers, right. These servers (the cluster nodes) need to communicate. This of course happens over the network, usually over dedicated network interfaces interconnecting all the cluster nodes. These connections are called interconnects. How many cluster nodes are in a cluster? There are different cluster topologies. The most simple one is a clustered pair topology, involving only two cluster nodes. There are several more topologies; clicking the image above will take you to the relevant documentation. Also, to answer the question: Solaris Cluster allows you to run up to 16 servers in a cluster. Where shall these cluster nodes be placed? A very important question. The right answer is: it depends on what you plan to achieve with the cluster. Do you plan to avoid only a server outage? Then you can place them right next to each other in the datacenter. Do you need to survive a datacenter outage? In that case of course you should place them at least in different fire zones. Or in two geographically distant datacenters to avoid disasters like floods, large-scale fires or power outages. We call this a stretched or campus cluster, the cluster nodes being several kilometers away from each other. To cover really large distances, you probably need to move to a geocluster, which is a different kind of animal.  What is a geocluster? A Geographic Cluster in Solaris Cluster terms is actually a metacluster between two separate (locally HA) clusters.  -------- Cluster resource types, agents, resources, resource groups -------- So how does the cluster manage my applications?
The cluster needs to start, stop and probe your applications. If your application runs, the cluster needs to check regularly whether the application state is healthy: does it respond over the network, does it have all its processes running, etc. This is called probing. If the cluster deems the application to be in a faulty state, it can try to restart it locally or decide to switch the service (stop on node A, start on node B). Starting, stopping and probing are the three actions that a cluster agent performs. There are many different kinds of agents included in Solaris Cluster, but you can build your own too. Examples are an agent that manages (mounts, moves) ZFS filesystems, or the Oracle DB HA agent that cares about the database, or an agent that moves a floating IP address between nodes. There are lots of other agents included for Apache, Tomcat, MySQL, Oracle DB, Oracle Weblogic, Zones, LDoms, NFS, DNS, etc.

We also need to clarify the difference between a cluster resource and a cluster resource group. A cluster resource is something that is managed by a cluster agent. Cluster resource types are included in Solaris Cluster (see above, e.g. HAStoragePlus, HA-Oracle, LogicalHost). You can group cluster resources into cluster resource groups, and switch these groups together from one node to another. To stick to the example above: to move an Oracle DB service from one node to another, you have to switch the group between nodes, and the agents of the cluster resources in the group will do the following.

On node A:
- Shut down the DB
- Unconfigure the LogicalHost IP the DB listener listens on
- Unmount the filesystem

Then, on node B:
- Mount the FS
- Configure the IP
- Start up the DB

-------- Voting, Quorum, Split Brain Condition, Fencing, Amnesia --------

How do the cluster nodes agree upon their actions? How do they decide which node runs what services? Another important question. Running a cluster is a strictly democratic thing. Every node has votes, and you need the majority of votes to have the deciding power. Now, this is usually no problem; cluster nodes think very much alike. Still, every action in a production system has to be governed and agreed upon. Agreeing is easy as long as the cluster nodes all behave and talk to each other over the interconnect. But if the interconnect is gone or down, this all gets tricky and confusing. Cluster nodes think like this: "My job is to run these services. The other node does not answer my interconnect communication, it must be down. I'd better take control and run the services!". The problem is, as I have already mentioned, cluster nodes very much think alike. If the interconnect is gone, they all assume the other node is down, and they all want to mount the data backend, enable the IP and run the database. Double IPs, double mounts, double DB instances - now that is trouble. Also, in a 2-node cluster each node holds only 50% of the votes, that is, neither of them alone is allowed to run a cluster.

This is where you need a quorum device. According to Wikipedia, the "requirement for a quorum is protection against totally unrepresentative action in the name of the body by an unduly small number of persons." The nodes need additional votes to run the cluster, which is why a 2-node cluster needs a quorum device or a quorum server. If the interconnect is gone (this is what we call a split brain condition), both nodes start to race and try to reserve the quorum device for themselves. They do this because the quorum device bears an additional vote that can ensure a majority (50% + 1).
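The vote arithmetic is simple enough to sketch in a few lines of code; the numbers below just illustrate the 2-node-plus-quorum-device case described here:

using System;

class QuorumMath
{
    static void Main()
    {
        // Two clusternodes with one vote each, plus one quorum device vote.
        int nodeVotes = 2;
        int quorumDeviceVotes = 1;
        int totalVotes = nodeVotes + quorumDeviceVotes;   // 3
        int neededForMajority = totalVotes / 2 + 1;       // 2, i.e. 50% + 1

        // Split brain: each node sees only its own single vote. The node
        // that wins the race for the quorum device gains its extra vote.
        int winner = 1 + quorumDeviceVotes;               // 2 votes
        int loser = 1;                                    // 1 vote

        Console.WriteLine("Votes needed for majority: " + neededForMajority);
        Console.WriteLine("Winner may run the cluster: " + (winner >= neededForMajority)); // True
        Console.WriteLine("Loser must panic/reboot:    " + (loser < neededForMajority));   // True
    }
}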
The node that manages to lock the quorum device (e.g. if it's an FC LUN, it SCSI-reserves it) wins the right to build and run a cluster; the other one - realizing it was late - panics/reboots to ensure the cluster configuration stays consistent.

Losing the interconnect doesn't only endanger the availability of services, it also endangers the consistency of the cluster configuration. Just imagine node A being down while the cluster configuration changes. Now node B goes down, and node A comes up. It isn't up to date about the changes to the cluster configuration, so it will refuse to start a cluster, since that would lead to cluster amnesia: the cluster had some changes, but would now run with an older state of the cluster configuration repository - as if it had forgotten about the changes.

Also, to ensure application data consistency, the cluster node that wins the race makes sure that a server that isn't part of the cluster, or can't currently join it, cannot access the shared devices. This procedure is called fencing. It usually happens to storage LUNs via SCSI reservation.

Now, another important question: where do I place the quorum disk? Imagine having two sites, two separate datacenters, one in the north of the city and the other one in the south part of it. You run a stretched cluster in the clustered pair topology. Where do you place the quorum disk/server? If you put it into the north DC, and that gets hit by a meteor, you lose one cluster node, which isn't a problem - but you also lose your quorum, and the south cluster node can't keep the cluster running for lack of votes. This problem can't be solved with two sites and a campus cluster. You will need a third site, either to place the quorum server at or to host a third cluster node. Otherwise, lacking a majority, if you lose the site that had your quorum, you lose the cluster.

Okay, we covered the very basics. We haven't talked about virtualization support, CCR, cluster filesystems, DID devices, affinities, storage replication, management tools, upgrade procedures - should those be of interest to you, let me know in the comments, along with any other questions. Given enough demand I'd be glad to write a follow-up post too. Now I really want to move on to the second part in the series: cluster installation.  Oh, and as for additional sources of information, I recommend the documentation: http://docs.oracle.com/cd/E23623_01/index.html, and the OTN Oracle Solaris Cluster site: http://www.oracle.com/technetwork/server-storage/solaris-cluster/index.html

    Read the article

  • How to Avoid Your Next 12-Month Science Project

    - by constant
While most customers immediately understand how the magic of Oracle's Hybrid Columnar Compression, intelligent storage servers and flash memory makes Exadata uniquely powerful against home-grown database systems, some people think that Exalogic is nothing more than a bunch of x86 servers, a storage appliance and an InfiniBand (IB) network, built into a single rack. After all, isn't this exactly what the High Performance Computing (HPC) world has been doing for decades? On the surface, this may be true. And some people tried exactly that: they tried to put together their own version of Exalogic, only to discover that there's a lot more to building a system than buying hardware and assembling it. IT is not Ikea. Why is that so? Could it be there's more going on behind the scenes than merely putting together a bunch of servers, a storage array and an InfiniBand network into a rack? Let's explore some of the special sauce that makes Exalogic unique and un-copyable, so you can save yourself from your next 6- to 12-month science project that distracts you from doing real work that adds value to your company.

Engineering Systems is Hard Work!

The backbone of Exalogic is its InfiniBand network: 4 times the bandwidth of even 10 Gigabit Ethernet, and only about a tenth of its latency. What a potential for increased scalability and throughput across the middleware and database layers! But InfiniBand is a beast that needs to be tamed: it is true that Exalogic uses a standard, open-source OpenFabrics Enterprise Distribution (OFED) InfiniBand driver stack. Unfortunately, this software has been developed by the HPC community with top speed in mind (which is good) but, despite the name, not many other enterprise-class requirements are included (which is less good). Here are some of the improvements that Oracle's InfiniBand development team had to add to the OFED stack to make it enterprise-ready, simply because typical HPC users didn't have the need to implement them:

- More than 100 bug fixes in the pieces not related to the Message Passing Interface (MPI), which is the protocol HPC users rely on most of the time but which is less useful in the enterprise.
- Performance optimizations and tuning across the whole IB stack: from switches, Host Channel Adapters (HCAs) and drivers to low-level protocols, middleware and applications. Yes, even the standard HPC IB stack could be improved in terms of performance.
- Ethernet over IB (EoIB): Exalogic uses InfiniBand internally to reach high performance, but it needs to play nicely with the datacenters around it. That's why Oracle added Ethernet over InfiniBand technology, which allows for creating many virtual 10GbE adapters inside Exalogic's nodes that are aggregated and connected to Exalogic's IB gateway switches. While this is an open standard, it's up to the vendor to implement it. In this case, Oracle integrated the EoIB stack with Oracle's own IB-to-10GbE gateway switches, and made it fully virtualized from the beginning. This means that Exalogic customers can completely rewire their server infrastructure inside the rack without having to physically pull or plug a single cable - a must-have for every cloud deployment. Anybody who wants to match this level of integration would need to add an InfiniBand switch development team to their project. Or just buy Oracle's gateway switches, which are conveniently shipped with a whole server infrastructure attached!
- IPv6 support for InfiniBand's Sockets Direct Protocol (SDP), Reliable Datagram Sockets (RDS), TCP/IP over IB (IPoIB) and EoIB protocols. Because no IPv6 = not very enterprise-class.
- HA capability for SDP. High availability is not a big requirement for HPC, but for enterprise-class application servers it is. Every node in Exalogic's InfiniBand network is connected twice for redundancy. If any cable, port or HCA fails, there's always a replacement link ready to take over. This requires extra magic at the protocol level to work. So in addition to Weblogic's failover capabilities, Oracle implemented IB automatic path migration at the SDP level to avoid unnecessary failover operations at the middleware level.
- Security, for example spoof protection. Another feature that is less important for traditional users of InfiniBand, but very important for enterprise customers.
- InfiniBand Partitioning and Quality-of-Service (QoS): One of the first questions we get from customers about Exalogic is: "How can we implement multi-tenancy?" The answer is to partition your IB network, which effectively creates many networks that work independently and that are protected at the lowest networking layer possible. In addition to that, QoS allows administrators to prioritize traffic flow in multi-tenancy environments so they can keep their service levels where it matters most.
- Resilient IB fabric management: InfiniBand is a self-managing network, so a lot of the magic lies in coming up with the right topology and in teaching the subnet manager how to properly discover and manage the network. Oracle's InfiniBand switches come with pre-integrated, highly available fabric management with seamless integration into Oracle Enterprise Manager Ops Center.

In short: Oracle elevated the OFED InfiniBand stack into an enterprise-class networking infrastructure. Many years and multiple teams of manpower went into the above improvements - this is something you can only get from Oracle, because no other InfiniBand vendor can give you these features across the whole stack!

Exabus: Because it's not About the Size of Your Network, it's How You Use it!

So let's assume that you somehow were able to get your hands on an enterprise-class IB driver stack. Or maybe you don't care and are just happy with the standard OFED one? Anyway, the next step is to actually leverage that InfiniBand performance. Here are the choices:

1. Use traditional TCP/IP on top of the InfiniBand stack.
2. Develop your own integration between your middleware and the lower-level (but faster) InfiniBand protocols.

While more bandwidth is always a good thing, it's actually the low latency that enables superior performance for your applications when running on any networking infrastructure: the lower the latency, the faster the response travels through the network and the more transactions you can close per second. The reason why InfiniBand is such a low-latency technology is that it gets rid of most if not all of your traditional networking protocol stack: data is literally beamed from one region of RAM in one server into another region of RAM in another server, with no kernel/driver/UDP/TCP or other networking stack overhead involved! Which makes option 1 a no-go: adding TCP/IP on top of InfiniBand is like adding training wheels to your racing bike. It may be OK in the beginning and for development, but it's not quite the performance IB was meant to deliver. Which only leaves option 2: integrating your middleware with fast, low-level InfiniBand protocols.
And this is what Exalogic's "Exabus" technology is all about. Here are a few Exabus features that help applications leverage the performance of InfiniBand in Exalogic:

- RDMA and SDP integration at the JDBC driver level (SDP), and for Oracle Weblogic (SDP), Oracle Coherence (RDMA), Oracle Tuxedo (RDMA) and the new Oracle Traffic Director (RDMA) on Exalogic. Using these protocols, middleware components can communicate a lot faster with each other and with the Oracle database than over standard networking protocols.
- Seamless integration of Ethernet over InfiniBand from Exalogic's gateway switches into the OS.
- Oracle Weblogic optimizations for handling massive amounts of parallel transactions. Because if you have an 8-lane Autobahn, you also need to improve your ramps so you can feed it with many cars in parallel.
- Integration of Weblogic with Oracle Exadata for faster performance, optimized session management and failover.

As you see, "Exabus" is Oracle's word for describing all the InfiniBand enhancements Oracle put into Exalogic: OFED stack enhancements, protocols for faster IB access, and InfiniBand support and optimizations at the virtualization and middleware levels. All working together to deliver the full potential of InfiniBand performance. Who else has 100% control over their middleware so they can develop their own low-level protocol integration with InfiniBand? Even if you take an open-source approach, you're looking at years of development work to create, test and support a whole new networking technology in your middleware!

The Extras: Less Hassle, More Productivity, Faster Time to Market

And then there are the other advantages of Engineered Systems that are as true for Exalogic as they are for every other Engineered System:

- One simple purchasing process: no headaches due to endless RFPs and no "Will X work with Y?" uncertainties.
- Everything has been engineered together: all kinds of bugs and problems have already been fixed at the design level that would otherwise only have manifested themselves after you had built the system from scratch.
- Everything is built, tested and integrated at the factory. Less integration pain for you, faster time to market.
- Every Exalogic machine world-wide is identical to Oracle's own machines in the lab: instant replication of any problems you may encounter, faster time to resolution.
- Simplified patching, management and operations.
- One throat to choke: imagine the finger-pointing hell of systems that have been put together from several different vendors. Oracle's Engineered Systems have a single phone number that customers can call to get their problems solved.

For more business-centric values, read The Business Value of Engineered Systems.

Conclusion: Buy Exalogic, or Get Ready for a 6-12 Month Science Project

And here's the reason why it's not easy to "build your own Exalogic": there's a lot of work required to make such a system fly. In fact, anybody who is starting to "just put together a bunch of servers and an InfiniBand network" is really looking at a 6-12 month science project. And the outcome is unlikely to be very enterprise-class. And it won't have Exalogic's performance either. Because building an Engineered System is literally rocket science: it takes a lot of time, effort, resources and many iterations of design/test/analyze/fix to build such a system. That's why InfiniBand has been reserved for HPC scientists for such a long time.
And only Oracle can bring the power of InfiniBand in an enterprise-class, ready-to-use, pre-integrated version to customers, without the develop/integrate/support pain. For more details, check the new Exalogic overview white paper, which was updated only recently.

P.S.: Thanks to my colleagues Ola, Paul, Don and Andy for helping me put together this article!

    Read the article

  • CodePlex Daily Summary for Tuesday, October 15, 2013

CodePlex Daily Summary for Tuesday, October 15, 2013

Popular Releases

- FFXIV Crafting Simulator: Crafting Simulator 2.4.1: Fixed the offset for the new patch (Auto Loading function).
- iBoxDB.EX - Fast Transactional NoSQL Database Resources: iBoxDB.net fast transactional nosql database 1.5.2: Easily process objects and documents, zero configuration. Fast embeddable transactional nosql document database; includes CURD, QueryLanguage, Master-Master-Slave Replication, MVCC, etc. Supports .net2, .net4, Windows Phone, Mono, Unity3D, Node.js; copy and run. http://download-codeplex.sec.s-msft.com/Download?ProjectName=iboxdb&DownloadId=737783 Benchmark with MongoDB. Compatibility: more platforms for the Java version.
- neurogoody: slicebox: this is the slice box js
- Event-Based Components AppBuilder: AB3.AppDesigner.55: Iteration 55 (Feature): Moving of TargetEdge (simple wires only) by mouse.
- Sandcastle Help File Builder: SHFB v1.9.8.0 with Visual Studio Package: General Information. IMPORTANT: On some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right click on the ZIP file, select Properties, and click on the Unblock button if it is present in the lower right corner of the General tab in the properties dialog. This new release contains bug fixes and feature enhancements. There are some potential breaking changes in this release as some features of the Help File Builder have been moved into...
- SharpConfig: SharpConfig 1.2: Implemented comment parsing. Comments are now part of settings and setting categories. New properties: Setting: Comment, PreComments; SettingCategory: Comment, PreComments.
- C++ REST SDK (codename "Casablanca"): C++ REST SDK 1.3.0: This release fixes multiple customer reported issues as well as the following: full support for Dev12 binaries and project files; full support for Windows XP; new sample highlighting the Client and Server APIs: BlackJack; expose underlying native handle to set custom options on http_client; improvements to Listener Library. Note: Dev10 binaries have been dropped as of this release, however the Dev10 project files are still available in the Source Code.
- AD ACL Scanner: 1.3.2: Minor bug fixed: PowerShell 4.0 will report: Select-Object: Parameter cannot be processed because the parameter name p is ambiguous.
- Json.NET: Json.NET 5.0 Release 7: New feature - Added support for Immutable Collections. New feature - Added WriteData and ReadData settings to DataExtensionAttribute. New feature - Added reference and type name handling support to extension data. New feature - Added default value and required support to constructor deserialization. Change - Extension data is now written when serializing. Fix - Added missing casts to JToken. Fix - Fixed parsing large floating point numbers. Fix - Fixed not parsing some ISO date ...
- RESX Manager: ResxManager 0.2.1: FIXED: Many critical bugs have been fixed. New features: error logging for improved exception handling, new toolbar, improvements of the user interface.
- Fast YouTube Downloader: YouTube Downloader 2.2.0: YouTube Downloader 2.2.0
- VidCoder: 1.5.8 Beta: Added hardware acceleration options: Bicubic OpenCL scaling algorithm, QSV decoding/encoding and DXVA decoding. Updated HandBrake core to SVN 5834. Updated VidCoder setup icon. Fixed crash when choosing the mp4v2 container on x86 and opening on x64.
  Warning: the hardware acceleration features require specific hardware or file types to work correctly: QSV: need an Intel processor that supports Quick Sync Video encoding, with a monitor hooked up to the Intel HD Graphics output and the lat...
- ASP.net MVC Awesome - jQuery Ajax Helpers: 3.5.2: version 3.5.2 - fix for setting single value to multivalue controls; datepicker min/max date offset fix; html encoding for keys fix; enable Column.ClientFormatFunc to be a function call that will return a function. version 3.5.1 - fixed html attributes rendering; fixed loading animation rendering; css improvements. version 3.5 - autosize for all popups (can be turned off by calling in js awe.autoSize = false); added Parent, Paremeter extensions ...
- Wsus Package Publisher: Release v1.3.1310.12: Allow the Update Creation Wizard to be set in full screen mode. Fix a bug which prevented WPP from resetting the Remote Sus Client ID. Change the behavior of links in the Update Detail Viewer: left-click to open, right-click to copy to the clipboard.
- TerrariViewer: TerrariViewer v7 [Terraria Inventory Editor]: This is a complete overhaul but has the same core style. I hope you enjoy it. This version is compatible with 1.2.0.3. Please send issues to my Twitter or https://github.com/TJChap2840
- WDTVHubGen - Adds Metadata, thumbnails and subtitles to WDTV Live Hubs: WDTVHubGen.v2.1.6.maint: I think this covers all of the issues. New additions: fixed the thumbnail problem for backgrounds; general clean up and error checking. Need to get this put through the wringer and all feedback is welcome.
- BIDS Helper: BIDS Helper 1.6.4: This BIDS Helper release brings the following new features and fixes: New features: a new Bus Matrix style report option when you run the Printer Friendly Dimension Usage report for an SSAS cube; the Biml engine is now fully in sync with the supported subset of Varigence Mist 3.4, which includes a large number of language enhancements, bugfixes, and project deployment support. Fixed issues: fixed Biml execution for project connections, fixing a bug with Tabular Translations Editor not a...
- MoreTerra (Terraria World Viewer): MoreTerra 1.11.3: New features: new markers added for Plantera's Bulb, Heart Fruits and Gold Cache; markers now correctly display for the gems found in rock debris on the floor. Compatibility: fixed header changes found in Terraria 1.0.3.1.
- Media Companion: Media Companion MC3.581b: Fix in place for TVDB xml issue. New: * Movie - General Preferences, allow saving of ignored 'The' or 'A' to end of movie title, stored in sorttitle field. * Movie - New Way for Cropping Posters. Fixed: * Movie - Rename of folders/filename, caught error message. * Movie - Fixed bug in Save Cropped image, only saving in Pre-Frodo format if Both model selected. * Movie - Fixed Cropped image didn't take zoomed ratio into effect. * Movie - Separated Folder Renaming and File Renaming fuctions durin...
- SmartStore.NET - Free ASP.NET MVC Ecommerce Shopping Cart Solution: SmartStore.NET 1.2.0: Highlights: multi-store support; "Trusted Shops" plugins; highly improved SmartStore.biz Importer plugin; add custom HTML content to pages; performance optimization. New features: multi-store support: now multiple stores can be managed within a single application instance (e.g. for building different catalogs, brands, landing pages etc.).
  Added 3 new Trusted Shops plugins: Seal, Buyer Protection, Store Reviews. Added Display as HTML Widget to CMS Topics (store owner now can add arbitrary HT...

New Projects

- Artezio SharePoint 2013 Workflow Activities: SharePoint Workflow 2013 doesn't provide activities to work with permissions; we've fixed it using an HttpSend activity that makes REST API calls.
- Dependency.Injection: An attempt to write a really simple dependency injection framework. Does property-based and recursive dependency injection. Handles singletons. Yay!
- DHGMS SUO Killer: SUO Killer is a Visual Studio extension to deal with the removal of SUO files to mitigate SUO-related issues in Visual Studio. This project is written in C#.
- dynamicsheet: dynamicsheet
- Excel Comparator: Excel Comparator is an add-in for Microsoft Excel that allows the user to compare a range between two sheets.
- FetchAIP: FetchAIP is a utility to download the various sections of the Aeronautical Information Publication (AIP) for New Zealand.
- Fluent Method and Type Builder: Still working on the summary.
- getboost: NuGet package for the Boost framework.
- Goldstone Forum: WebForms Forum - TelerikAcademy Team Project
- GroupMe Software Development Kit: .NET Software Development Kit for the http://groupme.com/ chat service.
- GSLMS: ----
- Import Excel Files Into SQL Server: Load Excel files into SQL Database without schema changes.
- Inaction: ??????????
- jBegin: Learning ASP.net MVC from the beginning; here will be the source code for jbegin.com.
- KDG's Statistical Quality Control Solver: This tool will include methods that can solve sample standard deviation, sample variance, median, mode, moving average, percentiles, margin of error, etc.
- kpi: Key Performance Indicator (KPI) ????; Visual Studio 2010 with .NET 4.0 runtime.
- LECO Remote Control Client Application: Sample code and binaries are provided to demonstrate the remote control capability of a LECO Cornerstone instrument.
- LinkPad: My first Windows Store app, intended for students to sketch up thoughts and concepts in quick diagrams.
- Modler.NET - Automating Graphical Data Model Co-Evolution: Modler.NET was the tool created for a Master's thesis project, which automates the co-evolution of graphical data models and the database that they represent.
- MyFileManager1: Summary
- Never Lotto: Korean 465 Lotto Analyzer and Simulator. The real purpose of this project is to show that this kind of lotto things are just shit.
- NHibernate: The purpose of this project is to demo CRUD operations using NHibernate with Mono in Visual Studio 2012 using the C# language.
- OAuth2 Authorizer: OAuth2 Authorizer helps you get the access code for a standard OAuth2 REST service that implements 3-legged authentication.
- Regular Expression for Excel: Regular Expression For Excel is an Excel plugin. It provides regular expression support in Excel. We can use it in Excel functions.
- Service Tester: Service Tester is an Azure Cloud based load testing application targeted at SOAP web services which allows you to invoke your web service with random parameters.
- Simple TypeScript and C# Class Generator: Simple GUI application to generate compatible class source code for C# and TypeScript for communications between C# and TypeScript.
- Soccer team management: ---
- Spanner: No more stringly-typed web development! Build statically typed single page web applications in C#, automatically generating all HTML, JavaScript, and Knockout.

    Read the article

  • ADDS: 1 - Introducing and designing

    - by marc dekeyser
What is ADDS?

Every Microsoft-oriented infrastructure in today's enterprises depends largely on Microsoft's Active Directory. It is the foundation stone on which all other products (Exchange, update services, Office Communicator, the System Center family, etc.) rely to get their information. And that is just looking at it from an infrastructure perspective. A well-designed and well-implemented Active Directory makes life a lot easier for IT personnel and users alike. Centralised management and the abilities opened up by having it in place are ample.

But what is Active Directory Domain Services? We can look at ADDS as a centralised directory containing all objects your infrastructure runs on in one way or another. Since it is a Microsoft product you'll obviously not be seeing Linux or Mac clients listed in here (exceptions exist), but in general we can say it contains everything your company has in place in one form or another.

The domain name services

The Domain Name System (or DNS for short) is a service which translates readable, easy-to-understand names into IP addresses (the identifiers for each computer in your domain). This service is a prerequisite for ADDS to work, and a wrong record in a DNS server will make ADDS services fail. Generally speaking, the DNS service will run on the same server as the ADDS service, but it is worthwhile to remember that this is not necessary. You could, for example, run your DNS services on a Linux box (which would need special preparation to host an ADDS-integrated DNS zone) and run the ADDS service off another box…
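Because so much hinges on correct name resolution, a quick lookup makes a handy sanity check. Here is a minimal C# sketch - the host name dc01.contoso.lan is a made-up example, not a convention you must follow:

using System;
using System.Net;

class DnsCheck
{
    static void Main()
    {
        // Resolve a (hypothetical) domain controller name the way any
        // ADDS-dependent service would before talking to it.
        IPHostEntry entry = Dns.GetHostEntry("dc01.contoso.lan");
        foreach (IPAddress address in entry.AddressList)
            Console.WriteLine("{0} -> {1}", entry.HostName, address);
    }
}

If a lookup like this returns a wrong or stale record for a domain controller, expect domain joins, logons and replication to misbehave long before anything else does.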
Where to start?

If the aim is to put in place a first-time implementation of ADDS in your enterprise, there are plenty of things to consider depending on what you are going to do in the long run. Great care has to be taken when first designing and implementing, as having it set up wrong will cause a headache down the line. It is for that reason that I like to start building from the bottom up: begin with a generic installation of ADDS (which will still differ for every client) and make it adaptable for future services which can hook into the existing environment. Adapting existing environments is out of scope for this document (and series), although it is possible to take the pointers and change your existing environment to run in a smoother manner. Take great care when changing things, as one small slip of the hand can give you a forest-wide failure…

Whenever starting with an ADDS deployment I ask the client the following questions:

- What are your long-term plans and goals?
- How flexible do you want it?
- Are you currently Linux-heavy and want to keep this, or can we go for an all-Microsoft design?

Those three questions should give some indication of what direction can be taken, and whether the client has thought about some things themselves :).

The technical side of things

Next to consider is what kind of infrastructure is already in place. For this series I'll keep it simple and introduce some general concepts without going into depth on integrating ADDS with other DNS services. Building from the ground up means we need to consider the layers on which our infrastructure will rely. In my view that goes as follows:

- Network (WAN/LAN links and physical sites)
- DNS namespacing
- All in one domain or split up in different domains/forests?
- Security (both for ADDS and physical sites)

The network side of things

Looking at how the network is currently set up can potentially teach us a great deal about the client. Do they have multiple physical sites? What network speeds exist between these sites, etc.? Depending on this information we will design our site links (which control replication) in future stages.

DNS namespacing

Maybe the single most interesting thing to know is what the domain will be named (ADDS will need a DNS domain with the same name) and where this will be hosted. Note that Active Directory can be set up with a single-label name (i.e. contoso instead of contoso.com), but it is highly recommended never to do this. If you do end up with a domain like that, for some reason there will be a lot of services that are going to give you good grief in the future (Exchange being one of them). So one of the best practices is always to use a two-part name (contoso.com or contoso.lan, for example).

Internal namespace

An internal namespace is just what it sounds like: you have a DNS domain which is different internally from what the client uses as an external namespace, e.g. contoso.com as an external name (out on the internet) and contoso.lan on the internal network. This setup has its advantages in that you gain more obscurity from the internet on the DNS side of things, but it will require additional work to publish services to the web.

External namespace

Quite like the internal namespace, only here you do not differentiate the internal namespace of the company from what is known on the internet. In this implementation you would host your own DNS servers for the external domain inside the network. In other words, any external computer doing a DNS lookup would contact your internal DNS server for the resolution. Generally speaking this setup is a bad idea from the security side of things.

Split DNS

Whilst using an external namespace design is fairly easy, it involves a lot of security risks. Opening up your ADDS DNS servers for lookups exposes your entire network to the internet and should be avoided at any cost. And that is where the "split DNS" design comes in. In this setup you would still have the same namespace internally and externally, but you would be using different DNS servers for lookups on the external network, and those servers have no records of your internal resources unless you explicitly publish them.

All in one or not?

In determining your Active Directory design you can look at the following possibilities:

- Single forest, single domain
- Single forest, multiple domains
- Multiple forests, multiple domains

I've listed the design possibilities in increasing order of administrative magnitude. Microsoft recommends trying to use a single forest, single domain in as many situations as possible.
It is, however, always possible that you require your services to be separated from your users in a resource forest, with trusts set up between the different forests. To start out I would go with the single-forest design to avoid complexity, unless there are strict requirements to have multiple forests.

Security

What kind of security is required on the domain, and does this reflect the physical security at the sites? Not every client can afford to have a domain controller in a secluded server room on every site, and it is exactly for that reason that Microsoft introduced the RODC (read-only domain controller). An RODC is a domain controller that has been limited in functionality: in essence, it will only cache the data you explicitly tell it to cache, and in the case of a DC compromise (it being stolen) only a limited number of accounts will be affected.

Th- Th- Th- That's all folks!

Well, at least for now! In future editions of this series we'll walk through the different tasks that need to be done and the thought which needs to be put into them. But throughout all editions we'll work from the concept of running a single forest, single domain with a split DNS setup… See you next time!

    Read the article
