Search Results

Search found 4846 results on 194 pages for 'rpg master'.


  • Configuring Samba to allow Use of CUPS printer

    - by Skizz
    Having trouble with samba printing. I have a CUPS printer installed on an Ubuntu 11.04 server and that works great. When I try to configure samba to allow an XP machine to use the printer, it fails when printing. I can install the printer drivers for XP from the server and the printer appears in the XP printer control panels. When I try to print a test page from the XP machine I get this error in the system event log: Jun 27 20:33:29 FatController smbd[3571]: [2012/06/27 20:33:29, 0] rpc_server/srv_netlog_nt.c:603(_netr_ServerAuthenticate3) Jun 27 20:33:29 FatController smbd[3571]: _netr_ServerAuthenticate3: netlogon_creds_server_check failed. Rejecting auth request from client JAMES machine account JAMES$ Here's my smb.conf file: [global] server string = %h (Server) workgroup = SODOR encrypt passwords = true security = user os level = 255 preferred master = yes domain master = yes local master = yes logon path = \\%L\profile\%U logon drive = S: logon home = \\%L\home\%U domain logons = yes map to guest = Never guest ok = no dns proxy = no time server = yes logon script = logon.bat load printers = yes printing = cups printcap name = cups nt acl support = no interfaces = eth1 lo bind interfaces only = yes smb ports = 445 [netlogon] comment = Net Log On path = /home/samba/netlogon guest ok = no read only = yes browseable = no [profile] comment = User Profiles path = /home/samba/profiles read only = no create mask = 0600 directory mask = 0700 browseable = no store dos attributes = yes [printers] comment = All Printers path = /var/spool/samba browseable = yes guest ok = no printable = yes [print$] comment = Printer Drivers path = /var/lib/samba/printers browseable = yes guest ok = no read only = yes write list = root, skizz Anyone know what the problem is and how to fix it? In addition to the above, I also get this error: Jun 27 21:56:35 FatController smbd[3571]: [2012/06/27 21:56:35, 0] printing/print_cups.c:1027(cups_job_submit) Jun 27 21:56:35 FatController smbd[3571]: Unable to print file to `Edward' - client-error-not-authorized which I think is more relevant.
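
    A minimal sketch of one way to narrow this down, assuming the CUPS scheduler is rejecting the job submitted by smbd rather than Samba itself failing (the printer name 'Edward' and the user 'skizz' come from the post; everything else here is an assumption, not a confirmed fix):

        # 1. Confirm the Unix account that smbd submits jobs as can print directly:
        su - skizz -c 'lp -d Edward /etc/hostname'

        # 2. If jobs arrive as a guest/unmapped user, forcing a known-good account on the
        #    [printers] share is one thing to try (smb.conf fragment, sketch only):
        [printers]
           comment = All Printers
           path = /var/spool/samba
           printable = yes
           guest ok = no
           force user = skizz

        # 3. Watch the CUPS log while sending a test page from the XP client:
        tail -f /var/log/cups/error_log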

    Read the article

  • Move SQL Server transaction log to another disk

    - by Jim Lahman
    When restoring a database backup, SQL Server by default places the database files in the master database file directory. In this example that location is L:\MSSQL10.CHTL\MSSQL\DATA, as shown by running sp_helpfile, so the restored files for the database CHTL_L2_DB all end up in the same directory. Per SQL Server best practices, the log file should be on its own disk drive so that the database and log file can operate in a sequential manner and perform optimally. The steps to move the log file are as follows: record the location of the database files and the transaction log files; note the future destination of the transaction log file; get exclusive access to the database; detach the database; move the log file to the new location; attach the database; verify the new location of the transaction log.
    Record the location of the database files. To view the current location of the database files, use the system stored procedure sp_helpfile:
        use chtl_l2_db
        go
        sp_helpfile
        go
    Note the future destination of the transaction log file. The transaction log file will be moved to K:\MSSQLLog.
    Get exclusive access to the database. Alter the database access to single_user; if users are still connected to the database, remove them by using the with rollback immediate option. Note: if you had a query window connected to the database when it is placed into single_user mode, you will be presented with a reconnection dialog box.
        alter database chtl_l2_db
        set single_user with rollback immediate
        go
    Detach the database. Now detach the database so that we can use Windows Explorer to move the transaction log file:
        use master
        go
        sp_detach_db 'chtl_l2_db'
        go
    After copying the transaction log file, re-attach the database:
        use master
        go
        sp_attach_db 'chtl_l2_db',
        'L:\MSSQL10.CHTL\MSSQL\DATA\CHTL_L2_DB.MDF',
        'K:\MSSQLLog\CHTL_L2_DB_4.LDF',
        'L:\MSSQL10.CHTL\MSSQL\DATA\CHTL_L2_DB_1.NDF',
        'L:\MSSQL10.CHTL\MSSQL\DATA\CHTL_L2_DB_2.NDF',
        'L:\MSSQL10.CHTL\MSSQL\DATA\CHTL_L2_DB_3.NDF'
        GO
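
    As a hedged alternative to the detach/attach route above, the log file can also be relocated with ALTER DATABASE ... MODIFY FILE; a sketch (the logical file name CHTL_L2_DB_4 is an assumption - check sys.master_files for the real one):

        -- find the logical name and current location of the log file
        USE master
        GO
        SELECT name, physical_name FROM sys.master_files WHERE database_id = DB_ID('chtl_l2_db')
        GO
        -- repoint the catalog entry at the new location
        ALTER DATABASE chtl_l2_db MODIFY FILE (NAME = CHTL_L2_DB_4, FILENAME = 'K:\MSSQLLog\CHTL_L2_DB_4.LDF')
        ALTER DATABASE chtl_l2_db SET OFFLINE WITH ROLLBACK IMMEDIATE
        GO
        -- move the .LDF to K:\MSSQLLog in Windows Explorer, then bring the database back online
        ALTER DATABASE chtl_l2_db SET ONLINE
        GO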

    Read the article

  • Chrome refused to execute this JavaScript file

    - by TestSubject528491
    In the head of my HTML page, I have: <script src="https://raw.github.com/cloudhead/less.js/master/dist/less-1.3.3.js"></script> When I load the page in my browser (Google Chrome v 27.0.1453.116) and enable the developer tools, it says: Refused to execute script from 'https://raw.github.com/cloudhead/less.js/master/dist/less-1.3.3.js' because its MIME type ('text/plain') is not executable, and strict MIME type checking is enabled. Indeed, the script won't run. Why does Chrome think this is a plain text file? It clearly has a .js file extension. Since I'm using HTML5, I omitted the type attribute, so I thought that might be causing the problem. So I added type="text/javascript" to the <script> tag, and got the same result. I even tried type="application/javascript" and still got the same error. Then I tried changing it to type="text/plain" just out of curiosity. The browser did not return an error, but of course the JavaScript did not run either. Finally I thought the periods in the filename might be throwing the browser off. So in my HTML code, I changed all the periods to the URL escape character %2E: <script src="https://raw.github.com/cloudhead/less%2Ejs/master/dist/less-1%2E3%2E3.js"></script> This still did not work. The only thing that truly works (i.e. the browser does not give an error and the JS successfully runs) is if I download the file, upload it to a local directory, and then change the src value to the local file. I'd rather not do this since I'm trying to save space on my own website. How do I get Chrome to recognize that the linked file is actually a JavaScript type?
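
    Since raw.github.com intentionally serves files as text/plain, the usual workaround is to load the library from a host that sends a JavaScript MIME type, or to self-host it; a sketch (the cdnjs path is an assumption - verify the exact URL for the version you need):

        <!-- served with a JavaScript Content-Type, so Chrome will execute it -->
        <script src="//cdnjs.cloudflare.com/ajax/libs/less.js/1.3.3/less.min.js"></script>

        <!-- or self-host the file and make sure your server maps .js correctly (Apache example) -->
        AddType application/javascript .js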

    Read the article

  • Should I, and how do I incorporate microdata into my asp.net website with 47 pages?

    - by Jason Weber
    I have an asp.net (vb) website with 47 pages. The problem is that it's in 10 different languages, although 98% of visitors just use English. I have 5 master pages. I've read Google Webmaster Tools, but I'm still confounded. I'm reading that microdata is the way to go. Does this mean I should put itemtype and itemprop span and div tags in my master pages, or should I do all 47 pages (.resx resource files) separately? The main key phrase I want throughout search results is "machine vision". For instance, the first couple of sentences on my "about.aspx" page are: <span itemprop="name">USS Vision Inc.</span> (USS) is a privately-owned company with headquarters in <span itemprop="locality">Detroit, Michigan, USA</span>. We design, engineer, produce, and integrate special machine vision error-proofing products and <a href="http://www.ussvision.com/services/" target="_self" itemprop="url">services</a> that create lean factories by improving the quality of manufactured products, and by significantly reducing manufacturing costs through advanced automation. Am I doing this right, and if not, how should I do it? Should I use itemprop="url" or other rich snippets for every link on my website? I mean, do I need to add an itemprop to just about everything, or can I just alter my master pages? Any guidance in this regard to help improve my SEO and SERPs would be greatly appreciated!
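
    For reference, a minimal schema.org sketch of the kind of markup that could sit in a master page; itemprop attributes only carry meaning inside an enclosing itemscope, which appears to be missing from the snippet above (Organization and PostalAddress are standard schema.org types; the exact values are illustrative):

        <div itemscope itemtype="http://schema.org/Organization">
          <span itemprop="name">USS Vision Inc.</span>
          <div itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
            <span itemprop="addressLocality">Detroit</span>,
            <span itemprop="addressRegion">Michigan</span>,
            <span itemprop="addressCountry">USA</span>
          </div>
          <a itemprop="url" href="http://www.ussvision.com/services/">services</a>
        </div>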

    Read the article

  • What are some best practices for minimizing code?

    - by CrystalBlue
    While maintaining the sites our development team has created, we have come across include files and plugins that have proven to be very useful to more than one part of our applications. Most of these modules come with two different files, a normal source file and a min file. Seeing that the performance and speed of a page can be increased by minimizing the size of its files, we're looking into doing that to our pages as well. The problem we run into is that a lot of our normal pages (written in classic ASP) are a mix of HTML, ASP, JavaScript, CSS, and include files. Some pages have their JS both in include files and in the page, depending on whether a function is only really used in that page or is used in many other pages. For example, we have a common.js and an ajax.js file; both are used in a lot of pages, but not all of them. There are also functions in a page that don't really make sense to move into one master file. What I have seen a few other people do online is use one master JS file, place all of their JavaScript into it, minify it, gzip it, and only use that on their production server. Again, this would be great, but I don't know if that fully works for our purposes. What I'm looking for is some direction on this. I'm in favor of taking all of our JS, putting it in one include file, and having it included in every page that is hit. However, not every page we have needs every bit of JS. So would it be worth compiling and minifying the files into one master file and including it everywhere, or would it be better to minify all the other files and still include them on a need-to-use basis?
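
    A sketch of the single-bundle approach described above, assuming the UglifyJS command-line tool is available as the minifier (the file names come from the post; whether bundling beats per-page includes still depends on how much of the bundle each page actually uses):

        # concatenate and minify the shared scripts into one production file
        uglifyjs common.js ajax.js -c -m -o site.min.js

        # pre-compress it so the web server can hand out the gzipped copy
        gzip -c site.min.js > site.min.js.gz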

    Read the article

  • Query for server DefaultData & DefaultLog folders

    - by jamiet
    Do you ever need to query for the DefaultData & DefaultLog folders for your SQL Server instance? Well, I just did, and the following script enabled me to do that:
        DECLARE @HkeyLocal NVARCHAR(18), @MSSqlServerRegPath NVARCHAR(31), @InstanceRegPath SYSNAME;
        SELECT @HkeyLocal = N'HKEY_LOCAL_MACHINE'
        SELECT @MSSqlServerRegPath = N'SOFTWARE\Microsoft\MSSQLServer'
        SELECT @InstanceRegPath = @MSSqlServerRegPath + N'\MSSQLServer'
        DECLARE @SmoDefaultFile NVARCHAR(512)
        EXEC MASTER.dbo.xp_instance_regread @HkeyLocal, @InstanceRegPath, N'DefaultData', @SmoDefaultFile OUTPUT
        DECLARE @SmoDefaultLog NVARCHAR(512)
        EXEC MASTER.dbo.xp_instance_regread @HkeyLocal, @InstanceRegPath, N'DefaultLog', @SmoDefaultLog OUTPUT
        SELECT ISNULL(@SmoDefaultFile, N'') AS [DefaultFile], ISNULL(@SmoDefaultLog, N'') AS [DefaultLog]
    I haven’t done any rigorous testing or anything like that; all I can say is…it worked for me (on SQL Server 2012). Use as you see fit. Doubtless this information exists in a multitude of other places but nevertheless I’m putting it here so I know where to find it in the future. Just for fun I thought I’d try this out against SQL Azure (now Windows Azure SQL Database). Unsurprisingly it didn’t work there: Msg 40515, Level 15, State 1, Line 16 Reference to database and/or server name in 'MASTER.dbo.xp_instance_regread' is not supported in this version of SQL Server. @Jamiet
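
    For comparison, on SQL Server 2012 and later the same folders can usually be read without touching the registry via SERVERPROPERTY; a small sketch (these property names were introduced around the 2012 timeframe, so verify they exist on your build):

        SELECT SERVERPROPERTY('InstanceDefaultDataPath') AS [DefaultFile],
               SERVERPROPERTY('InstanceDefaultLogPath')  AS [DefaultLog];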

    Read the article

  • Extended service set help [closed]

    - by Cygnus X
    Sorry, this is going to be long and rambly. So, at home my network is set up as such: I have a "master router" that handles DHCP; from there I have a basic 5-port switch, and coming off that I have a second router that I put in access-point mode so that I could have better wireless coverage in my house. I can't seem to access it through a web browser since the master router kicked in and assigned it an IP address. But I told you all that story so that I could tell you this story. At the company I work for, they are trying to set up wireless cameras. According to the manual, in order to connect these cameras to our network we need to get into the wireless router and associate the router with the cameras, but the problem is the network is set up like my home network. There's a "master router", which is a Windows 2003 server, and then a wireless router in AP mode dishing out wireless access. I can't get into the router (by the way, I'm not the system admin for the company, nor do I know jack squat about Windows Server, nor can I easily get in contact with the system admin; quite the cluster f*k, eh?). I've tried using Wireshark to find the router's IP, dug through ARP tables, tried plugging straight into the router, but I can't seem to get into the damn thing. Any ideas/suggestions would be greatly appreciated.
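
    A minimal sketch of how an access point's address is usually hunted down from any machine on the LAN (the 192.168.1.0/24 range is an assumption - substitute whatever scope the Windows 2003 DHCP server hands out, and match the results against the MAC address printed on the AP's label):

        # sweep the subnet and list everything that answers
        nmap -sn 192.168.1.0/24

        # then check the ARP cache and look for the AP's MAC address
        arp -a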

    Read the article

  • Platinum Club Seminar (Oracle Solaris/MySQL) Report

    - by Urakawa
    (Japanese-language post; most of the original text is garbled in this copy.) A report on the Platinum Club seminar for ORACLE MASTER Platinum holders held on March 4, 2011. The recoverable content covers two sessions: "What's New in Solaris 11 Express", introducing Oracle Solaris 11 Express features such as network virtualization (Crossbow) and ZFS, and "MySQL Performance Tuning", an overview of the one-day MySQL Performance Tuning course, with comparisons drawn between tuning MySQL and tuning Oracle Database.

    Read the article

  • Creating a variable list Pashua, OS X & Bash.

    - by S1syphus
    First of all, for those that don't know pashua is a tool for creating native Aqua dialog windows. An example of what a window config looks like is # pashua_run() # Define what the dialog should be like # Take a look at Pashua's Readme file for more info on the syntax conf=" # Set transparency: 0 is transparent, 1 is opaque *.transparency=0.95 # Set window title *.title = Introducing Pashua # Introductory text tb.type = text tb.default = "HELLO WORLD" tb.height = 276 tb.width = 310 tb.x = 340 tb.y = 44 if [ -e "$icon" ] then # Display Pashua's icon conf="$conf img.type = image img.x = 530 img.y = 255 img.path = $icon" fi if [ -e "$bgimg" ] then # Display background image conf="$conf bg.type = image bg.x = 30 bg.y = 2 bg.path = $bgimg" fi pashua_run "$conf" echo " tb = $tb" The problem is, Pashua can't really get output from stdout, but it can get arguments. Following on from what Dennis Williamson posted here. What ideally it should do is generate an output file based on information from a text file, To executed in pashua_run ore add the pashua_run around the window argument: count=1 while read -r i do echo "AB${count}.type = openbrowser" echo "AB${count}.label = Choose a master playlist file" echo "AB${count}.width=310" echo "AB${count}.tooltip = Blabla filesystem browser" echo "some text with a line from the file: $i" (( count++ )) done < TEST.txt >> long.txt SO the output is AB1.type = openbrowser AB1.label = Choose a master playlist file AB1.width=310 AB1.tooltip = Blabla filesystem browser some text with a line from the file: foo AB2.type = openbrowser AB2.label = Choose a master playlist file AB2.width=310 AB2.tooltip = Blabla filesystem browser some text with a line from the file: bar AB3.type = openbrowser AB3.label = Choose a master playlist file AB3.width=310 AB3.tooltip = Blabla filesystem browser some text with a line from the file: dev AB4.type = openbrowser AB4.label = Choose a master playlist file AB4.width=310 AB4.tooltip = Blabla filesystem random So if there is a clever way to get the output of that and place it into pashua run would be cool, on the fly: I.E load te contents of TEST.txt and generate the place it into pashua_run, I've tried using cat and opening the file... but because it's in Pashua_run it doesn't work, is there a smart way to break out then back in? Or the second way which I was thinking, was create get the output then append it into the middle text file containing the pashua runtime, then execute it, maybe slightly hacky, but I would imagine it will do the job. Any ideas? ++ I know I probably could make my life a lot easier, by doing this in actionscript and cocoa, although at present don't have time for such a learning curve, although I do plan to get round to it.
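
    One way to splice the generated widget definitions into the conf string without a temporary file is to build them first with command substitution and then embed the result, as in this sketch (plain bash; nothing here is Pashua-specific beyond the variable names taken from the post):

        # build the per-line widget definitions from TEST.txt
        ab_block=$(
          count=1
          while read -r i; do
            echo "AB${count}.type = openbrowser"
            echo "AB${count}.label = Choose a master playlist file"
            echo "AB${count}.width = 310"
            echo "AB${count}.tooltip = Blabla filesystem browser ($i)"
            (( count++ ))
          done < TEST.txt
        )

        # then drop the block into the window definition before calling pashua_run
        conf="
        *.title = Introducing Pashua
        $ab_block
        "
        pashua_run "$conf"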

    Read the article

  • github like workflow on private server over ssh

    - by Jesse
    I have an server (available via ssh) on the internet that my friend and I use for working on projects together. We have started using git for source control. Our setup currently is as follows: Friend created repository on server with git init named project.friend.git I cloned project.friend.git on server to project.jesse.git I then cloned project.jesse.git on server to my local machine using git clone jesse@server:/git_repos/project.jesse.git I work on my local machine and commit to the local machine. When I want to push my changes to the project.jesse.git on server I use git push origin master. My friend is working on project.friend.git. When I want to get his changes I do pull jesse@server:/git_repos/project.friend.git. Everything seems to be working fine, however, I am now getting the following error when I do git push origin master: localpc:project.jesse jesse$ git push origin master Counting objects: 100, done. Delta compression using up to 2 threads. Compressing objects: 100% (76/76), done. Writing objects: 100% (76/76), 15.98 KiB, done. Total 76 (delta 50), reused 0 (delta 0) warning: updating the current branch warning: Updating the currently checked out branch may cause confusion, warning: as the index and work tree do not reflect changes that are in HEAD. warning: As a result, you may see the changes you just pushed into it warning: reverted when you run 'git diff' over there, and you may want warning: to run 'git reset --hard' before starting to work to recover. warning: warning: You can set 'receive.denyCurrentBranch' configuration variable to warning: 'refuse' in the remote repository to forbid pushing into its warning: current branch. warning: To allow pushing into the current branch, you can set it to 'ignore'; warning: but this is not recommended unless you arranged to update its work warning: tree to match what you pushed in some other way. warning: warning: To squelch this message, you can set it to 'warn'. warning: warning: Note that the default will change in a future version of git warning: to refuse updating the current branch unless you have the warning: configuration variable set to either 'ignore' or 'warn'. To jesse@server:/git_repos/project.jesse.git c455cb7..e9ec677 master -> master Is this warning anything I need to be worried about? Like I said, everything seems to be working. My friend is able to pull my changes in from my branch. I have the clone on the server so he can access it since he does not have access to my local machine. Is there something that could be done better? Thanks!
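
    The warning appears because project.jesse.git on the server is a non-bare repository with master checked out, and pushing into a checked-out branch leaves its work tree out of date. A sketch of the usual cure - push to a bare repository instead (paths taken from the post):

        # on the server: create a bare copy to act as the shared central repository
        git clone --bare /git_repos/project.jesse.git /git_repos/project.jesse.bare.git

        # on the local machine: point origin at the bare repository and push as before
        git remote set-url origin jesse@server:/git_repos/project.jesse.bare.git
        git push origin master

        # optionally, forbid future pushes into the checked-out branch of the old repository
        git config receive.denyCurrentBranch refuse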

    Read the article

  • Adding objects to an NSMutableArray, order seems odd when parsing from an XML file

    - by diatrevolo
    Hello: I am parsing an XML file for two elements: "title" and "noType". Once these are parsed, I am adding them to an object called aMaster, an instance of my own Master class that contains NSString variables. I am then adding these instances to an NSMutableArray on a singleton, in order to call them elsewhere in the program. The problem is that when I call them, they don't seem to be on the same NSMutableArray index... each index contains either the title OR the noType element, when it should be both... can anyone see what I may be doing wrong? Below is the code for the parser. Thanks so much!! #import "XMLParser.h" #import "Values.h" #import "Listing.h" #import "Master.h" @implementation XMLParser @synthesize sharedSingleton, aMaster; - (XMLParser *) initXMLParser { [super init]; sharedSingleton = [Values sharedValues]; aMaster = [[Master init] alloc]; return self; } - (void)parser:(NSXMLParser *)parser didStartElement:(NSString *)elementName namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qualifiedName attributes:(NSDictionary *)attributeDict { aMaster = [[Master alloc] init]; //Extract the attribute here. if ([elementName isEqualToString:@"intro"]) { aMaster.intro = [attributeDict objectForKey:@"enabled"]; } else if ([elementName isEqualToString:@"item"]) { aMaster.item_type = [attributeDict objectForKey:@"type"]; //NSLog(@"Did find item with type %@", [attributeDict objectForKey:@"type"]); //NSLog(@"Reading id value :%@", aMaster.item_type); } else { //NSLog(@"No known elements"); } //NSLog(@"Processing Element: %@", elementName); //HERE } - (void)parser:(NSXMLParser *)parser foundCharacters:(NSString *)string { if(!currentElementValue) currentElementValue = [[NSMutableString alloc] initWithString:string]; else { [currentElementValue appendString:string];//[tempString stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]]]; CFStringTrimWhitespace((CFMutableStringRef)currentElementValue); } } - (void)parser:(NSXMLParser *)parser didEndElement:(NSString *)elementName namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qName { if ([elementName isEqualToString:@"item"]) { [sharedSingleton.master addObject:aMaster]; NSLog(@"Added %@ and %@ to the shared singleton", aMaster.title, aMaster.noType); //Only having one at a time added... don't know why [aMaster release]; aMaster = nil; } else if ([elementName isEqualToString:@"title"]) { [aMaster setValue:currentElementValue forKey:@"title"]; } else if ([elementName isEqualToString:@"noType"]) { [aMaster setValue:currentElementValue forKey:@"noType"]; //NSLog(@"%@ should load into the singleton", aMaster.noType); } NSLog(@"delimiter"); NSLog(@"%@ should load into the singleton", aMaster.title); NSLog(@"%@ should load into the singleton", aMaster.noType); [currentElementValue release]; currentElementValue = nil; } - (void) dealloc { [aMaster release]; [currentElementValue release]; [super dealloc]; } @end
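
    A hedged reading of the symptom: aMaster is re-allocated in didStartElement for every element, so by the time didEndElement fires for "title" and "noType" they have been set on different Master instances (the copy created in initXMLParser also has alloc/init reversed). A sketch of the usual pattern - create the object only when an item opens and reuse it for its child elements:

        - (void)parser:(NSXMLParser *)parser didStartElement:(NSString *)elementName
           namespaceURI:(NSString *)namespaceURI qualifiedName:(NSString *)qualifiedName
             attributes:(NSDictionary *)attributeDict {
            if ([elementName isEqualToString:@"item"]) {
                aMaster = [[Master alloc] init];   // one instance per item, shared by its children
                aMaster.item_type = [attributeDict objectForKey:@"type"];
            }
            // "title" and "noType" need no allocation here; their text is collected
            // in foundCharacters and assigned in didEndElement as before.
        }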

    Read the article

  • SQL SERVER – 2012 – All Download Links in Single Page – SQL Server 2012

    - by pinaldave
    SQL Server 2012 RTM is just announced and recently I wrote about all the SQL Server 2012 Certification on single page. As a feedback, I received suggestions to have a single page where everything about SQL Server 2012 is listed. I will keep this page updated as new updates are announced. Microsoft SQL Server 2012 Evaluation Microsoft SQL Server 2012 enables a cloud-ready information platform that will help organizations unlock breakthrough insights across the organization. Microsoft SQL Server 2012 Express Microsoft SQL Server 2012 Express is a powerful and reliable free data management system that delivers a rich and reliable data store for lightweight Web Sites and desktop applications. Microsoft SQL Server 2012 Feature Pack The Microsoft SQL Server 2012 Feature Pack is a collection of stand-alone packages which provide additional value for Microsoft SQL Server 2012. Microsoft SQL Server 2012 Report Builder Report Builder provides a productive report-authoring environment for IT professionals and power users. It supports the full capabilities of SQL Server 2012 Reporting Services. Microsoft SQL Server 2012 Master Data Services Add-in For Microsoft Excel The Master Data Services Add-in for Excel gives multiple users the ability to update master data in a familiar tool without compromising the data’s integrity in Master Data Services. Microsoft SQL Server 2012 Performance Dashboard Reports The SQL Server 2012 Performance Dashboard Reports are Reporting Services report files designed to be used with the Custom Reports feature of SQL Server Management Studio. Microsoft SQL Server 2012 PowerPivot for Microsoft Excel® 2010 Microsoft PowerPivot for Microsoft Excel 2010 provides ground-breaking technology; fast manipulation of large data sets, streamlined integration of data, and the ability to effortlessly share your analysis through Microsoft SharePoint. Microsoft SQL Server 2012 Reporting Services Add-in for Microsoft SharePoint Technologies 2010 The SQL Server 2012 Reporting Services Add-in for Microsoft SharePoint 2010 technologies allows you to integrate your reporting environment with the collaborative SharePoint 2010 experience. Microsoft SQL Server 2012 Semantic Language Statistics The Semantic Language Statistics Database is a required component for the Statistical Semantic Search feature in Microsoft SQL Server 2012 Semantic Language Statistics. Microsoft ®SQL Server 2012 FileStream Driver – Windows Logo Certification Catalog file for Microsoft SQL Server 2012 FileStream Driver that is certified for WindowsServer 2008 R2. It meets Microsoft standards for compatibility and recommended practices with the Windows Server 2008 R2 operating systems. Microsoft SQL Server StreamInsight 2.0 Microsoft StreamInsight is Microsoft’s Complex Event Processing technology to help businesses create event-driven applications and derive better insights by correlating event streams from multiple sources with near-zero latency. Microsoft JDBC Driver 4.0 for SQL Server Download the Microsoft JDBC Driver 4.0 for SQL Server, a Type 4 JDBC driver that provides database connectivity through the standard JDBC application program interfaces (APIs) available in Java Platform, Enterprise Edition 5 and 6. Data Quality Services Performance Best Practices Guide This guide focuses on a set of best practices for optimizing performance of Data Quality Services (DQS). 
Microsoft Drivers 3.0 for SQL Server for PHP The Microsoft Drivers 3.0 for SQL Server for PHP provide connectivity to Microsoft SQLServer from PHP applications. Product Documentation for Microsoft SQL Server 2012 for firewall and proxy restricted environments The Microsoft SQL Server 2012 setup installs only the Help Viewer…install any documentation. All of the SQL Server documentation is available online. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • multiple puppet masters

    - by Oli
    I would like to set up an additional puppet master but have the CA server handled by only 1 puppet master. I have set this up as per the documentation here: http://docs.puppetlabs.com/guides/scaling_multiple_masters.html I have configured my second puppet master as follows: [main] ... ca = false ca_server = puppet-master1.test.net I am using passenger so I am a bit confused how the virtual-host.conf file should look for my second puppet-master2.test.net. Here is mine (updated as per Shane Maddens answer): LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/apache2/mod_passenger.so PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-3.0.18 PassengerRuby /usr/bin/ruby Listen 8140 <VirtualHost *:8140> ProxyPassMatch ^/([^/]+/certificate.*)$ https://puppet-master1.test.net:8140/$1 SSLEngine on SSLProtocol -ALL +SSLv3 +TLSv1 SSLCipherSuite ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:-LOW:-SSLv2:-EXP SSLCertificateFile /var/lib/puppet/ssl/certs/puppet-master2.test.net.pem SSLCertificateKeyFile /var/lib/puppet/ssl/private_keys/puppet-master2.test.net.pem #SSLCertificateChainFile /var/lib/puppet/ssl/ca/ca_crt.pem #SSLCACertificateFile /var/lib/puppet/ssl/ca/ca_crt.pem # If Apache complains about invalid signatures on the CRL, you can try disabling # CRL checking by commenting the next line, but this is not recommended. #SSLCARevocationFile /var/lib/puppet/ssl/ca/ca_crl.pem SSLVerifyClient optional SSLVerifyDepth 1 # The `ExportCertData` option is needed for agent certificate expiration warnings SSLOptions +StdEnvVars +ExportCertData # This header needs to be set if using a loadbalancer or proxy RequestHeader unset X-Forwarded-For RequestHeader set X-SSL-Subject %{SSL_CLIENT_S_DN}e RequestHeader set X-Client-DN %{SSL_CLIENT_S_DN}e RequestHeader set X-Client-Verify %{SSL_CLIENT_VERIFY}e DocumentRoot /etc/puppet/rack/public/ RackBaseURI / <Directory /etc/puppet/rack/> Options None AllowOverride None Order allow,deny allow from all </Directory> </VirtualHost> I have commented out the #SSLCertificateChainFile, #SSLCACertificateFile & #SSLCARevocationFile - this is not a CA server so not sure I need this. How would I get passenger to work with these? I would like to use ProxyPassMatch which I have configured as per the documentation. I don't want to specify a ca server in every puppet.conf file. I am getting this error when trying to get create a cert from a puppet client pointing to the second puppet master server (puppet-master2.test.net): [root@puppet-client2 ~]# puppet agent --test Error: Could not request certificate: Could not intern from s: nested asn1 error Exiting; failed to retrieve certificate and waitforcert is disabled On the puppet client I have this [main] server = puppet-master2.test.net What have I missed? -- update Here is a new virtual host file on my secondary puppet master. Is this correct? I have SSL turned off? 
LoadModule passenger_module /usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/apache2/mod_passenger.so PassengerRoot /usr/lib/ruby/gems/1.8/gems/passenger-3.0.18 PassengerRuby /usr/bin/ruby # you probably want to tune these settings PassengerHighPerformance on PassengerMaxPoolSize 12 PassengerPoolIdleTime 1500 # PassengerMaxRequests 1000 PassengerStatThrottleRate 120 RackAutoDetect Off RailsAutoDetect Off Listen 8140 <VirtualHost *:8140> SSLEngine off ProxyPassMatch ^/([^/]+/certificate.*)$ https://puppet-master1.test.net:8140/$1 # Obtain Authentication Information from Client Request Headers SetEnvIf X-Client-Verify "(.*)" SSL_CLIENT_VERIFY=$1 SetEnvIf X-SSL-Client-DN "(.*)" SSL_CLIENT_S_DN=$1 DocumentRoot /etc/puppet/rack/public/ RackBaseURI / <Directory /etc/puppet/rack/> Options None AllowOverride None Order allow,deny allow from all </Directory> </VirtualHost> Cheers, Oli
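
    The "nested asn1 error" usually means the agent received something other than a certificate (often an HTML error page) from the proxied CA request, so the first thing to check is that Apache is actually allowed to speak SSL to puppet-master1 when it forwards certificate traffic; a sketch of the relevant lines (an assumption, not a verified configuration):

        # inside the *:8140 virtual host on puppet-master2
        SSLProxyEngine on
        ProxyPassMatch ^/([^/]+/certificate.*)$ https://puppet-master1.test.net:8140/$1

        # quick test from an agent: this should return a PEM certificate, not HTML
        curl -k https://puppet-master2.test.net:8140/production/certificate/ca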

    Read the article

  • CodePlex Daily Summary for Saturday, March 12, 2011

    CodePlex Daily Summary for Saturday, March 12, 2011Popular ReleasesXML Explorer: XML Explorer 4.0.2: Changes in 4.0: This release is built on the Microsoft .NET Framework 4 Client Profile. Changed XSD validation to use the schema specified by the XML documents. Added a VS style Error List, double-clicking an error takes you to the offending node. XPathNavigator schema validation finally gives SourceObject (was fixed in .NET 4). Added Namespaces window and better support for XPath expressions in documents with a default namespace. Added ExpandAll and CollapseAll toolbar buttons (in a...Mobile Device Detection and Redirection: 1.0.0.0: Stable Release 51 Degrees.mobi Foundation has been in beta for some time now and has been used on thousands of websites worldwide. We’re now highly confident in the product and have designated this release as stable. We recommend all users update to this version. New Capabilities MappingsTo improve compatibility with other libraries some new .NET capabilities are now populated with wurfl data: “maximumRenderedPageSize” populated with “max_deck_size” “rendersBreaksAfterWmlAnchor” populated ...ASP.NET MVC Project Awesome, jQuery Ajax helpers (controls): 1.7.3: A rich set of helpers (controls) that you can use to build highly responsive and interactive Ajax-enabled Web applications. These helpers include Autocomplete, AjaxDropdown, Lookup, Confirm Dialog, Popup Form, Popup and Pager added interactive search for the lookupWPF Inspector: WPF Inspector 0.9.7: New Features in Version 0.9.7 - Support for .NET 3.5 and 4.0 - Multi-inspection of the same process - Property-Filtering for multiple keywords e.g. "Height Width" - Smart Element Selection - Select Controls by clicking CTRL, - Select Template-Parts by clicking CTRL+SHIFT - Possibility to hide the element adorner (over the context menu on the visual tree) - Many bugfixes??????????: All-In-One Code Framework ??? 2011-03-10: http://download.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1codechs&DownloadId=216140 ??,????。??????????All-In-One Code Framework ???,??20?Sample!!????,?????。http://i3.codeplex.com/Project/Download/FileDownload.aspx?ProjectName=1code&DownloadId=128165 ASP.NET ??: CSASPNETBingMaps VBASPNETRemoteUploadAndDownload CS/VBASPNETSerializeJsonString CSASPNETIPtoLocation CSASPNETExcelLikeGridView ....... Winform??: FTPDownload FTPUpload MultiThreadedWebDownloader...Rawr: Rawr 4.1.0: Rawr is now web-based. The link to use Rawr4 is: http://elitistjerks.com/rawr.phpThis is the Cataclysm Release. More details can be found at the following link http://rawr.codeplex.com/Thread/View.aspx?ThreadId=237262 As of the 4.0.16 release, you can now also begin using the new Downloadable WPF version of Rawr!This is a Release of the WPF version, most of the general issues have been resolved. If you have a problem, please follow the Posting Guidelines and put it into the Issue Tracker. Whe...PHP Manager for IIS: PHP Manager 1.1.2 for IIS 7: This is a localization release of PHP Manager for IIS 7. It contains all the functionality available in 56962 plus a few bug fixes (see change list for more details). 
Most importantly this release is translated into five languages: German - the translation is provided by Christian Graefe Dutch - the translation is provided by Harrie Verveer Turkish - the translation is provided by Yusuf Oztürk Japanese - the translation is provided by Kenichi Wakasa Russian - the translation is provid...TweetSharp: TweetSharp v2.0.0: Documentation for this release may be found at http://tweetsharp.codeplex.com/wikipage?title=UserGuide&referringTitle=Documentation. Beta ChangesAdded user streams support Serialization is not attempted for Twitter 5xx errors Fixes based on feedback Third Party Library VersionsHammock v1.2.0: http://hammock.codeplex.com Json.NET 4.0 Release 1: http://json.codeplex.comMicrosoft All-In-One Code Framework - a centralized code sample library: Visual Studio 2008 Code Samples 2011-03-09: Code samples for Visual Studio 2008Office Web.UI: Version 2.4: After having lost all modifications done for 2.3. I finally did it again... Have a look at http://www.officewebui.com/change-log Also, the documentation continues to grow... http://www.officewebui.com/category/kb ThanksmyCollections: Version 1.3: New in version 1.3 : Added Editor management for Books Added Amazon API for Books Us, Fr, De Added Amazon Us, Fr, De for Movies Added The MovieDB for Fr and De Added Author for Books Added Editor and Platform for Games Added Amazon Us, De for Games Added Studio for XXX Added Background for XXX Bug fixing with Softonic API Bug fixing with IMDB UI improvement Removed GraceNote Added Amazon Us,Fr, De for Series Added TVDB Fr and De for Series Added Tracks for Musi...Facebook Graph Toolkit: Facebook Graph Toolkit 1.1: Version 1.1 (8 Mar 2011)new Dialog class for redirecting users to Facebook dialogs new Async publishing methods new Check for Extended Permissions option fixed bug: inappropiate condition of redirecting to login in Api class fixed bug: IframeRedirect method not workingpatterns & practices : Composite Services: Composite Services Guidance - CTP2: Overview The Composite Services guidance (codename Reykjavik) provides best practices and capabilities for applying industry-known SOA design patterns when building robust, connected, service-oriented composite enterprise applications. These capabilities are implemented as a set of reusable components for analytic tracing, service virtualization, metadata centralization and versioning, and policy centralization as well as exception management, included in this release. Changes in this CTP ...Python Tools for Visual Studio: 1.0 Beta 1: Beta 1You can't install IronPython Tools for Visual Studio side-by-side with Python Tools for Visual Studio. A race condition sometimes causes local MPI debugging to miss breakpoints. When MPI jobs on a cluster fail they don’t get cleaned up correctly, which can cause debugging to stall because the associated MPI job is stuck in the queue. The "Threads" view has a race condition which can cause it not to display properly at times. VS2010 shortcuts that are pinned to the taskbar are so...DotNetAge -a lightweight Mvc jQuery CMS: DotNetAge 2: What is new in DotNetAge 2.0 ? Completely update DJME to DJME2, enhance user experience ,more beautiful and more interactively visit DJME project home to lean more about DJME http://www.dotnetage.com/sites/home/djme.html A new widget engine has came! Faster and easiler. Runtime performance enhanced. SEO enhanced. UI Designer enhanced. A new web resources explorer. Page manager enhanced. 
BlogML supports added that allows you import/export your blog data to/from dotnetage publishi...Kooboo CMS: Kooboo CMS 3.0 Beta: Files in this downloadkooboo_CMS.zip: The kooboo application files Content_DBProvider.zip: Additional content database implementation of MSSQL,SQLCE, RavenDB and MongoDB. Default is XML based database. To use them, copy the related dlls into web root bin folder and remove old content provider dlls. Content provider has the name like "Kooboo.CMS.Content.Persistence.SQLServer.dll" View_Engines.zip: Supports of Razor, webform and NVelocity view engine. Copy the dlls into web root bin folder t...IronPython: 2.7 Release Candidate 2: On behalf of the IronPython team, I am pleased to announce IronPython 2.7 Release Candidate 2. The releases contains a few minor bug fixes, including a working webbrowser module. Please see the release notes for 61395 for what was fixed in previous releases.LINQ to Twitter: LINQ to Twitter Beta v2.0.20: Mono 2.8, Silverlight, OAuth, 100% Twitter API coverage, streaming, extensibility via Raw Queries, and added documentation.Minemapper: Minemapper v0.1.6: Once again supports biomes, thanks to an updated Minecraft Biome Extractor, which added support for the new Minecraft beta v1.3 map format. Updated mcmap to support new biome format.Sandcastle Help File Builder: SHFB v1.9.3.0 Release: This release supports the Sandcastle June 2010 Release (v2.6.10621.1). It includes full support for generating, installing, and removing MS Help Viewer files. This new release is compiled under .NET 4.0, supports Visual Studio 2010 solutions and projects as documentation sources, and adds support for projects targeting the Silverlight Framework. This release uses the Sandcastle Guided Installation package used by Sandcastle Styles. Download and extract to a folder and then run SandcastleI...New Projects8428-3127-6884-4328: No summary providedAngua R.P.G. Engine: The Angua R.P.G. Engine is a C# open-source system for playing turn-based Role-Playing Games over the Internet or TCP/IP based networks. AppServices - SOA for greenhorns: AppServices is a simple service container. It can inject services to any public or private property or field with declared ServiceAttribute. It supports Windows Forms, WPF and ASP.NET. Silverlight is not tested yet, but it should do also. AppServices simplifies service referenceAxHibernate: AxHibernate aims to produce a POCO-mapped style domain-library for Dynamics AX. I.e., an AX business-connector-ignorant ORM domain library for Dynamics Ax.Business localizer, translation: Translation and localization software used in your business. Help to localize resource file on top of .NET Microsoft stack.cloudsync: cloudsyncDaily Deals .Net - Groupon Clone: Daily Deals .Net is a "daily deals" application based on the .net technology. Utilizing MVC 3 to provide a Groupon-like experience. This project needs your support. Please sign up today to help bring a viable daily deals project to the .Net open source world. 
ENGUILABE.INFO - Descubre un nuevo mundo: codigo reutilizable y aprendibleeuler 40: euler 40euler 42: euler 42gibon: gibbon blog+portfolio+communitiHoley Ship: Battleship project.Improved CAPTCHA Control ASP.NET 2.0 C#: Improved C# port of Jeff Atwood's custom CAPTCHA control (image verification against robots).IslandUnit: IslandUnit helps you isolate the dependencies of your system with an fluent interface that makes easier to produce mocks and stubs with existing frameworks (Moq, NMock, NBuilder, AutoPoco) and put the isolated dependencies in IoC containers, leaving your system highly testable.KidsChores: Helps keep tracks of to do list for kids, allowing them to earn points, allowances, etc.Local Copy SharePoint Items: The "Local Copy SharePoint Items" is a PowerShell (v1) written script to grab all documents out of libraries/lists from a given SharePoint 2007 site. Get yourself exited to run this script as folliowing: .\LCSPI.ps1 -url http://moss2007 -bkploc c:\destination mkfashio: mkfashion MyTest: MyTestNaja, a Cobra IDE: Naja (a.k.a. Cobra IDE) is an Integrated Development Environment for the Cobra programming language (www.cobra-language.com). NDasm: Visual Studio add-in that displays IL code for any managed method in your .Net solution. Helpful for studying .Net platform. For example now you can easily see what happens when you are declaring delegate type or how your cool lambda expression actually looks like. olympicgameslondon: The aim of the Olympicgameslondon project is to provide a platform where sports professionals and athletes can broadcast themselves, individuals/supporters, Londoners, visitors etc can post views, share information and find interesting news about the London 2012 Olympic games.Populate SharePoint Form Fields from QueryString via Javascript: This project makes it possible to pre-fill SharePoint Edit form fields using JavaScript and field values passed via querystring - without server-side deployment.Project Server: Xin chào t?t c? các thành viên c?a team th?c t?p t?t nghi?p c?a công ty Toàn C?u Th?nh. Ðây là server du?c dùng d? giúp d? m?i ngu?i ch?nh s?a project c?a m?t m?t cách d? dàng nh?t trong quá trình làm vi?c nhóm. Support: ngtrongtri@yahoo.com.vnPure Midi: Pure Midi - Handling midi communication in .NET with full sequencing support and arranger like musical styles playing.Relative Time: Render a TimeSpan object as something that's consumable by humans like "about 9 minutes ago" or "yesterday".RPG Character Creation: RPG Character Creation makes it easier to create and maintain characters using the Avalon 2.0 ruleset. It is developed in C#.SearchBox for WPF: Customizable search box for WPF with auto-complete capabilities.Sebro: Sebro is a freelance-like platform for the SEO related market. The main feature of the platform is an automated tracking and statistics gathering of work and strongly formalized criteria of its completion or failure.ShoppingApp2: Project will cover basics of creating ASP.NET application. Aim of project is creating application which will help users, via web, to manage their home budget. Socket Redirection for Terminal Services: Socket Redirection for Terminal Services allows to connect to the Internet on a machine, which does not has access to I-net, using another machine in the network, which has that access. 
Sort Calculator: it is a c# sort calculator that generates randomly array numbers and implements several sorting algorithms.SpSource 2010: Port of SPSource to work with VS 2010 Tools for SharePoint 2010SQL Script Executer: SQL Executer makes it easy to deploy, execute multiple scripts on SQL Server as you could do in VS 2008. In Visual Studio 2008's Database Project, one had an option to select multiple sql files and Run them on choosen DB. Visual Studio 2010, doesn't come with this functionality. Test By Wire: Test by wire is a unit test framework, which handles automatic setup and orchestration of test-target and dependencies. In addition Test By Wire features an automatic mocking feature, that is interfaces with pure BDD style syntax. Time Manager System: The TMS (Time Manager System) allows you to plan, organize and schedule your daily activities. The TMS is based on the Pomodoro Technique that improves your productivity.Windows Azure Service Instances Auto Scaling: Windows Azure Service Instances Auto Scaling is a way for dynamically scaling-up and scaling-down the instances number of a running hosted service. In this version this management is done based on a time schedule by an Azure Worker Role.

    Read the article

  • SQL Server Transaction Marks: Restoring multiple databases to a common relative point

    - by Mladen Prajdic
    We’re all familiar with the ability to restore a database to point in time using the RESTORE WITH STOPAT statement. But what if we have multiple databases that are accessed from one application or are modifying each other? And over multiple instances? And all databases have different workloads? And we want to restore all of the databases to some known common relative point? The catch here is that this common relative point isn’t the same point in time for all databases. This common relative point in time might be now in DB1, now-1 hour in DB2 and yesterday in DB3. And we don’t know the exact times. Let me introduce you to Transaction Marks. When we run a marked transaction using the WITH MARK option a flag is set in the transaction log and a row is added to msdb..logmarkhistory table. When restoring a transaction log backup we can restore to either before or after that marked transaction. The best thing is that we don’t even need to have one database modifying another database. All we have to do is use a marked transaction with the same name in different database. Let’s see how this works with an example. The code comments say what’s going on. USE master GOCREATE DATABASE TestTxMark1GOUSE TestTxMark1GOCREATE TABLE TestTable1( ID INT, VALUE UNIQUEIDENTIFIER) -- insert some data into the table so we can have a starting pointINSERT INTO TestTable1SELECT ROW_NUMBER() OVER(ORDER BY number) AS RN, NULLFROM master..spt_valuesORDER BY RNSELECT *FROM TestTable1GO-- TAKE A FULL BACKUP of the databseBACKUP DATABASE TestTxMark1 TO DISK = 'c:\TestTxMark1.bak'GO USE master GOCREATE DATABASE TestTxMark2GOUSE TestTxMark2GOCREATE TABLE TestTable2( ID INT, VALUE UNIQUEIDENTIFIER)-- insert some data into the table so we can have a starting pointINSERT INTO TestTable2SELECT ROW_NUMBER() OVER(ORDER BY number) AS RN, NEWID()FROM master..spt_valuesORDER BY RNSELECT *FROM TestTable2GO-- TAKE A FULL BACKUP of our databseBACKUP DATABASE TestTxMark2 TO DISK = 'c:\TestTxMark2.bak'GO -- start a marked transaction that modifies both databasesBEGIN TRAN TxDb WITH MARK -- update values from NULL to random value UPDATE TestTable1 SET VALUE = NEWID(); -- update first 100 values from random value -- to NULL in different DB UPDATE TestTxMark2.dbo.TestTable2 SET VALUE = NULL WHERE ID <= 100;COMMITGO     -- some time goes by here -- with various database activity... -- We see two entries for marks in each database. 
-- This is just informational and has no bearing on the restore itself.SELECT * FROM msdb..logmarkhistory USE masterGO-- create a log backup to restore to mark pointBACKUP LOG TestTxMark1 TO DISK = 'c:\TestTxMark1.trn'GO-- drop the database so we can restore it backDROP DATABASE TestTxMark1GO USE masterGO-- create a log backup to restore to mark pointBACKUP LOG TestTxMark2 TO DISK = 'c:\TestTxMark2.trn'GO-- drop the database so we can restore it backDROP DATABASE TestTxMark2GO -- RESTORE THE DATABASE BACK BEFORE OUR TRANSACTION-- restore the full backup RESTORE DATABASE TestTxMark1 FROM DISK = 'c:\TestTxMark1.bak' WITH NORECOVERY;-- restore the log backup to the transaction markRESTORE LOG TestTxMark1 FROM DISK = 'c:\TestTxMark1.trn' WITH RECOVERY, -- recover to state before the transaction STOPBEFOREMARK = 'TxDb'; -- recover to state after the transaction -- STOPATMARK = 'TxDb';GO -- RESTORE THE DATABASE BACK BEFORE OUR TRANSACTION-- restore the full backup RESTORE DATABASE TestTxMark2 FROM DISK = 'c:\TestTxMark2.bak' WITH NORECOVERY;-- restore the log backup to the transaction markRESTORE LOG TestTxMark2 FROM DISK = 'c:\TestTxMark2.trn' WITH RECOVERY, -- recover to state before the transaction STOPBEFOREMARK = 'TxDb'; -- recover to state after the transaction -- STOPATMARK = 'TxDb';GO USE TestTxMark1-- we restored to time before the transaction -- so we have NULL values in our tableSELECT * FROM TestTable1 USE TestTxMark2-- we restored to time before the transaction -- so we DON'T have NULL values in our tableSELECT * FROM TestTable2   Transaction marks can be used like a crude sync mechanism for cross database operations. With them we can mark our databases with a common “restore to” point so we know we have a valid state between all databases to restore to.
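
    When deciding which mark to restore to, the marks recorded across all databases, along with their LSNs and timestamps, can be listed from msdb, as in this small sketch:

        SELECT database_name, mark_name, description, user_name, lsn, mark_time
        FROM   msdb.dbo.logmarkhistory
        ORDER  BY mark_time DESC;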

    Read the article

  • ArchBeat Link-o-Rama for 2012-08-30

    - by Bob Rhubart
    Next Generation Mobile Clients for Oracle Applications & the role of Oracle Fusion Middleware | Manish Palaparthy Manish Palaparthy examines some of Oracle's mobile applications, and takes a look at the underlying technology. Master Data Management: A Foundation for Big Data Analysis | Manouj Tahiliani "Businesses that have embraced MDM to get a single, enriched and unified view of Master data by resolving semantic discrepancies and augmenting the explicit master data information from within the enterprise with implicit data from outside the enterprise like social profiles will have a leg up in embracing Big Data solutions. This is especially true for large and medium-sized businesses in industries like Retail, Communications, Financial Services, etc that would find it very challenging to get comprehensive analytical coverage and derive long-term success without resolving the limitations of the heterogeneous topology that leads to disparate, fragmented and incomplete master data." — Manouj Tahiliani Architect Day: Boston - Agenda Update Here's the latest updated information on the session schedule and content for Oracle Technology Network Architect Day in Boston, MA on September 12, 2012. Registration is open, but seating is limited. OTN Architect Day: Boston is being held on Wednesday September 12, 2012, 8:00 a.m. – 5:00 p.m., at the Boston Marriott Burlington, One Burlington Mall Road, Burlington, MA 01803. Integrating Coherence & Java EE 6 Applications using ActiveCache | Ricardo Ferreira The seamless integration between Oracle Coherence and Oracle WebLogic Server "provides a comprehensive environment to develop applications without the complexity of extra Java code to manage cache as a dependency," explains Ricardo Ferreira, "since Oracle provides a DI (Dependency Injection) mechanism for Coherence, the same DI mechanism available in standard Java EE applications. This feature is called ActiveCache." Ricardo shows you how to configure ActiveCache in WebLogic and your Java EE application. Cloud Infrastructure has a new standard from the DMTF "Unlike a de facto standard where typically one vendor has change control over the interface, and everyone else has to reverse engineer the inner workings of it, [Cloud Infrastructure Management Interface (CIMI)] is a de jure standard that is under change control of a standards body. One reason the standard took two years to create is that we factored in use cases, requirements and contributed APIs from multiple vendors. These vendors have products shipping today and as a result CIMI has a strong foundation in real world experience." Oracle GoldenGate 11g Release Launch Webcast- September 12 The new release of Oracle GoldenGate 11g is now available for major databases and platforms. Register for this webcast and live Q&A with product experts to learn about the solution's new features. September 12, 2012. 8:00am AM and 10:00AM PT. Speakers: Doug Reid (Director, Product Management, Oracle GoldenGate), Irem Radzik (Director, Product Marketing, Oracle Data Integration Products) Thought for the Day "[When] asking skilled architects…what they do when confronted with highly complex problems… [they] would most likely answer, 'Just use Common Sense.' [A] better expression than 'common sense' is 'contextual sense'—a knowledge of what is reasonable within a given context. 
Practicing architects through eduction, experience and examples accumulate a considerable body of contextual sense by the time they're entrusted with solving a system-level problem…" — Eberhardt Rechtin (January 16, 1926 – April 14, 2006) Source: SoftwareQuotes.com

    Read the article

  • Bonding: works only for download

    - by Crazy_Bash
    I would like to set up bonding with 4 links in mode 4, but only downloading/receiving is spread across the bond; for transmitting, the system always chooses one link. ifconfig:
        bond0 Link encap:Ethernet HWaddr 90:E2:BA:0F:76:B4 inet addr:ip Bcast:ip Mask:255.255.255.248 inet6 addr: fe80::92e2:baff:fe0f:76b4/64 Scope:Link UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1 RX packets:239187413 errors:0 dropped:10944 overruns:0 frame:0 TX packets:536902370 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:14688536197 (13.6 GiB) TX bytes:799521192901 (744.6 GiB)
        eth2 Link encap:Ethernet HWaddr 90:E2:BA:0F:76:B4 UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1 RX packets:54969488 errors:0 dropped:0 overruns:0 frame:0 TX packets:2537 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:3374778591 (3.1 GiB) TX bytes:314290 (306.9 KiB)
        eth3 Link encap:Ethernet HWaddr 90:E2:BA:0F:76:B4 UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1 RX packets:64935805 errors:0 dropped:1 overruns:0 frame:0 TX packets:2532 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:3993499746 (3.7 GiB) TX bytes:313968 (306.6 KiB)
        eth4 Link encap:Ethernet HWaddr 90:E2:BA:0F:76:B4 UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1 RX packets:57352105 errors:0 dropped:2 overruns:0 frame:0 TX packets:536894778 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:3524236530 (3.2 GiB) TX bytes:799520265627 (744.6 GiB)
        eth5 Link encap:Ethernet HWaddr 90:E2:BA:0F:76:B4 UP BROADCAST RUNNING SLAVE MULTICAST MTU:1500 Metric:1 RX packets:61930025 errors:0 dropped:3 overruns:0 frame:0 TX packets:2540 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:3796021948 (3.5 GiB) TX bytes:314274 (306.9 KiB)
        lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:62 errors:0 dropped:0 overruns:0 frame:0 TX packets:62 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:5320 (5.1 KiB) TX bytes:5320 (5.1 KiB)
    These are my configs:
        DEVICE="eth2" BOOTPROTO="none" MASTER=bond0 SLAVE=yes USERCTL=no NM_CONTROLLED="no" ONBOOT="yes"
        DEVICE="eth3" BOOTPROTO="none" MASTER=bond0 SLAVE=yes USERCTL=no NM_CONTROLLED="no" ONBOOT="yes"
        DEVICE="eth4" BOOTPROTO="none" MASTER=bond0 SLAVE=yes USERCTL=no NM_CONTROLLED="no" ONBOOT="yes"
        DEVICE="eth5" BOOTPROTO="none" MASTER=bond0 SLAVE=yes USERCTL=no NM_CONTROLLED="no" ONBOOT="yes"
        DEVICE=bond0 IPADDR=<ip> BROADCAST=<ip> NETWORK=<ip> GATEWAY=<ip> NETMASK=<ip> USERCTL=no BOOTPROTO=none ONBOOT=yes NM_CONTROLLED=no
    cat /proc/net/bonding/bond0:
        Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
        Bonding Mode: IEEE 802.3ad Dynamic link aggregation Transmit Hash Policy: layer2 (0) MII Status: up MII Polling Interval (ms): 100 Up Delay (ms): 0 Down Delay (ms): 0
        802.3ad info LACP rate: slow Aggregator selection policy (ad_select): stable Active Aggregator Info: Aggregator ID: 1 Number of ports: 4 Actor Key: 17 Partner Key: 11 Partner Mac Address: 00:24:51:12:63:00
        Slave Interface: eth2 MII Status: up Speed: 1000 Mbps Duplex: full Link Failure Count: 0 Permanent HW addr: 90:e2:ba:0f:76:b4 Aggregator ID: 1 Slave queue ID: 0
        Slave Interface: eth3 MII Status: up Speed: 1000 Mbps Duplex: full Link Failure Count: 0 Permanent HW addr: 90:e2:ba:0f:76:b5 Aggregator ID: 1 Slave queue ID: 0
        Slave Interface: eth4 MII Status: up Speed: 1000 Mbps Duplex: full Link Failure Count: 0 Permanent HW addr: 90:e2:ba:0f:76:b6 Aggregator ID: 1 Slave queue ID: 0
        Slave Interface: eth5 MII Status: up Speed: 1000 Mbps Duplex: full Link Failure Count: 0 Permanent HW addr: 90:e2:ba:0f:76:b7 Aggregator ID: 1 Slave queue ID: 0
    /etc/modprobe.d/bonding.conf:
        alias bond0 bonding
        options bond0 mode=4 miimon=100 updelay=200 #downdelay=200 xmit_hash_policy=layer3+4 lacp_rate=1
    Kernel: Linux 3.0.0+ #1 SMP Fri Oct 26 07:55:47 EEST 2012 x86_64 x86_64 x86_64 GNU/Linux
    What I've tried: downdelay=200, xmit_hash_policy=layer3+4, lacp_rate=1, mode 6
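    One thing worth double-checking here (an assumption on my part, since the boot order isn't shown) is whether the xmit_hash_policy from /etc/modprobe.d/bonding.conf is actually being applied: options set there only take effect when the bonding module is (re)loaded, and the /proc/net/bonding/bond0 output above still reports "Transmit Hash Policy: layer2 (0)". If most transmitted traffic is routed through a single gateway MAC, layer2 hashing puts it all on one slave, which would explain the pattern above. A minimal sketch of a RHEL/CentOS-style ifcfg-bond0 that passes the options through BONDING_OPTS instead; the <ip> placeholders are kept from the question:
        DEVICE=bond0
        IPADDR=<ip>
        NETMASK=<ip>
        GATEWAY=<ip>
        BOOTPROTO=none
        ONBOOT=yes
        USERCTL=no
        NM_CONTROLLED=no
        # mode 4 = 802.3ad; layer3+4 hashes on IP address and port, so different
        # flows can land on different slaves instead of all following one MAC pair
        BONDING_OPTS="mode=4 miimon=100 lacp_rate=1 xmit_hash_policy=layer3+4"
    Even with layer3+4, a single TCP stream still maps to one slave, so one large transfer will never exceed one link's bandwidth; the balancing only shows up across many concurrent flows.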

    Read the article

  • Bind9 Debian Not responding

    - by Marc
    I'm trying to set up a web server with Bind9 and apache2 on Debian 6. I am trying to learn to do it manually, so I do not have any control panels or anything, just the command line. I have a domain name, let's call it www.example.com. I want a virtual host setup so that I can have multiple websites with different names on my server. I have ns1.example.com and ns2.example.com registered at my server's IP (123.456.789.12). Below is my Bind9 named.conf.options:
        options {
        directory "/var/cache/bind";
        // If there is a firewall between you and nameservers you want
        // to talk to, you may need to fix the firewall to allow multiple
        // ports to talk. See http://www.kb.cert.org/vuls/id/800113
        // If your ISP provided one or more IP addresses for stable
        // nameservers, you probably want to use them as forwarders.
        // Uncomment the following block, and insert the addresses replacing
        // the all-0's placeholder.
        // forwarders {
        // 0.0.0.0;
        // };
        auth-nxdomain no; # conform to RFC1035
        listen-on-v6 { any; };
        };
    This is the default; I'm not sure if I was supposed to edit it. I didn't. Here is my named.conf.default-zones:
        // prime the server with knowledge of the root servers
        zone "." { type hint; file "/etc/bind/db.root"; };
        // be authoritative for the localhost forward and reverse zones, and for
        // broadcast zones as per RFC 1912
        zone "localhost" { type master; file "/etc/bind/db.local"; };
        zone "127.in-addr.arpa" { type master; file "/etc/bind/db.127"; };
        zone "0.in-addr.arpa" { type master; file "/etc/bind/db.0"; };
        zone "255.in-addr.arpa" { type master; file "/etc/bind/db.255"; };
        zone "example.com.com" { type master; file "etc/bind/example.com.db"; };
    named.conf.local is an empty file with a comment saying to do local configuration here. example.com.db looks like this:
        ; BIND data file for mywebsite.com
        ;
        $ORIGIN example.com.
        $TTL 604800
        @ IN SOA ns1.example.com. [email protected]. (
        2009120101 ; Serial
        604800 ; Refresh
        86400 ; Retry
        2419200 ; Expire
        604800 ) ; Negative Cache TTL
        ;
        IN NS ns1.example.com.
        IN NS ns2.example.com.
        IN MX 10 mail.example.com.
        localhost IN A 127.0.0.1
        example.com. IN A 123.456.789.12
        ns1 IN A 123.456.789.12
        ns2 IN A 123.456.789.12
        www IN A 123.456.789.12
        ftp IN A 123.456.789.12
        mail IN A 123.456.789.12
        boards IN CNAME www
    These are all settings I've found in various tutorials. Now when I go to intodns I get: You should already know that your NS records at your nameservers are missing, so here it is again: ns1.example.com ns2.example.com Can someone help me? I'm not sure what I'm doing wrong.
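    For what it's worth, two details in the pasted config stand out: the zone is declared as "example.com.com" instead of "example.com", and the zone file path "etc/bind/example.com.db" is missing its leading slash, so BIND is probably never loading the zone the registrar is delegating to. A minimal sketch of how the declaration could look, placed in named.conf.local (the file Debian reserves for local zones); the names are the same placeholders used in the question:
        zone "example.com" {
            type master;
            file "/etc/bind/example.com.db";    // absolute path to the zone file
        };
    After a service bind9 restart (or rndc reload), something like dig @127.0.0.1 example.com NS should list ns1.example.com and ns2.example.com if the zone file parses cleanly, and the intodns warning about missing NS records should go away.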

    Read the article

  • Fusion Middleware News from OTN Japan, Part 3

    - by rika.tokumichi
    A round-up post from the OTN Japan desk (February/March 2010) on Fusion Middleware topics: highlights from the WebLogic Server column, including WebLogic Server and JRockit articles, the 8th WebLogic Server study-group meeting, and the 2009 year-in-review; links to OTN seminar material on Oracle WebLogic Server 11g, Oracle SOA Suite 11g, Oracle Coherence and Oracle Tuxedo; certification guidance for Oracle WebLogic Server 10g System Administrator Certified Expert, Oracle WebLogic Server 10g Developer Certified Expert, and ORACLE MASTER Silver/Gold Oracle Application Server 10g (OCA/OCP); the oraclemiddle_jp and OTN Japan Twitter accounts; and subscription pointers for the INFORMATION INDEPTH NEWSLETTERS Fusion Middleware Edition and Oracle's Dev2DBA Newsletter.

    Read the article

  • Asp.net mvc 2 .net 4.0 error when View model type is Tuple with more than 4 items

    - by Bojan
    When I create a strongly typed view in ASP.NET MVC 2 on .NET 4.0 with a Tuple model type, I get an error whenever the Tuple has more than 4 items.
    Example 1: the view type is Tuple<string, string, string, string> (a 4-tuple) and everything works fine.
        view: <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/WebUI.Master" Inherits="System.Web.Mvc.ViewPage<Tuple<string, string, string, string>>" %>
        controller: var tuple = Tuple.Create("a", "b", "c", "d"); return View(tuple);
    Example 2: the view type is Tuple<string, string, string, string, string> (a 5-tuple) and I get this error: Compiler Error Message: CS1003: Syntax error, '>' expected
        view: <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/WebUI.Master" Inherits="System.Web.Mvc.ViewPage<Tuple<string, string, string, string, string>>" %>
        controller: var tuple = Tuple.Create("a", "b", "c", "d", "e"); return View(tuple);
    Example 3: if my view model is of type dynamic, I can use both the 4-tuple and the 5-tuple and there is no error on the page.
        view: <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/WebUI.Master" Inherits="System.Web.Mvc.ViewPage<dynamic>" %>
        controller: dynamic model = new ExpandoObject(); model.tuple = Tuple.Create("a", "b", "c", "d"); return View(model);
    or
        view: <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/WebUI.Master" Inherits="System.Web.Mvc.ViewPage<dynamic>" %>
        controller: dynamic model = new ExpandoObject(); model.tuple = Tuple.Create("a", "b", "c", "d", "e"); return View(model);
    Even with something like Tuple<string, Tuple<string, string, string>, string> (a 3-tuple where one of the items is itself a tuple), I get the same error whenever the total number of items across all the tuples is more than 4; Tuple<string, Tuple<string, string>, string> works fine.
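    The CS1003 message appears to come from how the ASPX page parser handles the long generic type name in the Inherits attribute rather than from the controller code, so one pragmatic workaround (a sketch, not a confirmed fix for this particular parser limit) is to keep big Tuple types out of the view signature and wrap the values in a small view model class; the class, namespace and property names below are illustrative, not taken from the original project:
        using System.Web.Mvc;

        namespace MyApp.Models
        {
            // Hypothetical replacement for Tuple<string, string, string, string, string>
            public class QuintetViewModel
            {
                public string First { get; set; }
                public string Second { get; set; }
                public string Third { get; set; }
                public string Fourth { get; set; }
                public string Fifth { get; set; }
            }
        }

        namespace MyApp.Controllers
        {
            using MyApp.Models;

            public class HomeController : Controller
            {
                public ActionResult Index()
                {
                    var model = new QuintetViewModel
                    {
                        First = "a", Second = "b", Third = "c", Fourth = "d", Fifth = "e"
                    };
                    return View(model);
                }
            }
        }
    The view directive then only needs Inherits="System.Web.Mvc.ViewPage<MyApp.Models.QuintetViewModel>", which keeps the generic type in the Inherits attribute short and, as a side effect, gives the view properties with meaningful names instead of Item1 through Item5.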

    Read the article

  • Linq to SQL with INSTEAD OF Trigger and an Identity Column

    - by Bob Horn
    I need to use the clock on my SQL Server to write a time to one of my tables, so I thought I'd just use GETDATE(). The problem is that I'm getting an error because of my INSTEAD OF trigger. Is there a way to set one column to GETDATE() when another column is an identity column? This is the Linq-to-SQL:
        internal void LogProcessPoint(WorkflowCreated workflowCreated, int processCode)
        {
            ProcessLoggingRecord processLoggingRecord = new ProcessLoggingRecord()
            {
                ProcessCode = processCode,
                SubId = workflowCreated.SubId,
                EventTime = DateTime.Now // I don't care what this is. SQL Server will use GETDATE() instead.
            };
            this.Database.Add<ProcessLoggingRecord>(processLoggingRecord);
        }
    This is the table. EventTime is what I want to have as GETDATE(). I don't want the column to be null. And here is the trigger:
        ALTER TRIGGER [Master].[ProcessLoggingEventTimeTrigger]
        ON [Master].[ProcessLogging]
        INSTEAD OF INSERT
        AS
        BEGIN
            SET NOCOUNT ON;
            SET IDENTITY_INSERT [Master].[ProcessLogging] ON;
            INSERT INTO ProcessLogging (ProcessLoggingId, ProcessCode, SubId, EventTime, LastModifiedUser)
            SELECT ProcessLoggingId, ProcessCode, SubId, GETDATE(), LastModifiedUser FROM inserted
            SET IDENTITY_INSERT [Master].[ProcessLogging] OFF;
        END
    Without getting into all of the variations I've tried, this last attempt produces this error:
        InvalidOperationException: Member AutoSync failure. For members to be AutoSynced after insert, the type must either have an auto-generated identity, or a key that is not modified by the database after insert.
    I could remove EventTime from my entity, but I don't want to do that. If it was gone though, then it would be NULL during the INSERT and GETDATE() would be used. Is there a way that I can simply use GETDATE() on the EventTime column for INSERTs?
    Note: I do not want to use C#'s DateTime.Now for two reasons:
    1. One of these inserts is generated by SQL Server itself (from another stored procedure)
    2. Times can be different on different machines, and I'd like to know exactly how fast my processes are happening.
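    One alternative worth considering, sketched here under the assumption that EventTime only ever needs the insert-time server clock and that the INSTEAD OF trigger does no other work, is to drop the trigger and let a default constraint supply GETDATE(); the constraint name below is illustrative:
        -- Let SQL Server fill EventTime itself on INSERT
        ALTER TABLE [Master].[ProcessLogging]
            ADD CONSTRAINT DF_ProcessLogging_EventTime DEFAULT (GETDATE()) FOR EventTime;
    On the Linq-to-SQL side, marking EventTime as Auto Generated Value = True (with Auto-Sync set to OnInsert) in the designer tells the mapping to leave the column out of its INSERT and read the server-assigned value back, which should also make the AutoSync error go away since the identity column is once again genuinely auto-generated.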

    Read the article

  • Why isn't my UITableView in a popover appearing in the correct scroll position?

    - by zbrimhall
    I have a split view-based app that presents a master-detail interface, and uses a popover to present the master list when in portrait mode. The popover presents a sectioned table view that ultimately gets populated by a subclass of NSFetchedResultsController. I can tap the tool bar button to present the master list, scroll to whatever row, and tap the row to dismiss the popover. My problem is that if the table is scrolled past the top of the second section, when I dismiss the popover and then later tap the toolbar button to re-present it, the table's scroll position is always set such that the first row of the second section is at the top of the list. If I haven't scrolled past the top of the second section, it correctly remembers its scroll position when the table is presented again. Similarly, in landscape mode, if I scroll the table past the top of the third section and then rotate to portrait, when I come back to landscape the scroll position is always set such that the first row of the third section is at the top of the list. I tried calling -scrollToNearestSelectedRowAtScrollPosition:animated in both the master view controller's -viewWillAppear, as well as in the split view delegate's splitViewController:popoverController:willPresentViewController:, to no effect. Anybody have a clue what I might be doing wrong?

    Read the article

  • Git checkout doesn't change anything, and it's getting very frustrating

    - by Josh
    I really like git. At least, I like the idea of git. Being able to check out my master project as a separate branch where I can change whatever I want without risk of screwing everything else up is awesome. But it's not working. Every time I check out one branch from another, make changes on that branch, and then check out the original branch, I still have all the files and changes that happened on the other branch. This is getting extremely frustrating. I've read that this can happen when you have files open in the IDE while doing this, but I've been pretty careful about that: I closed the files in the IDE, closed the IDE, and shut down my rails server before switching branches, and this still happens. Also, running 'git clean -f' either deletes everything that happened after some arbitrary commit (and randomly, at that), or, as in the latest case, doesn't change anything back to its original state. I thought I was using git correctly, but at this point, I'm at my wit's end. I'm trying to work on a bunch of experimental code against a stable version of my project, but I keep having to manually track down and fix all the changes I made. Any ideas or suggestions?
        git checkout -b photo_tagging
        git branch # to make sure it's right
        # make a bunch of changes, creations, etc
        git status # see what's changed since before
        git add . # approve of the changes, I guess, since if I do git commit after this, it says no changes
        git commit -m 'these are changes I made'
        git checkout master
        git branch #=> *master
        # look at files, tags_controller is still there, added in photo_tagging
        # and code added in photo_tagging branch is still there in *master
    This seems to happen whether or not I commit on the branch.
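    The command list above is essentially the right workflow, so one plausible explanation (an assumption, since the actual output isn't shown) is that the changes never made it into a commit on photo_tagging; git deliberately carries uncommitted and untracked files along when switching branches. A quick sanity check of the sequence, with placeholder commit messages:
        git checkout -b photo_tagging
        # ...edit files, generate tags_controller, etc...
        git status               # everything to keep should be listed as modified or untracked
        git add -A               # stages modified, new and deleted files ("add ." misses deletions in older git)
        git commit -m 'WIP: photo tagging work'
        git status               # should now report a clean working tree
        git checkout master
        git status               # master should show none of the photo_tagging changes
    Untracked files are never touched by a branch switch, so a newly generated tags_controller keeps showing up on master until it is committed on photo_tagging (or removed); likewise, if the commit on photo_tagging reports "no changes", nothing was actually staged and the edits simply follow you to master.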

    Read the article

  • Best way to fork SVN project with Git

    - by Jeremy Thomerson
    I have forked an SVN project using Git because I needed to add features that they didn't want. But at the same time, I wanted to be able to continue pulling in features or fixes that they added to the upstream version down into my fork (where they don't conflict). So, I have my Git project with the following branches:
    master - the branch I actually build and deploy from
    feature_* - feature branches where I work or have worked on new things, which I then merge to master when complete
    vendor-svn - my local-only git-svn branch that allows me to "git svn rebase" from their svn repo
    vendor - my local branch that I merge vendor-svn into; I then push this (vendor) branch to the public git repo (github)
    So, my flow is something like this:
        git checkout vendor-svn
        git svn rebase
        git checkout vendor
        git merge vendor-svn
        git push origin vendor
    Now, the question comes here: I need to review each commit that they made (preferably individually, since at this point I'm about twenty commits behind them) before merging them into master. I know that I could run git checkout master; git merge vendor, but this would pull in all the changes and commit them, without me being able to see whether they conflict with what I need. So, what's the best way to do this? Git seems like a great tool for handling forks of projects since you can pull and push from multiple repos - I'm just not experienced enough with it to know the best way of doing this. Here's the original SVN project I'm talking about: https://appkonference.svn.sourceforge.net/svnroot/appkonference My fork is at github.com/jthomerson/AsteriskAudioKonf (sorry - I couldn't make it a link since I'm a new user here)
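    Since the goal is to review each upstream commit before it reaches master, one workable pattern (a sketch, assuming vendor is already up to date from git svn rebase; <sha> stands for whichever commit id the log shows) is to list what vendor has that master lacks and then either bring commits over one at a time or merge without auto-committing so the result can be inspected first:
        git checkout master
        git log --oneline master..vendor    # commits on vendor that are not yet on master
        git cherry-pick <sha>               # apply a single upstream commit for review
        # or review the whole batch at once without an automatic merge commit:
        git merge --no-commit --no-ff vendor
        git diff --cached                   # inspect what the merge would introduce
        git reset --merge                   # back out if it should not go in yet
    Cherry-picking keeps master's history easy to audit when only a handful of upstream fixes are wanted, while the no-commit merge is less work once the backlog of twenty commits has been reviewed.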

    Read the article

  • In git, how can I get the diff between two dates?

    - by Weidenrinde
    Basically, I am looking for something equivalent to cvs diff -D"1 day ago" -D"2010-02-29 11:11". While collecting more and more information, I found a solution. I paste it here for others that might have similar problems. Things I have tried:
    In git, how can I get the diff between all the commits that occurred between two dates? was answered here with: git whatchanged --since="1 day ago" -p. But this gives a diff for each commit, even if there are multiple commits touching the same file. I know that "date" is a bit of a loose concept in git, but I thought there must be some way to do this.
    git diff 'master@{1 day ago}'..master gives the warning "warning: Log for 'master' only goes back to Tue, 16 Mar 2010 14:17:32 +0100." and does not show old diffs.
    git format-patch --since=yesterday --stdout does not give anything.
    revs=$(git log --pretty="format:%H" --since="1 day ago"); git diff $(echo "$revs"|tail -n1) $(echo "$revs"|head -n1) works somehow, but seems complicated and does not restrict itself to the current branch.
    git diff $(git rev-list -n1 --before="1 day ago" master) seems to work and looks like the standard way to do this kind of thing, although it is more complicated than I expected.
    Funnily enough, git-cvsserver does not support "cvs diff -D" (without this being documented anywhere).
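    To get all the way back to the original two-date cvs diff, the same rev-list trick can simply be applied at both ends; a small sketch using the dates from the question (any branch name works in place of master):
        # newest commit on master at each point in time, then the diff between them
        git diff $(git rev-list -n1 --before="2010-02-29 11:11" master) \
                 $(git rev-list -n1 --before="1 day ago" master)
    Because rev-list is told to walk master, the comparison stays on that branch's history, which addresses the "does not restrict to the current branch" complaint above.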

    Read the article

< Previous Page | 47 48 49 50 51 52 53 54 55 56 57 58  | Next Page >