Search Results

Search found 21955 results on 879 pages for 'self host'.

  • SSH connect error

    - by DMDGeeker
    I use a notebook running Ubuntu 12.10 and try to connect to a server running Ubuntu 12.04. The server has openssh-server installed and allows both public-key and password login. I can connect to the server fine at first, but after a few minutes the connection starts failing. First, it shows me these messages:

        WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!
        ......
        Add correct host key in /home/myname/.ssh/known_hosts to get rid of this message.

    But I never reinstalled the system or openssh-server; neither has ever changed, and the server was never shut down or rebooted. Second, after I remove the offending key from my known_hosts and ssh to the server again, it prompts me for my password. Then my nightmare begins:

        Permission denied (publickey,password)

    But I typed the correct password! PS: I have successfully logged in with both password and public key before, but the problem reappears after I log out and log back in.
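
    A few standard debugging steps for this pair of symptoms (a sketch; server.example.com and myname are placeholders):

        # remove the cached key for the host instead of editing known_hosts by hand
        ssh-keygen -R server.example.com

        # see which host key the server is presenting right now
        ssh-keyscan server.example.com

        # connect verbosely to see which authentication methods are offered and why each fails
        ssh -v myname@server.example.com

    If the key reported by ssh-keyscan changes between runs, two different machines may be answering for the same address (an IP conflict, stale DNS, or an intervening NAT or load balancer), which would explain both the host-key warning and the password being rejected by the "wrong" server.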

  • 2008R2 Standard and Hyper-V and Ram Usage (Usable vs Available)

    - by Mark
    A new server was purchased for our development team to start utilizing the full feature set of TFS, namely Lab Management. Because of the need for Lab Management we bought a fairly beefy machine to handle this task and to also act as a build machine. I have been tasked with setting up additional TFS features on this machine, starting with a build controller and eventually moving toward a full Lab Management setup using Hyper-V. My question: upon initially logging in, I noticed that Windows registers 64 GB of RAM but shows only 32 GB as usable. I know this is a limitation of licensing, since only Standard Edition is installed. Since Hyper-V is another layer that handles the virtualization of guest OSes, is Hyper-V able to access this memory? Or is Hyper-V memory usage also limited by 2008 R2 Standard? If Hyper-V can somehow access this memory, is this how it should be set up? Or should the host 2008 R2 Standard be upgraded to Enterprise so the host can utilize the full 64 GB? Before I go hog wild with TFS I wanted to ask some experts so I don't need to reinstall the OS down the road to utilize the additional 32 GB. Thanks for any help or links you can share.

  • Nginx vhost configuration

    - by user101494
    I am attempting to set up a new server with Nginx 1.0.10 on Debian 6. The config below works perfectly on a server with nginx 0.8.36 on Ubuntu 10.04.3, but not on the new box. The desired result is to:

    - Redirect non-www requests on the TLD to www, but not subdomains
    - Use the folder structure /var/www/[domain]/htdocs and /var/www/[domain]/subdomains/[subdomain]/htdocs
    - Serve files for any host for which files exist in this structure

    On the new server, domains are matching correctly but subdomains are matching to /var/www/[subdomain].[domain]/htdocs, not /var/www/[domain]/subdomains/[subdomain]/htdocs.

        server {
            listen 80;
            server_name _________ ~^[^.]+\.[^.]+$;
            rewrite ^(.*)$ $scheme://www.$host$1 permanent;
        }

        server {
            listen 80;
            server_name _ ~^www\.(?<domain>.+)$;
            server_name_in_redirect off;
            location / {
                root /var/www/$domain/htdocs;
                index index.html index.htm index.php;
                fastcgi_index index.php;
            }
            location ~ \.php$ {
                include /etc/nginx/fastcgi_params;
                keepalive_timeout 0;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
            location ~ /\.ht {
                deny all;
            }
        }

        server {
            listen 80;
            server_name __ ~^(?<subdomain>\.)?(?<domain>.+)$$;
            server_name_in_redirect off;
            location / {
                root /var/www/$domain/subdomains/$subdomain/htdocs;
                index index.html index.htm index.php;
                fastcgi_index index.php;
            }
            location ~ \.php$ {
                include /etc/nginx/fastcgi_params;
                keepalive_timeout 0;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            }
            location ~ /\.ht {
                deny all;
            }
        }
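
    For what it's worth, the capture in the third server block looks suspicious: (?<subdomain>\.)? can only ever match a literal dot, so $subdomain never receives a host-name label. A capture along these lines (a sketch, untested against this exact layout, and assuming two-label domains like example.com rather than example.co.uk) splits the host into the two parts the root directive expects:

        server {
            listen 80;
            server_name ~^(?<subdomain>[^.]+)\.(?<domain>[^.]+\.[^.]+)$;
            root /var/www/$domain/subdomains/$subdomain/htdocs;
            # ... locations as above ...
        }

    With this pattern, foo.example.com yields $subdomain = foo and $domain = example.com; the earlier www block still wins for www hosts because regex server names are evaluated in order of appearance.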

  • schroot build environment setup: how to avoid bind-mounting home

    - by minghua
    Recent Linux distributions such as Fedora and Ubuntu build their packages inside a chroot environment, because a build often needs special tools installed into the system, and using a chroot avoids making any changes to the host. To set up such a build environment, the first step is to create the chroot. I'm following the setup guide at https://wiki.debian.org/Schroot

        [wheezy-test]
        description=Contains the SPICE program
        aliases=test
        type=directory
        directory=/srv/chroot/test
        users=jsmith
        root-groups=root
        script-config=desktop/config
        personality=linux
        preserve-environment=true

    On my setup the host's /home is on /dev/mapper. When the schroot is entered, the same /home is bind-mounted. Is there a way to avoid this? I would prefer a separate /home inside the chroot. Changing the type from directory to plain stops the bind mounting, but it also loses /proc, /sys, etc., which you would then have to bind-mount manually; that does not seem like a good solution. If a simple configuration change is unavailable, any idea where the script for type=directory lives? I'll modify that script manually if I have to. Thanks in advance for any answers or hints!
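
    One approach that is commonly suggested (a sketch; paths assume a stock Debian schroot layout, so double-check them on your system): clone the profile named by script-config, drop the /home line from the clone's fstab, and point the chroot at the clone.

        # clone the 'desktop' profile referenced by script-config
        sudo cp -a /etc/schroot/desktop /etc/schroot/desktop-nohome

        # comment out the /home bind mount in the clone
        sudo sed -i 's|^/home|#/home|' /etc/schroot/desktop-nohome/fstab

        # the profile config references its fstab by absolute path, so repoint it
        sudo sed -i 's|/etc/schroot/desktop/|/etc/schroot/desktop-nohome/|' /etc/schroot/desktop-nohome/config

    Then change the chroot definition to use the copy:

        script-config=desktop-nohome/config

    The per-profile fstab is what drives the bind mounts, so /proc and /sys keep working while /home stays private to the chroot.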

  • Very different results from df after a few seconds

    - by tatus2
    While the backup moves files from one server to the other, the results from df on the destination change every few seconds in an impossible manner. The source host is running rsync. On the destination host I'm running the following command every few seconds:

        echo `date` `df|grep md0`

    The results are below:

        Sat Jun 29 23:57:12 CEST 2013 /dev/md0 4326425568  579316100 3527339636 15% /MD0
        Sat Jun 29 23:57:14 CEST 2013 /dev/md0 4326425568  852513700 3254142036 21% /MD0
        Sat Jun 29 23:57:15 CEST 2013 /dev/md0 4326425568  969970340 3136685396 24% /MD0
        Sat Jun 29 23:57:17 CEST 2013 /dev/md0 4326425568 1255222180 2851433556 31% /MD0
        Sat Jun 29 23:57:20 CEST 2013 /dev/md0 4326425568 1276006720 2830649016 32% /MD0
        Sat Jun 29 23:57:24 CEST 2013 /dev/md0 4326425568 1355440016 2751215720 34% /MD0
        Sat Jun 29 23:57:26 CEST 2013 /dev/md0 4326425568 1425090960 2681564776 35% /MD0
        Sat Jun 29 23:57:27 CEST 2013 /dev/md0 4326425568 1474601872 2632053864 36% /MD0
        Sat Jun 29 23:57:28 CEST 2013 /dev/md0 4326425568 1493627384 2613028352 37% /MD0
        Sat Jun 29 23:57:32 CEST 2013 /dev/md0 4326425568  615934400 3490721336 15% /MD0
        Sat Jun 29 23:57:33 CEST 2013 /dev/md0 4326425568  636071360 3470584376 16% /MD0

    As you can see, I start at 15% usage and fifteen seconds later I'm at 37% (needless to say, the backup cannot copy that much data in such a short time). After about 20 seconds the cycle closes and I'm roughly back at the same usage as before; that final value is reasonable, as about 35 MB were copied. Can somebody explain to me what is going on? Does df only estimate usage rather than report the real used value?

  • WebLogic job scheduling

    - by XpiritO
    Hello, overflowers :) I'm trying to implement a WebLogic job-scheduling example to test my cluster's fail-over capabilities for scheduled tasks (to ensure that these tasks are executed in a fail-over scenario). With this in mind, I've been following this example and trying to configure everything accordingly. Here are the steps I've done so far:

    1. Configured a cluster with 1 admin server (AdminServer) and 2 managed instances (Noddy and Snoopy);
    2. Set up database tables (using Oracle XE): ACTIVE and WEBLOGIC_TIMERS;
    3. Set up a data source to access the DB and associated it with the scheduling tasks under "Settings for cluster" > "Scheduling";
    4. Implemented a job (TimerListener) and a servlet to initialize the job scheduling, as follows:

        package timedexecution;

        import java.io.IOException;
        import java.io.PrintWriter;
        import java.io.Serializable;
        import java.text.SimpleDateFormat;
        import java.util.Date;
        import javax.naming.InitialContext;
        import javax.naming.NamingException;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import commonj.timers.Timer;
        import commonj.timers.TimerListener;
        import commonj.timers.TimerManager;

        public class TimerServlet extends HttpServlet {

            private static final long serialVersionUID = 1L;

            protected static void logMessage(String message, PrintWriter out) {
                out.write("<p>" + message + "</p>");
                System.out.println(message);
            }

            @Override
            public void service(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                PrintWriter out = response.getWriter();
                out.println("<html>");
                out.println("<head><title>TimerServlet</title></head>");
                try {
                    logMessage("service() entering try block to intialize the timer from JNDI", out);
                    InitialContext ic = new InitialContext();
                    TimerManager jobScheduler = (TimerManager) ic.lookup("weblogic.JobScheduler");
                    logMessage("jobScheduler reference " + jobScheduler, out);
                    // execute this job every 30 seconds
                    jobScheduler.schedule(new ExampleTimerListener(), 0, 30 * 1000);
                    logMessage("Timer scheduled!", out);
                    logMessage("service() started the timer", out);
                    logMessage("Started the timer - status:", out);
                } catch (NamingException ne) {
                    String msg = ne.getMessage();
                    logMessage("Timer schedule failed!", out);
                    logMessage(msg, out);
                } catch (Throwable t) {
                    logMessage("service() error initializing timer manager with JNDI name weblogic.JobScheduler " + t, out);
                }
                out.println("</body></html>");
                out.close();
            }

            private static class ExampleTimerListener implements Serializable, TimerListener {
                private static final long serialVersionUID = 8313912206357147939L;

                public void timerExpired(Timer timer) {
                    SimpleDateFormat sdf = new SimpleDateFormat();
                    System.out.println("timerExpired() called at " + sdf.format(new Date()));
                }
            }
        }

    Then I executed the servlet to start the scheduling on the first managed instance (Noddy server), which returned as expected:

        service() entering try block to intialize the timer from JNDI
        jobScheduler reference weblogic.scheduler.TimerServiceImpl@43b4c7
        Timer scheduled!
        service() started the timer
        Started the timer - status:

    This resulted in the creation of 2 rows in my DB tables. WEBLOGIC_TIMERS table state after servlet execution:

        TIMER_ID           = "Noddy_1268653040156"
        LISTENER           = [datatype]
        START_TIME         = 1268653040156
        INTERVAL           = 30000
        TIMER_MANAGER_NAME = "weblogic.JobScheduler"
        DOMAIN_NAME        = "myCluster"
        CLUSTER_NAME       = "Cluster"

    ACTIVE table state after servlet execution:

        SERVER      = "service.SINGLETON_MASTER"
        INSTANCE    = "6382071947583985002/Noddy"
        DOMAINNAME  = "QRENcluster"
        CLUSTERNAME = "Cluster"
        TIMEOUT     = "10.03.15"

    However, the job is not executed as scheduled. It should print a timestamped message to the server's log output (the Noddy.out file) saying that the timer has expired, but it doesn't. My log files read as follows.

    Admin server log (myCluster.log file):

        ####<15/Mar/2010 10H45m GMT> <Warning> <Cluster> <test-ad> <Noddy> <[STANDBY] ExecuteThread: '1' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1268649925727> <BEA-000192> <No currently living server was found that could host TimerMaster. The server will retry in a few seconds.>

    Noddy server log (Noddy.out file):

        service() entering try block to intialize the timer from JNDI
        jobScheduler reference weblogic.scheduler.TimerServiceImpl@43b4c7
        Timer scheduled!
        service() started the timer
        Started the timer - status:
        <15/Mar/2010 10H45m GMT> <Warning> <Cluster> <BEA-000192> <No currently living server was found that could host TimerMaster. The server will retry in a few seconds.>

    Noddy.log file:

        ####<15/Mar/2010 11H24m GMT> <Info> <Common> <test-ad> <Noddy> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1268652270128> <BEA-000628> <Created "1" resources for pool "TxDataSourceOracle", out of which "1" are available and "0" are unavailable.>
        ####<15/Mar/2010 11H37m GMT> <Info> <Cluster> <test-ad> <Noddy> <[ACTIVE] ExecuteThread: '0' for queue: 'weblogic.kernel.Default (self-tuning)'> <<anonymous>> <> <> <1268653040226> <BEA-000182> <Job Scheduler created a job with ID Noddy_1268653040156 for TimerListener with description timedexecution.TimerServlet$ExampleTimerListener@2ce79a>
        ####<15/Mar/2010 11H39m GMT> <Info> <JDBC> <test-ad> <Noddy> <[ACTIVE] ExecuteThread: '3' for queue: 'weblogic.kernel.Default (self-tuning)'> <<WLS Kernel>> <> <> <1268653166307> <BEA-001128> <Connection for pool "TxDataSourceOracle" closed.>

    Can anyone help me discover what's wrong with my configuration? Thanks in advance for your help!

  • How to update a database table in sqlite3 on iPhone

    - by Ajeet Kumar Yadav
    Hi, I am new to iPhone development and am building an application that uses a database, but I am facing a problem. In my application I use a table view. I fetch values from the database into the table view; the user can also insert values into that database table through a text field, and those values are displayed in the table view. I also use another database table named alootikki, whose values are displayed in a second table view. The user can add a value from alootikki to the first table; when this happens, the value shows up in the first table view but is never written to the database table. So when the user next adds data through the text field, only the newly added value shows and the first value disappears from the table view. I am not able to solve this, please help me. The database code is given below:

        - (void)Data1
        {
            //////databaseName = @"dataa.sqlite";
            databaseName = @"Recipe1.sqlite";
            NSArray *documentPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
            NSString *documentsDir = [documentPaths objectAtIndex:0];
            databasePath = [documentsDir stringByAppendingPathComponent:databaseName];
            [self checkAndCreateDatabase];
            list1 = [[NSMutableArray alloc] init];
            sqlite3 *database;
            if (sqlite3_open([databasePath UTF8String], &database) == SQLITE_OK) {
                if (addStmt == nil) {
                    ////////////const char *sql = "insert into Dataa(item) Values(?)";
                    const char *sql = "insert into Slist(Incredients) Values(?)";
                    if (sqlite3_prepare_v2(database, sql, -1, &addStmt, NULL) != SQLITE_OK)
                        NSAssert1(0, @"Error while creating add statement. '%s'", sqlite3_errmsg(database));
                }
                sqlite3_bind_text(addStmt, 1, [i UTF8String], -1, SQLITE_TRANSIENT);
                //sqlite3_bind_int(addStmt, 1, i);
                //sqlite3_bind_text(addStmt, 1, [coffeeName UTF8String], -1, SQLITE_TRANSIENT);
                //sqlite3_bind_double(addStmt, 2, [price doubleValue]);
                if (SQLITE_DONE != sqlite3_step(addStmt))
                    NSAssert1(0, @"Error while inserting data. '%s'", sqlite3_errmsg(database));
                else
                    // SQLite provides a method to get the last primary key inserted by using sqlite3_last_insert_rowid
                    coffeeID = sqlite3_last_insert_rowid(database);
                // Reset the add statement.
                sqlite3_reset(addStmt);
                // sqlite3_clear_bindings(detailStmt);
            }
            sqlite3_finalize(addStmt);
            addStmt = nil;
            sqlite3_close(database);
        }

        - (void)sopinglist
        {
            databaseName = @"Recipe1.sqlite";
            NSArray *documentPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
            NSString *documentsDir = [documentPaths objectAtIndex:0];
            databasePath = [documentsDir stringByAppendingPathComponent:databaseName];
            [self checkAndCreateDatabase];
            list1 = [[NSMutableArray alloc] init];
            sqlite3 *database;
            if (sqlite3_open([databasePath UTF8String], &database) == SQLITE_OK) {
                if (addStmt == nil) {
                    ///////////const char *sql = "insert into Dataa(item) Values(?)";
                    const char *sql = "insert into Slist(Incredients, Recipename, foodtype) Values(?,?,?)";
                    /////////////const char *sql = "Update Slist (Incredients, Recipename, foodtype) Values(?,?,?)";
                    if (sqlite3_prepare_v2(database, sql, -1, &addStmt, NULL) != SQLITE_OK)
                        NSAssert1(0, @"Error while creating add statement. '%s'", sqlite3_errmsg(database));
                }
                /////for (NSString *j in k)
                sqlite3_bind_text(addStmt, 1, [k UTF8String], -1, SQLITE_TRANSIENT);
                //sqlite3_bind_int(addStmt, 1, i);
                //sqlite3_bind_text(addStmt, 1, [coffeeName UTF8String], -1, SQLITE_TRANSIENT);
                //sqlite3_bind_double(addStmt, 2, [price doubleValue]);
                if (SQLITE_DONE != sqlite3_step(addStmt))
                    NSAssert1(0, @"Error while inserting data. '%s'", sqlite3_errmsg(database));
                else
                    // SQLite provides a method to get the last primary key inserted by using sqlite3_last_insert_rowid
                    coffeeID = sqlite3_last_insert_rowid(database);
                // Reset the add statement.
                sqlite3_reset(addStmt);
                // sqlite3_clear_bindings(detailStmt);
            }
            sqlite3_finalize(addStmt);
            addStmt = nil;
            sqlite3_close(database);
        }

        - (void)Data
        {
            ////////////////databaseName = @"dataa.sqlite";
            databaseName = @"Recipe1.sqlite";
            NSArray *documentPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
            NSString *documentsDir = [documentPaths objectAtIndex:0];
            databasePath = [documentsDir stringByAppendingPathComponent:databaseName];
            [self checkAndCreateDatabase];
            list1 = [[NSMutableArray alloc] init];
            sqlite3 *database;
            if (sqlite3_open([databasePath UTF8String], &database) == SQLITE_OK) {
                if (detailStmt == nil) {
                    const char *sql = "Select * from Slist";
                    if (sqlite3_prepare_v2(database, sql, -1, &detailStmt, NULL) == SQLITE_OK) {
                        //NSLog(@"Hiiiiiii");
                        //sqlite3_bind_text(detailStmt, 1, [t1 UTF8String], -1, SQLITE_TRANSIENT);
                        //sqlite3_bind_text(detailStmt, 2, [t2 UTF8String], -2, SQLITE_TRANSIENT);
                        //sqlite3_bind_int(detailStmt, 3, t3);
                        while (sqlite3_step(detailStmt) == SQLITE_ROW) {
                            NSLog(@"Helllloooooo");
                            NSString *item = [NSString stringWithUTF8String:(char *)sqlite3_column_text(detailStmt, 0)];
                            char *str = (char *)sqlite3_column_text(detailStmt, 0);
                            if (str) {
                                item = [NSString stringWithUTF8String:str];
                            } else {
                                item = @"";
                            }
                            //+ (NSString *)stringWithCharsIfNotNull:(char *)item
                            //{
                            //    if (item == NULL) return nil;
                            //    else return [[NSString stringWithUTF8String:item]
                            //        stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]];
                            //}
                            //NSString *fame = [NSString stringWithUTF8String:(char *)sqlite3_column_text(detailStmt, 1)];
                            //NSString *cinemax = [NSString stringWithUTF8String:(char *)sqlite3_column_text(detailStmt, 2)];
                            //NSString *big = [NSString stringWithUTF8String:(char *)sqlite3_column_text(detailStmt, 3)];
                            //pvr1 = pvr;
                            item1 = item;
                            //NSLog(@"%@", item1);
                            data = [[NSMutableArray alloc] init];
                            list *animal = [[list alloc] initWithName:item1];
                            // Add the animal object to the animals Array
                            [list1 addObject:animal];
                            //[list1 addObject:item];
                        }
                        sqlite3_reset(detailStmt);
                    }
                    sqlite3_finalize(detailStmt);
                    // sqlite3_clear_bindings(detailStmt);
                }
            }
            detailStmt = nil;
            sqlite3_close(database);
        }

        - (void)recpies
        {
            /////////////////////databaseName = @"Data.sqlite";
            databaseName = @"Recipe1.sqlite";
            NSArray *documentPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
            NSString *documentsDir = [documentPaths objectAtIndex:0];
            databasePath = [documentsDir stringByAppendingPathComponent:databaseName];
            [self checkAndCreateDatabase];
            list1 = [[NSMutableArray alloc] init];
            sqlite3 *database;
            if (sqlite3_open([databasePath UTF8String], &database) == SQLITE_OK) {
                if (detailStmt == nil) {
                    //////const char *sql = "Select * from Dataa";
                    const char *sql = "select * from alootikki";
                    if (sqlite3_prepare_v2(database, sql, -1, &detailStmt, NULL) == SQLITE_OK) {
                        //NSLog(@"Hiiiiiii");
                        while (sqlite3_step(detailStmt) == SQLITE_ROW) {
                            //NSLog(@"Helllloooooo");
                            NSString *item = [NSString stringWithUTF8String:(char *)sqlite3_column_text(detailStmt, 0)];
                            char *str = (char *)sqlite3_column_text(detailStmt, 0);
                            if (str) {
                                item = [NSString stringWithUTF8String:str];
                            } else {
                                item = @"";
                            }
                            //NSString *fame = [NSString stringWithUTF8String:(char *)sqlite3_column_text(detailStmt, 1)];
                            //NSString *cinemax = [NSString stringWithUTF8String:(char *)sqlite3_column_text(detailStmt, 2)];
                            //NSString *big = [NSString stringWithUTF8String:(char *)sqlite3_column_text(detailStmt, 3)];
                            //pvr1 = pvr;
                            item1 = item;
                            //NSLog(@"%@", item1);
                            data = [[NSMutableArray alloc] init];
                            list *animal = [[list alloc] initWithName:item1];
                            // Add the animal object to the animals Array
                            [list1 addObject:animal];
                            //[list1 addObject:item];
                        }
                        sqlite3_reset(detailStmt);
                    }
                    sqlite3_finalize(detailStmt);
                    // sqlite3_clear_bindings(detailStmt);
                }
            }
            detailStmt = nil;
            sqlite3_close(database);
        }

  • Increase efficiency of a loop with jQuery

    - by Pez Cuckow
    I have a game coded in jQuery where bots are moved around the screen. The code below is a loop that runs every 20ms; currently, once you have over 15 bots you start to notice the browser lagging (simply because of all the advanced collision detection going on). Is there any way to reduce the lag? Can I make it any more efficient? P.S. sorry for just posting a block of code, I can't see a way to make my point clear enough without it!

        $.playground().registerCallback(function(){ //Movement Loop
            if(!pause) {
                for (var i in bots) {
                    //bots - color, dir, x, y, z, spawned?, spawnerid, prevd
                    var self = $('#b' + i);
                    var current = bots[i];
                    if(bots[i][5]==1) {
                        var xspeed = 0, yspeed = 0;
                        if(current[1]==0) { yspeed = -D_SPEED; }
                        else if(current[1]==1) { xspeed = D_SPEED; }
                        else if(current[1]==2) { yspeed = D_SPEED; }
                        else if(current[1]==3) { xspeed = -D_SPEED; }
                        var x = current[2] + xspeed;
                        var y = current[3] + yspeed;
                        var z = current[3] + 120;
                        if(current[2]>0&&x>PLAYGROUND_WIDTH||current[2]<0&&x<-GRID_SIZE||
                           current[3]>0&&y>PLAYGROUND_HEIGHT||current[3]<0&&y<-GRID_SIZE) {
                            remove_bot(i, self);
                        } else {
                            if(current[7]!=current[1]) {
                                self.setAnimation(colors[current[0]][current[1]]);
                                bots[i][7] = current[1];
                            }
                            if(self.css({"left": ""+(x)+"px", "top": ""+(y)+"px", "z-index": z})) {
                                bots[i][2] = x;
                                bots[i][3] = y;
                                bots[i][4] = z;
                                bots[i][8]++;
                            }
                        }
                    }
                }
                $("#debug").html(dump(arrows));
                $(".bot").each(function(){
                    var b_id = $(this).attr("id").substr(1);
                    var collision = false;
                    var c_bot = bots[b_id];
                    var b_x = c_bot[2];
                    var b_y = c_bot[3];
                    var b_d = c_bot[1];
                    $(this).collision(".arrow,#arrows").each(function(){
                        //Many thanks to Selim Arsever for this fix!
                        var a_id = $(this).attr("id").substr(1);
                        var piece = arrows[a_id];
                        var a_v = piece[0];
                        if(a_v==1) {
                            var a_x = piece[2];
                            var a_y = piece[3];
                            var d_x = b_x-a_x;
                            var d_y = b_y-a_y;
                            if(d_x>=4&&d_x<=5&&d_y>=1&&d_y<=2) {
                                //bots - color, dir, x, y, z, spawned?, spawnerid, prevd
                                bots[b_id][7] = c_bot[1];
                                bots[b_id][1] = piece[1];
                                collision = true;
                            }
                        }
                    });
                    if(!collision) {
                        $(this).collision(".wall,#level").each(function(){
                            var w_id = $(this).attr("id").substr(1);
                            var piece = pieces[w_id];
                            var w_x = piece[1];
                            var w_y = piece[2];
                            d_x = b_x-w_x;
                            d_y = b_y-w_y;
                            if(b_d==0&&d_x>=4&&d_x<=5&&d_y>=27&&d_y<=28) { kill_bot(b_id); collision = true; } //4 // 33
                            if(b_d==1&&d_x>=-12&&d_x<=-11&&d_y>=21&&d_y<=22) { kill_bot(b_id); collision = true; } //-14 // 21
                            if(b_d==2&&d_x>=4&&d_x<=5&&d_y>=-9&&d_y<=-8) { kill_bot(b_id); collision = true; } //4 // -9
                            if(b_d==3&&d_x>=22&&d_x<=23&&d_y>=20&&d_y<=21) { kill_bot(b_id); collision = true; } //22 // 21
                        });
                    }
                    if(!collision&&c_bot[8]>GRID_MOVE) {
                        $(this).collision(".spawn,#level").each(function(){
                            var s_id = $(this).attr("id").substr(1);
                            var piece = pieces[s_id];
                            var s_x = piece[1];
                            var s_y = piece[2];
                            d_x = b_x-s_x;
                            d_y = b_y-s_y;
                            if(b_d==0&&d_x>=4&&d_x<=5&&d_y>=19&&d_y<=20) { kill_bot(b_id); collision = true; } //4 // 33
                            if(b_d==1&&d_x>=-14&&d_x<=-13&&d_y>=11&&d_y<=12) { kill_bot(b_id); collision = true; } //-14 // 21
                            if(b_d==2&&d_x>=4&&d_x<=5&&d_y>=-11&&d_y<=-10) { kill_bot(b_id); collision = true; } //4 // -9
                            if(b_d==3&&d_x>=22&&d_x<=23&&d_y>=11&&d_y<=12) { kill_bot(b_id); collision = true; } //22 // 21*/
                        });
                    }
                    if(!collision) {
                        $(this).collision(".exit,#level").each(function(){
                            var e_id = $(this).attr("id").substr(1);
                            var piece = pieces[e_id];
                            var e_x = piece[1];
                            var e_y = piece[2];
                            d_x = b_x-e_x;
                            d_y = b_y-e_y;
                            if(d_x>=4&&d_x<=5&&d_y>=1&&d_y<=2) {
                                current_bots++;
                                bots[b_id] = false;
                                $("#current_bots").html(current_bots);
                                $("#b" + b_id).setAnimation(exit[2], function(node){$(node).fadeOut(200)});
                            }
                        });
                    }
                    if(!collision) {
                        $(this).collision(".bot,#level").each(function(){
                            var bd_id = $(this).attr("id").substr(1);
                            if(bd_id!=b_id) {
                                var piece = bots[bd_id];
                                var bd_x = piece[2];
                                var bd_y = piece[3];
                                d_x = b_x-bd_x;
                                d_y = b_y-bd_y;
                                if(d_x>=0&&d_x<=2&&d_y>=0&&d_y<=2) {
                                    kill_bot(b_id);
                                    kill_bot(bd_id);
                                    collision = true;
                                }
                            }
                        });
                    }
                });
            }
        }, REFRESH_RATE);

    Many thanks,
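
    One change that usually pays off regardless of the game logic (a sketch, not tested against this code): avoid rebuilding jQuery objects inside the hot loop. $(this) and .attr("id") are called several times per bot per tick, while plain DOM access is far cheaper:

        $(".bot").each(function () {
            var el = this;                  // raw DOM node, no jQuery wrapper
            var b_id = el.id.substr(1);     // reads the id without $(this).attr()
            var c_bot = bots[b_id];
            // ... run the collision checks with c_bot as before ...
        });

    The same applies inside the .collision() callbacks. A second, bigger lever is a broad phase: bucket bots and obstacles into a coarse grid keyed by Math.floor(x / GRID_SIZE) and only run .collision() against objects in the same or neighboring cells, so the per-tick work stops growing quadratically with the number of bots.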

  • How to fix - Parse error: syntax error, unexpected T_CLASS

    - by user557015
    Hi, I'm new to PHP and I'm making a forum. All the files work except one, add_topic.php. It gives me an error saying:

        Parse error: syntax error, unexpected T_CLASS in /home/a3885465/public_html/add_topic.php on line 25

    I know the problem is probably on these lines:

        </span>}<span class="s4"><br>
        </span>else{<span class="s4"><br>
        </span>echo "ERROR";<span class="s4"><br>
        </span>}<span class="s4"><br>
        </span>mysql_close();<span class="s4"><br>

    but the whole code is below just in case. If you have any idea, it would be really appreciated, thanks! The code for add_topic.php:

        <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
        <html>
        <head>
        <meta http-equiv="Content-Type" content="text/html; charset=utf-8">
        <meta http-equiv="Content-Style-Type" content="text/css">
        <title></title>
        <meta name="Generator" content="Cocoa HTML Writer">
        <meta name="CocoaVersion" content="1038.32">
        <style type="text/css">
        p.p1 {margin: 0.0px 0.0px 12.0px 0.0px; font: 12.0px Verdana; color: #009901}
        p.p2 {margin: 0.0px 0.0px 12.0px 0.0px; font: 12.0px Verdana; color: #3a00ff}
        span.s1 {color: #3a00ff}
        span.s2 {font: 12.0px 'Lucida Grande'; color: #3a00ff}
        span.s3 {font: 13.0px Courier; color: #404040}
        span.s4 {font: 12.0px 'Lucida Grande'}
        span.s5 {color: #009901}
        td.td1 {width: 566.0px; margin: 0.5px 0.5px 0.5px 0.5px}
        </style>
        </head>
        <body>
        <table cellspacing="0" cellpadding="0">
        <tbody>
        <tr>
        <td valign="middle" class="td1">
        <p class="p1"><span class="s1"></span><?php<span class="s2"><br>
        </span><span class="s1">$host="</span><span class="s3">"host"</span><span class="s1">";</span> // Host name <span class="s4"><br>
        </span><span class="s1">$username="</span><span class="s3">username</span><span class="s1">";</span> // Mysql username <span class="s4"><br>
        </span><span class="s1">$password="password";</span> // Mysql password <span class="s4"><br>
        </span><span class="s1">$db_name="</span><span class="s3">database_name</span><span class="s1">";</span> // Database name <span class="s4"><br>
        </span><span class="s1">$tbl_name="forum_question";</span> // Table name</p>
        <p class="p2"><span class="s5">// Connect to server and select database.</span><span class="s4"><br>
        </span>mysql_connect("$host", "$username", "$password")or die("cannot connect"); <span class="s4"><br>
        </span>mysql_select_db("$db_name")or die("cannot select DB");</p>
        <p class="p2"><span class="s5">// get data that sent from form </span><span class="s4"><br>
        </span>$topic=$_POST['topic'];<span class="s4"><br>
        </span>$detail=$_POST['detail'];<span class="s4"><br>
        </span>$name=$_POST['name'];<span class="s4"><br>
        </span>$email=$_POST['email'];<span class="s4"><br>
        <br>
        </span>$datetime=date("d/m/y h:i:s"); <span class="s5">//create date time</span><span class="s4"><br>
        <br>
        </span>$sql="INSERT INTO $tbl_name(topic, detail, name, email, datetime)VALUES('$topic', '$detail', '$name', '$email', '$datetime')";<span class="s4"><br>
        </span>$result=mysql_query($sql);<span class="s4"><br>
        <br>
        </span>if($result){<span class="s4"><br>
        </span>echo "Successful<BR>";<span class="s4"><br>
        </span>echo "<a href=main_forum.php>View your topic</a>";<span class="s4"><br>
        </span>}<span class="s4"><br>
        </span>else{<span class="s4"><br>
        </span>echo "ERROR";<span class="s4"><br>
        </span>}<span class="s4"><br>
        </span>mysql_close();<span class="s4"><br>
        </span>?></p>
        </td>
        </tr>
        </tbody>
        </table>
        </body>
        </html>

    Thanks again!
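
    For contrast, here is roughly what the file needs to look like for PHP to parse it (a reconstruction from the markup above, not the original file): the parse error happens because a WYSIWYG editor ("Cocoa HTML Writer") saved the source as HTML, and PHP trips over the class= attributes inside the <span> tags. Saving the script as plain text cures it:

        <?php
        $host = "host";               // Host name
        $username = "username";       // Mysql username
        $password = "password";       // Mysql password
        $db_name = "database_name";   // Database name
        $tbl_name = "forum_question"; // Table name

        // Connect to server and select database.
        mysql_connect("$host", "$username", "$password") or die("cannot connect");
        mysql_select_db("$db_name") or die("cannot select DB");

        // get data that was sent from the form
        $topic  = $_POST['topic'];
        $detail = $_POST['detail'];
        $name   = $_POST['name'];
        $email  = $_POST['email'];
        $datetime = date("d/m/y h:i:s"); // create date time

        $sql = "INSERT INTO $tbl_name(topic, detail, name, email, datetime)
                VALUES('$topic', '$detail', '$name', '$email', '$datetime')";
        $result = mysql_query($sql);

        if ($result) {
            echo "Successful<BR>";
            echo "<a href=main_forum.php>View your topic</a>";
        } else {
            echo "ERROR";
        }
        mysql_close();
        ?>

    (The unescaped $_POST values are kept exactly as in the original; a real deployment would at least run them through mysql_real_escape_string().)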

  • How do I implement AABB ray cast hit checking for OpenGL ES on the iPhone

    - by Big Fizzy
    Basically, I draw a 3D cube and I can spin it around, but I want to be able to touch it and know where on the cube's surface the user touched. I'm using the code below for setting up, generating and spinning. It's based on the Molecules code and NeHe tutorial #5. Any help, links, tutorials and code would be greatly appreciated. I have lots of development experience, but nothing much in the way of OpenGL and 3D.

        //
        //  GLViewController.h
        //  NeHe Lesson 05
        //
        //  Created by Jeff LaMarche on 12/12/08.
        //  Copyright Jeff LaMarche Consulting 2008. All rights reserved.
        //
        #import "GLViewController.h"
        #import "GLView.h"

        @implementation GLViewController

        - (void)drawBox
        {
            static const GLfloat cubeVertices[] = {
                -1.0f, 1.0f, 1.0f,
                 1.0f, 1.0f, 1.0f,
                 1.0f,-1.0f, 1.0f,
                -1.0f,-1.0f, 1.0f,
                -1.0f, 1.0f,-1.0f,
                 1.0f, 1.0f,-1.0f,
                 1.0f,-1.0f,-1.0f,
                -1.0f,-1.0f,-1.0f
            };

            static const GLubyte cubeNumberOfIndices = 36;

            const GLubyte cubeVertexFaces[] = {
                0, 1, 5, // Half of top face
                0, 5, 4, // Other half of top face
                4, 6, 5, // Half of front face
                4, 6, 7, // Other half of front face
                0, 1, 2, // Half of back face
                0, 3, 2, // Other half of back face
                1, 2, 5, // Half of right face
                2, 5, 6, // Other half of right face
                0, 3, 4, // Half of left face
                7, 4, 3, // Other half of left face
                3, 6, 2, // Half of bottom face
                6, 7, 3, // Other half of bottom face
            };

            const GLubyte cubeFaceColors[] = {
                0, 255, 0, 255,
                255, 125, 0, 255,
                255, 0, 0, 255,
                255, 255, 0, 255,
                0, 0, 255, 255,
                255, 0, 255, 255
            };

            glEnableClientState(GL_VERTEX_ARRAY);
            glVertexPointer(3, GL_FLOAT, 0, cubeVertices);
            int colorIndex = 0;
            for(int i = 0; i < cubeNumberOfIndices; i += 3) {
                glColor4ub(cubeFaceColors[colorIndex], cubeFaceColors[colorIndex+1],
                           cubeFaceColors[colorIndex+2], cubeFaceColors[colorIndex+3]);
                int face = (i / 3.0);
                if (face%2 != 0.0)
                    colorIndex+=4;
                glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_BYTE, &cubeVertexFaces[i]);
            }
            glDisableClientState(GL_VERTEX_ARRAY);
        }

        //move this to a data model later!
        - (GLfixed)floatToFixed:(GLfloat)aValue;
        {
            return (GLfixed) (aValue * 65536.0f);
        }

        - (void)drawViewByRotatingAroundX:(float)xRotation
                          rotatingAroundY:(float)yRotation
                                  scaling:(float)scaleFactor
                           translationInX:(float)xTranslation
                           translationInY:(float)yTranslation
                                     view:(GLView*)view;
        {
            glMatrixMode(GL_MODELVIEW);

            GLfixed currentModelViewMatrix[16] = {
                 45146, 47441,   2485, 0,
                -25149, 26775, -54274, 0,
                -40303, 36435,  36650, 0,
                     0,     0,      0, 65536
            };
            /*
            GLfixed currentModelViewMatrix[16] = {
                0, 0, 0, 0,
                0, 0, 0, 0,
                0, 0, 0, 0,
                0, 0, 0, 65536
            };
            */

            //glLoadIdentity();
            //glOrthof(-1.0f, 1.0f, -1.5f, 1.5f, -10.0f, 4.0f);

            // Reset rotation system
            if (isFirstDrawing) {
                //glLoadIdentity();
                glMultMatrixx(currentModelViewMatrix);
                [self configureLighting];
                isFirstDrawing = NO;
            }

            // Scale the view to fit current multitouch scaling
            GLfixed fixedPointScaleFactor = [self floatToFixed:scaleFactor];
            glScalex(fixedPointScaleFactor, fixedPointScaleFactor, fixedPointScaleFactor);

            // Perform incremental rotation based on current angles in X and Y
            glGetFixedv(GL_MODELVIEW_MATRIX, currentModelViewMatrix);
            GLfloat totalRotation = sqrt(xRotation*xRotation + yRotation*yRotation);
            glRotatex([self floatToFixed:totalRotation],
                (GLfixed)((xRotation/totalRotation) * (GLfloat)currentModelViewMatrix[1] + (yRotation/totalRotation) * (GLfloat)currentModelViewMatrix[0]),
                (GLfixed)((xRotation/totalRotation) * (GLfloat)currentModelViewMatrix[5] + (yRotation/totalRotation) * (GLfloat)currentModelViewMatrix[4]),
                (GLfixed)((xRotation/totalRotation) * (GLfloat)currentModelViewMatrix[9] + (yRotation/totalRotation) * (GLfloat)currentModelViewMatrix[8])
            );

            // Translate the model by the accumulated amount
            glGetFixedv(GL_MODELVIEW_MATRIX, currentModelViewMatrix);
            float currentScaleFactor = sqrt(
                pow((GLfloat)currentModelViewMatrix[0] / 65536.0f, 2.0f) +
                pow((GLfloat)currentModelViewMatrix[1] / 65536.0f, 2.0f) +
                pow((GLfloat)currentModelViewMatrix[2] / 65536.0f, 2.0f));
            xTranslation = xTranslation / (currentScaleFactor * currentScaleFactor);
            yTranslation = yTranslation / (currentScaleFactor * currentScaleFactor);

            // Grab the current model matrix, and use the (0,4,8) components to figure
            // the eye's X axis in the model coordinate system, translate along that
            glTranslatef(xTranslation * (GLfloat)currentModelViewMatrix[0] / 65536.0f,
                         xTranslation * (GLfloat)currentModelViewMatrix[4] / 65536.0f,
                         xTranslation * (GLfloat)currentModelViewMatrix[8] / 65536.0f);

            // Grab the current model matrix, and use the (1,5,9) components to figure
            // the eye's Y axis in the model coordinate system, translate along that
            glTranslatef(yTranslation * (GLfloat)currentModelViewMatrix[1] / 65536.0f,
                         yTranslation * (GLfloat)currentModelViewMatrix[5] / 65536.0f,
                         yTranslation * (GLfloat)currentModelViewMatrix[9] / 65536.0f);

            // Black background, with depth buffer enabled
            glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

            [self drawBox];
        }

        - (void)configureLighting;
        {
            const GLfixed lightAmbient[] = {13107, 13107, 13107, 65535};
            const GLfixed lightDiffuse[] = {65535, 65535, 65535, 65535};
            const GLfixed matAmbient[] = {65535, 65535, 65535, 65535};
            const GLfixed matDiffuse[] = {65535, 65535, 65535, 65535};
            const GLfixed lightPosition[] = {30535, -30535, 0, 0};
            const GLfixed lightShininess = 20;

            glEnable(GL_LIGHTING);
            glEnable(GL_LIGHT0);
            glEnable(GL_COLOR_MATERIAL);
            glMaterialxv(GL_FRONT_AND_BACK, GL_AMBIENT, matAmbient);
            glMaterialxv(GL_FRONT_AND_BACK, GL_DIFFUSE, matDiffuse);
            glMaterialx(GL_FRONT_AND_BACK, GL_SHININESS, lightShininess);
            glLightxv(GL_LIGHT0, GL_AMBIENT, lightAmbient);
            glLightxv(GL_LIGHT0, GL_DIFFUSE, lightDiffuse);
            glLightxv(GL_LIGHT0, GL_POSITION, lightPosition);
            glEnable(GL_DEPTH_TEST);
            glShadeModel(GL_SMOOTH);
            glEnable(GL_NORMALIZE);
        }

        - (void)setupView:(GLView*)view
        {
            const GLfloat zNear = 0.1, zFar = 1000.0, fieldOfView = 60.0;
            GLfloat size;
            glMatrixMode(GL_PROJECTION);
            glEnable(GL_DEPTH_TEST);
            size = zNear * tanf(DEGREES_TO_RADIANS(fieldOfView) / 2.0);
            CGRect rect = view.bounds;
            glFrustumf(-size, size,
                       -size / (rect.size.width / rect.size.height),
                        size / (rect.size.width / rect.size.height),
                       zNear, zFar);
            glViewport(0, 0, rect.size.width, rect.size.height);
            glScissor(0, 0, rect.size.width, rect.size.height);
            glMatrixMode(GL_MODELVIEW);
            glLoadIdentity();
            glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
            glTranslatef(0.0f, 0.0f, -6.0f);
            isFirstDrawing = YES;
        }

        - (void)didReceiveMemoryWarning
        {
            [super didReceiveMemoryWarning];
        }

        - (void)dealloc
        {
            [super dealloc];
        }

        @end
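
    Since the question is specifically about AABB ray casting, here is a minimal sketch of the standard slab test in plain C (names are illustrative, not from this project). The ray origin and direction must already be transformed into the cube's model space; on OpenGL ES that means inverting the modelview matrix yourself and pushing the touch point through it, since there is no gluUnProject:

        #include <math.h>
        #include <stdbool.h>

        typedef struct { float x, y, z; } Vec3;

        // Slab-method ray/AABB intersection. On a hit, *tNear receives the
        // ray parameter of the entry point (hit = origin + tNear * dir).
        bool rayHitsAABB(Vec3 origin, Vec3 dir, Vec3 boxMin, Vec3 boxMax, float *tNear)
        {
            float tmin = -INFINITY, tmax = INFINITY;
            float o[3]  = { origin.x, origin.y, origin.z };
            float d[3]  = { dir.x, dir.y, dir.z };
            float lo[3] = { boxMin.x, boxMin.y, boxMin.z };
            float hi[3] = { boxMax.x, boxMax.y, boxMax.z };

            for (int i = 0; i < 3; i++) {
                if (fabsf(d[i]) < 1e-8f) {
                    // Ray parallel to this slab: miss unless the origin lies inside it
                    if (o[i] < lo[i] || o[i] > hi[i]) return false;
                } else {
                    float t1 = (lo[i] - o[i]) / d[i];
                    float t2 = (hi[i] - o[i]) / d[i];
                    if (t1 > t2) { float t = t1; t1 = t2; t2 = t; }
                    if (t1 > tmin) tmin = t1;
                    if (t2 < tmax) tmax = t2;
                    if (tmin > tmax) return false;  // slab intervals don't overlap
                }
            }
            if (tmax < 0) return false;             // box entirely behind the ray
            *tNear = (tmin >= 0) ? tmin : tmax;
            return true;
        }

    For the unit cube above, boxMin = (-1,-1,-1) and boxMax = (1,1,1), and the face that was hit can be recovered from which slab produced tmin.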

  • Send email with PHP script -> How to display umlauts?

    - by Sebi
    I use this PHP script to send an email. It works well, but German umlauts (ö, ä, ü, etc.) are not displayed correctly. Any hints on how to change that?

        <?php
        /* Enter your e-mail address between the two quotes: */
        $_emails[0] = "[email protected]";

        // If no $_POST data was submitted, abort
        if(!isset($_POST) OR empty($_POST)) {
            header("Content-type: text/plain");
            echo "Es wurden keine Daten übermittelt!";
            exit;
        } else {
            // Determine date, time and the path to this script
            $date = date("d.m.Y");
            $time = date("H:i");
            $host = "http://" . $_SERVER['HTTP_HOST'] . $_SERVER['PHP_SELF'];

            // Determine the recipient and check it for validity
            if(isset($_POST['recipient']) AND isset($_emails[ $_POST['recipient'] ])
                AND preg_match("/^.*@.*\..*$/", $_emails[ $_POST['recipient'] ])) {
                $recipient = $_emails[ $_POST['recipient'] ];
            }
            // If no (valid) recipient was given, try $_emails[0]
            elseif(isset($_emails[0]) AND preg_match("/^.*@.*\..*$/", $_emails[0])) {
                $recipient = $_emails[0];
            }
            // If this address is also invalid, abort with an error message
            else {
                header("Content-type: text/plain");
                echo "Fehler im Script - es wurde kein Empfänger oder eine ungültige E-Mail Adresse in \ angegeben.";
                exit;
            }

            // If a subject was submitted, use it
            if(isset($_POST['subject'])) {
                $subject = $_POST['subject'];
            }
            // otherwise use a default subject
            else {
                $subject = "Formular Daten von {$_SERVER['HTTP_HOST']}";
            }

            // Generate the e-mail header
            $email = "Formular Eintrag\n"
                . "\n"
                . "Am $date um $time Uhr hast das Script auf $host Formulardaten empfangen,\n"
                . "welche nach Angabe des Browsers von {$_SERVER['HTTP_REFERER']} stammen.\n"
                . "\n"
                . "Der Formular Inhalt wird nachfolgend wiedergegeben.\n"
                . "\n";

            // Append all $_POST values to the e-mail body
            foreach($_POST as $key => $value) {
                if($key == "redirect" OR $key == "recipient" OR $key == "subject") { continue; }
                $email .= "Fomular Feld '$key':\n"
                    . "=============================\n"
                    . "$value\n"
                    . "\n";
            }

            // Append the e-mail footer
            $email .= "=============================\n"
                . "Ende der automatisch generierten E-Mail.";

            // Try to send the e-mail
            if(!mail($recipient, $subject, $email)) {
                // If this failed, print an error message
                echo "Es ist ein Fehler beim Versenden der E-Mail aufgetreten,"
                    . " eventuell liegt ein Konfigurationsfehler am Server vor.\n\n";
                exit;
            }

            // If requested, redirect to a confirmation page
            if(isset($_POST['redirect']) AND preg_match("=^(http|ftp)://.*\..*$=", $_POST['redirect'])) {
                header("Location: ".$_POST['redirect']);
                exit;
            } else {
                header("Content-type: text/html");
                echo "Die E-Mail wurde erfolgreich versendet.";
                echo '<br>';
                echo '<a href="http://www.ovlu.li/cms/index.php?page=kontakt">Zurueck</a>';
                exit;
            }
        }
        ?>

    I followed the hint in the first answer, and the code now looks like this:

        <?php
        /* Enter your e-mail address between the two quotes: */
        $_emails[0] = "[email protected]";

        // If no $_POST data was submitted, abort
        if(!isset($_POST) OR empty($_POST)) {
            header("Content-type: text/plain; charset=utf-8");
            echo "Es wurden keine Daten übermittelt!";
            exit;
        } else {
            // Determine date, time and the path to this script
            $date = date("d.m.Y");
            $time = date("H:i");
            $host = "http://" . $_SERVER['HTTP_HOST'] . $_SERVER['PHP_SELF'];

            // Determine the recipient and check it for validity
            if(isset($_POST['recipient']) AND isset($_emails[ $_POST['recipient'] ])
                AND preg_match("/^.*@.*\..*$/", $_emails[ $_POST['recipient'] ])) {
                $recipient = $_emails[ $_POST['recipient'] ];
            }
            // If no (valid) recipient was given, try $_emails[0]
            elseif(isset($_emails[0]) AND preg_match("/^.*@.*\..*$/", $_emails[0])) {
                $recipient = $_emails[0];
            }
            // If this address is also invalid, abort with an error message
            else {
                header("Content-type: text/plain");
                echo "Fehler im Script - es wurde kein Empfänger oder eine ungültige E-Mail Adresse in \ angegeben.";
                exit;
            }

            // If a subject was submitted, use it
            if(isset($_POST['subject'])) {
                $subject = $_POST['subject'];
            }
            // otherwise use a default subject
            else {
                $subject = "Formular Daten von {$_SERVER['HTTP_HOST']}";
            }

            // Generate the e-mail header
            $email = "Formular Eintrag\n"
                . "\n"
                . "Am $date um $time Uhr hast das Script auf $host Formulardaten empfangen,\n"
                . "welche nach Angabe des Browsers von {$_SERVER['HTTP_REFERER']} stammen.\n"
                . "\n"
                . "Der Formular Inhalt wird nachfolgend wiedergegeben.\n"
                . "\n";

            // Append all $_POST values to the e-mail body
            foreach($_POST as $key => $value) {
                if($key == "redirect" OR $key == "recipient" OR $key == "subject") { continue; }
                $email .= "Fomular Feld '$key':\n"
                    . "=============================\n"
                    . "$value\n"
                    . "\n";
            }

            // Append the e-mail footer
            $email .= "=============================\n"
                . "Ende der automatisch generierten E-Mail.";

            $email = htmlentities($email, ENT_QUOTES, 'uft-8');

            // Try to send the e-mail
            if(!mail($recipient, $subject, $email)) {
                // If this failed, print an error message
                echo "Es ist ein Fehler beim Versenden der E-Mail aufgetreten,"
                    . " eventuell liegt ein Konfigurationsfehler am Server vor.\n\n";
                exit;
            }

            // If requested, redirect to a confirmation page
            if(isset($_POST['redirect']) AND preg_match("=^(http|ftp)://.*\..*$=", $_POST['redirect'])) {
                header("Location: ".$_POST['redirect']);
                exit;
            }
            // otherwise print a confirmation
            else {
                header("Content-type: text/html");
                echo "Die E-Mail wurde erfolgreich versendet.";
                echo '<br>';
                echo '<a href="http://foto.roser.li/admin/index.php?page=kontakt">Zurueck</a>';
                exit;
            }
        }
        ?>

    Now, when I send the email, the following message is displayed:

        Warning: htmlentities(): charset `uft-8' not supported, assuming iso-8859-1
        in /home/www/web21/html/roser/foto/admin/mail.php on line 77
        Die E-Mail wurde erfolgreich versendet.
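
    The warning itself is just the 'uft-8' typo ('utf-8' was meant), but htmlentities() is the wrong tool for a plain-text mail body in any case. What usually makes umlauts arrive intact is declaring the charset in the mail headers (a sketch, assuming the script file itself is saved as UTF-8):

        <?php
        // Tell the receiving mail client how the body is encoded
        $headers  = "MIME-Version: 1.0\r\n";
        $headers .= "Content-Type: text/plain; charset=UTF-8\r\n";
        $headers .= "Content-Transfer-Encoding: 8bit\r\n";

        // The subject line needs its own MIME encoding to carry non-ASCII
        $encodedSubject = "=?UTF-8?B?" . base64_encode($subject) . "?=";

        mail($recipient, $encodedSubject, $email, $headers);
        ?>

    If the file is saved as ISO-8859-1 instead, declare charset=ISO-8859-1 in the header rather than converting anything.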


  • Reading a child process's /proc/pid/mem file from the parent

    - by Amittai Aviram
    In the program below, I am trying to cause the following to happen:

    1. Process A assigns a value to a stack variable a.
    2. Process A (parent) creates process B (child) with PID child_pid.
    3. Process B calls function func1, passing a pointer to a.
    4. Process B changes the value of variable a through the pointer.
    5. Process B opens its /proc/self/mem file, seeks to the page containing a, and prints the new value of a.
    6. Process A (at the same time) opens /proc/child_pid/mem, seeks to the right page, and prints the new value of a.

    The problem is that, in step 6, the parent only sees the old value of a in /proc/child_pid/mem, while the child can indeed see the new value in its /proc/self/mem. Why is this the case? Is there any way that I can get the parent to see the child's changes to its address space through the /proc filesystem?

        #include <fcntl.h>
        #include <stdbool.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <sys/types.h>
        #include <sys/stat.h>
        #include <sys/wait.h>
        #include <unistd.h>

        #define PAGE_SIZE 0x1000
        #define LOG_PAGE_SIZE 0xc
        #define PAGE_ROUND_DOWN(v) ((v) & (~(PAGE_SIZE - 1)))
        #define PAGE_ROUND_UP(v) (((v) + PAGE_SIZE - 1) & (~(PAGE_SIZE - 1)))
        #define OFFSET_IN_PAGE(v) ((v) & (PAGE_SIZE - 1))

        #if defined ARCH && ARCH == 32
        #define BP "ebp"
        #define SP "esp"
        #else
        #define BP "rbp"
        #define SP "rsp"
        #endif

        typedef struct arg_t {
            int a;
        } arg_t;

        void func1(void * data)
        {
            arg_t * arg_ptr = (arg_t *)data;
            printf("func1: old value: %d\n", arg_ptr->a);
            arg_ptr->a = 53;
            printf("func1: address: %p\n", &arg_ptr->a);
            printf("func1: new value: %d\n", arg_ptr->a);
        }

        void explore_proc_mem(void (*fn)(void *), void * data)
        {
            off_t frame_pointer, stack_start;
            char buffer[PAGE_SIZE];
            const char * path = "/proc/self/mem";
            int child_pid, status;
            int parent_to_child[2];
            int child_to_parent[2];
            arg_t * arg_ptr;
            off_t child_offset;

            asm volatile ("mov %%"BP", %0" : "=m" (frame_pointer));
            stack_start = PAGE_ROUND_DOWN(frame_pointer);
            printf("Stack_start: %lx\n", (unsigned long)stack_start);
            arg_ptr = (arg_t *)data;
            child_offset = OFFSET_IN_PAGE((off_t)&arg_ptr->a);
            printf("Address of arg_ptr->a: %p\n", &arg_ptr->a);
            pipe(parent_to_child);
            pipe(child_to_parent);

            bool msg;
            int child_mem_fd;
            char child_path[0x20];

            child_pid = fork();
            if (child_pid == -1) {
                perror("fork");
                exit(EXIT_FAILURE);
            }
            if (!child_pid) {
                close(child_to_parent[0]);
                close(parent_to_child[1]);
                printf("CHILD (pid %d, parent pid %d).\n", getpid(), getppid());
                fn(data);
                msg = true;
                write(child_to_parent[1], &msg, 1);
                child_mem_fd = open("/proc/self/mem", O_RDONLY);
                if (child_mem_fd == -1) {
                    perror("open (child)");
                    exit(EXIT_FAILURE);
                }
                printf("CHILD: child_mem_fd: %d\n", child_mem_fd);
                if (lseek(child_mem_fd, stack_start, SEEK_SET) == (off_t)-1) {
                    perror("lseek");
                    exit(EXIT_FAILURE);
                }
                if (read(child_mem_fd, buffer, sizeof(buffer)) != sizeof(buffer)) {
                    perror("read");
                    exit(EXIT_FAILURE);
                }
                printf("CHILD: new value %d\n", *(int *)(buffer + child_offset));
                read(parent_to_child[0], &msg, 1);
                exit(EXIT_SUCCESS);
            } else {
                printf("PARENT (pid %d, child pid %d)\n", getpid(), child_pid);
                printf("PARENT: child_offset: %lx\n", child_offset);
                read(child_to_parent[0], &msg, 1);
                printf("PARENT: message from child: %d\n", msg);
                snprintf(child_path, 0x20, "/proc/%d/mem", child_pid);
                printf("PARENT: child_path: %s\n", child_path);
                child_mem_fd = open(path, O_RDONLY); // note: opens 'path' ("/proc/self/mem"), not the child_path built above
                if (child_mem_fd == -1) {
                    perror("open (child)");
                    exit(EXIT_FAILURE);
                }
                printf("PARENT: child_mem_fd: %d\n", child_mem_fd);
                if (lseek(child_mem_fd, stack_start, SEEK_SET) == (off_t)-1) {
                    perror("lseek");
                    exit(EXIT_FAILURE);
                }
                if (read(child_mem_fd, buffer, sizeof(buffer)) != sizeof(buffer)) {
                    perror("read");
                    exit(EXIT_FAILURE);
                }
                printf("PARENT: new value %d\n", *(int *)(buffer + child_offset));
                close(child_mem_fd);
                printf("ENDING CHILD PROCESS.\n");
                write(parent_to_child[1], &msg, 1);
                if (waitpid(child_pid, &status, 0) == -1) {
                    perror("waitpid");
                    exit(EXIT_FAILURE);
                }
            }
        }

        int main(void)
        {
            arg_t arg;
            arg.a = 42;
            printf("In main: address of arg.a: %p\n", &arg.a);
            explore_proc_mem(&func1, &arg.a);
            return EXIT_SUCCESS;
        }

    This program produces the output below. Notice that the value of a differs between the parent's and the child's reading of the /proc/child_pid/mem file:

        In main: address of arg.a: 0x7ffffe1964f0
        Stack_start: 7ffffe196000
        Address of arg_ptr->a: 0x7ffffe1964f0
        PARENT (pid 20376, child pid 20377)
        PARENT: child_offset: 4f0
        CHILD (pid 20377, parent pid 20376).
        func1: old value: 42
        func1: address: 0x7ffffe1964f0
        func1: new value: 53
        PARENT: message from child: 1
        CHILD: child_mem_fd: 4
        PARENT: child_path: /proc/20377/mem
        CHILD: new value 53
        PARENT: child_mem_fd: 7
        PARENT: new value 42
        ENDING CHILD PROCESS.

  • Display the most recently inserted database content using the last-insert ID

    - by user2330772
    What I am doing: I display the last-inserted data when the form is submitted; the form is multipart/form-data. I read the form data with jQuery and send it to a PHP file via an Ajax POST. That PHP file inserts the data into a DB table, which is where I get the ID of the inserted row. On success of the Ajax call I pass that ID to another PHP file, which uses it to display the last-inserted data.

    My form is:

        <form method="post" enctype="multipart/form-data" name="upload_form" id="data">
            <select id="sel">
                <option>Select the Project Stream</option>
                <option value="1">Computer Science</option>
                <option value="2">Mechanical</option>
                <option value="3">IT</option>
                <option value="4">Web Development</option>
                <option value="5">MCA</option>
                <option value="6">Civil</option>
            </select><br />
            <input type="text" id="title" placeholder="Project Title"/><br />
            <input type="text" id="vurl" placeholder="If You have any video about project write your video url path here" style="width:435px;"/><br />
            <textarea id="prjdesc" name="prjdesc" rows="20" cols="80" style="border-style:groove;box-shadow: 10px 10px 10px 10px #888888;" placeholder="Please describe Your Project"></textarea>
            <label for="file">Filename:</label>
            <input type="file" name="file" id="file"/><br />
            <button>Submit</button>
        </form>

    My js file:

        $("form#data").submit(function() {
            alert("update");
            var sid = $("#sel").val();
            alert(sid);
            var ttle = $("#title").val();
            alert(ttle);
            var text = $("#prjdesc").val();
            var vurl = $("#vurl").val();
            /*var dataString = 'param='+text+'&param1='+vurl+'&param2='+ttle+'&param3='+id;*/
            var formData = new FormData($(this)[0]);
            formData.append('param', text);
            formData.append('param1', vurl);
            formData.append('param2', ttle);
            formData.append('param3', sid);
            $.ajax({
                type: 'POST',
                data: formData,
                url: 'insert.php',
                success: function(id) {
                    alert(id);
                    window.location = "another.php?id=" + id;
                },
                cache: false,
                contentType: false,
                processData: false
            });
            return false;
        });

    insert.php:

        <?php
        print_r($_FILES);
        $desc = $_POST['param'];
        echo $desc;
        $video = $_POST['param1'];
        echo $video;
        $title = $_POST['param2'];
        echo $title;
        $tech_id = $_POST['param3'];
        echo $tech_id;

        $host = "localhost";
        $username = "root";
        $password = "";
        $db_name = "geny";
        $tbl_name = "project_details";
        mysql_connect("$host", "$username", "$password") or die("cannot connect");
        mysql_select_db("$db_name") or die("cannot select DB");

        $allowedExts = array("gif", "jpeg", "jpg", "png");
        $extension = end(explode(".", $_FILES["file"]["name"]));
        $url_dir = "C:/wamp/www/WebsiteTemplate4/upload/";

        if ((($_FILES["file"]["type"] == "image/gif")
            || ($_FILES["file"]["type"] == "image/jpeg")
            || ($_FILES["file"]["type"] == "image/jpg")
            || ($_FILES["file"]["type"] == "image/pjpeg")
            || ($_FILES["file"]["type"] == "image/x-png")
            || ($_FILES["file"]["type"] == "image/png"))
            && ($_FILES["file"]["size"] < 50000)
            && in_array($extension, $allowedExts)) {
            if ($_FILES["file"]["error"] > 0) {
                echo "Return Code: " . $_FILES["file"]["error"] . "<br>";
            } else {
                if (file_exists($url_dir . $_FILES["file"]["name"])) {
                    echo $_FILES["file"]["name"] . " already exists. ";
                } else {
                    move_uploaded_file($_FILES["file"]["tmp_name"], $url_dir . $_FILES["file"]["name"]);
                    // echo "Stored in: " . "upload/" . $_FILES["file"]["name"];
                    $tmp = "C:/wamp/www/WebsiteTemplate4/upload/" . $_FILES["file"]["name"];
                    $sql = "INSERT INTO $tbl_name (title, content, img_path, video_url, project_tech_Id)
                            VALUES ('$title','$desc','$tmp','$video','$tech_id')";
                    if (mysql_query($sql)) {
                        echo mysql_insert_id();
                    } else {
                        echo "Cannot Insert";
                    }
                }
            }
        } else {
            echo "Invalid file";
        }
        ?>

    another.php:

        <?php
        $temp = $_GET['id'];
        echo $temp;
        $host = "localhost";
        $username = "root";
        $password = "";
        $db_name = "geny";
        $tbl_name = "project_details";
        mysql_connect("$host", "$username", "$password") or die("cannot connect");
        mysql_select_db("$db_name") or die("cannot select DB");
        $query = mysql_query("SELECT content FROM project_details WHERE id=" . $temp);
        if (!$query) {
            echo 'Could not run query: ' . mysql_error();
            exit;
        }
        $row = mysql_fetch_row($query);
        echo "<div id='uprjct' style='background:#336699;'>
            <p>$row[0]</p>
        </div>";
        ?>

    But the ID returned by insert.php contains extra output besides the number. I don't want all of that; I only want the ID (which is a number). Please tell me what is wrong in my code.

  • Web Browser Control – Specifying the IE Version

    - by Rick Strahl
    I use the Internet Explorer Web Browser Control in a lot of my applications to display document type layout. HTML happens to be one of the most common document formats, and displaying data in this format, even in desktop applications, is often way easier than using normal desktop technologies. One issue the Web Browser Control has is that it's perpetually stuck in IE 7 rendering mode by default. Even though IE 8 and now 9 have significantly upgraded the IE rendering engine to be more CSS and HTML compliant by default, the Web Browser control will have none of it. IE 9 in particular, with its much improved CSS support and basic HTML 5 support, is a big improvement, and even though the IE control uses some of IE's internal rendering technology it's still stuck in the old IE 7 rendering by default. This applies whether you're using the Web Browser control in a WPF application, a WinForms app, or a FoxPro or VB classic application using the ActiveX control. Behind the scenes all these UI platforms use the COM interfaces, and so you're stuck by those same rules.

    Rendering Challenged

    To see what I'm talking about, here are two screen shots rendering an HTML 5 doctype page that includes some CSS 3 functionality (rounded corners and border shadows) from an earlier post. One uses IE 9 as a standalone browser, and one uses a simple WPF form that includes the Web Browser control.

    [Screenshot: IE 9 browser]

    [Screenshot: Web Browser control in a WPF form]

    The IE 9 page displays this HTML correctly: you see the rounded corners and shadow displayed. Obviously the latter rendering, using the Web Browser control in a WPF application, is a bit lacking. Not only are the new CSS features missing, but the page also renders in Internet Explorer's quirks mode, so all the margins, padding etc. behave differently by default, even though there's a CSS reset applied on this page. If you're building an application that intends to use the Web Browser control for a live preview of some HTML, this is clearly undesirable.

    Feature Delegation via Registry Hacks

    Fortunately, starting with Internet Explorer 8 and later there's a fix for this problem via a registry setting. You can specify a registry key to indicate which rendering mode and version of IE should be used by a given application. These are not global, mind you; they have to be enabled for each application individually. There are two different sets of keys for 32 bit and 64 bit applications.

    32 bit:

        HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION
        Value Key: yourapplication.exe

    64 bit:

        HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION
        Value Key: yourapplication.exe

    The values to set this key to are as follows, as decimal values (taken from MSDN here):

        9999 (0x270F)  Internet Explorer 9. Webpages are displayed in IE9 Standards mode, regardless of the !DOCTYPE directive.
        9000 (0x2328)  Internet Explorer 9. Webpages containing standards-based !DOCTYPE directives are displayed in IE9 mode.
        8888 (0x22B8)  Webpages are displayed in IE8 Standards mode, regardless of the !DOCTYPE directive.
        8000 (0x1F40)  Webpages containing standards-based !DOCTYPE directives are displayed in IE8 mode.
        7000 (0x1B58)  Webpages containing standards-based !DOCTYPE directives are displayed in IE7 Standards mode.
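
    Should you prefer to set the key from code rather than by hand, something along these lines works at application startup (a sketch; it writes to HKEY_CURRENT_USER, which this feature also honors and which avoids requiring admin rights, so verify it against your target OS):

        using Microsoft.Win32;
        using System.Diagnostics;
        using System.IO;

        public static class BrowserEmulation
        {
            // Registers the current EXE for IE9 standards-based rendering (value 9000).
            public static void EnableIe9Rendering()
            {
                string exeName = Path.GetFileName(
                    Process.GetCurrentProcess().MainModule.FileName);

                using (RegistryKey key = Registry.CurrentUser.CreateSubKey(
                    @"Software\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION"))
                {
                    key.SetValue(exeName, 9000, RegistryValueKind.DWord);
                }
            }
        }

    Call it before the first WebBrowser control is created, since the setting is picked up when the browser engine loads into the process.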
The added key looks something like this in the Registry Editor:

With this in place my Html Help Builder application, which has wwhelp.exe as its main executable, now works with HTML 5 and CSS 3 documents in the same way that Internet Explorer 9 does. Incidentally I accidentally added an ‘empty’ DWORD value of 0 to my EXE name and that worked as well, giving me IE 9 rendering. Although not documented, I suspect 0 (or an invalid value) will default to the installed browser. I don’t have a good way to test this, but if somebody could try this with IE 8 installed that would be great: What happens when setting 9000 with IE 8 installed? What happens when setting 0 with IE 8 installed?

Don’t forget to add Keys for Host Environments

If you’re developing your application in Visual Studio and you run the debugger, you may find that your application is still not rendering right, but if you run the actual generated EXE from Explorer or the OS command prompt it works. That’s because when you run the debugger in Visual Studio it wraps your application into a debugging host container. For this reason you might want to also add another registry key for yourapp.vshost.exe on your development machine. If you’re developing in Visual FoxPro, make sure you add a key for vfp9.exe to see the rendering adjustments in the Visual FoxPro development environment.

Cleaner HTML - no more HTML mangling!

There are a number of additional benefits to setting the Web Browser control's rendering to the IE 9 engine (or even the IE 8 engine) beyond the obvious rendering functionality. IE 9 actually returns your HTML in something that resembles the original HTML formatting, as opposed to the IE 7 default, which mangled the original HTML content. If you do the following in the WPF application:

private void button2_Click(object sender, RoutedEventArgs e)
{
    dynamic doc = this.webBrowser.Document;
    MessageBox.Show(doc.body.outerHtml);
}

you get different output depending on the active rendering mode. With the default IE 7 rendering you get:

<BODY><DIV>
<H1>Rounded Corners and Shadows - Creating Dialogs in CSS</H1>
<DIV class=toolbarcontainer><A class=hoverbutton href="./"><IMG src="../../css/images/home.gif"> Home</A> <A class=hoverbutton href="RoundedCornersAndShadows.htm"><IMG src="../../css/images/refresh.gif"> Refresh</A> </DIV>
<DIV class=containercontent>
<FIELDSET><LEGEND>Plain Box</LEGEND><!-- Simple Box with rounded corners and shadow -->
<DIV style="BORDER-BOTTOM: steelblue 2px solid; BORDER-LEFT: steelblue 2px solid; WIDTH: 550px; BORDER-TOP: steelblue 2px solid; BORDER-RIGHT: steelblue 2px solid" class="roundbox boxshadow">
<DIV style="BACKGROUND: khaki" class="boxcontenttext roundbox">Simple Rounded Corner Box. </DIV></DIV></FIELDSET>
<FIELDSET><LEGEND>Box with Header</LEGEND>
<DIV style="BORDER-BOTTOM: steelblue 2px solid; BORDER-LEFT: steelblue 2px solid; WIDTH: 550px; BORDER-TOP: steelblue 2px solid; BORDER-RIGHT: steelblue 2px solid" class="roundbox boxshadow">
<DIV class="gridheaderleft roundbox-top">Box with a Header</DIV>
<DIV style="BACKGROUND: khaki" class="boxcontenttext roundbox-bottom">Simple Rounded Corner Box.
</DIV></DIV></FIELDSET> <FIELDSET><LEGEND>Dialog Style Window</LEGEND> <DIV style="POSITION: relative; WIDTH: 450px" id=divDialog class="dialog boxshadow" jQuery16107208195684204002="2"> <DIV style="POSITION: relative" class=dialog-header> <DIV class=closebox></DIV>User Sign-in <DIV class=closebox jQuery16107208195684204002="3"></DIV></DIV> <DIV class=descriptionheader>This dialog is draggable and closable</DIV> <DIV class=dialog-content><LABEL>Username:</LABEL> <INPUT name=txtUsername value=" "> <LABEL>Password</LABEL> <INPUT name=txtPassword value=" "> <HR> <INPUT id=btnLogin value=Login type=button> </DIV> <DIV class=dialog-statusbar>Ready</DIV></DIV></FIELDSET> </DIV> <SCRIPT type=text/javascript>     $(document).ready(function () {         $("#divDialog")             .draggable({ handle: ".dialog-header" })             .closable({ handle: ".dialog-header",                 closeHandler: function () {                     alert("Window about to be closed.");                     return true;  // true closes - false leaves open                 }             });     }); </SCRIPT> </DIV></BODY> Now lest you think I’m out of my mind and create complete whacky HTML rooted in the last century, here’s the IE 9 rendering mode output which looks a heck of a lot cleaner and a lot closer to my original HTML of the page I’m accessing: <body> <div>         <h1>Rounded Corners and Shadows - Creating Dialogs in CSS</h1>     <div class="toolbarcontainer">         <a class="hoverbutton" href="./"> <img src="../../css/images/home.gif"> Home</a>         <a class="hoverbutton" href="RoundedCornersAndShadows.htm"> <img src="../../css/images/refresh.gif"> Refresh</a>     </div>         <div class="containercontent">     <fieldset>         <legend>Plain Box</legend>                <!-- Simple Box with rounded corners and shadow -->             <div style="border: 2px solid steelblue; width: 550px;" class="roundbox boxshadow">                              <div style="background: khaki;" class="boxcontenttext roundbox">                     Simple Rounded Corner Box.                 </div>             </div>     </fieldset>     <fieldset>         <legend>Box with Header</legend>         <div style="border: 2px solid steelblue; width: 550px;" class="roundbox boxshadow">                          <div class="gridheaderleft roundbox-top">Box with a Header</div>             <div style="background: khaki;" class="boxcontenttext roundbox-bottom">                 Simple Rounded Corner Box.             
</div>         </div>     </fieldset>       <fieldset>         <legend>Dialog Style Window</legend>         <div style="width: 450px; position: relative;" id="divDialog" class="dialog boxshadow">             <div style="position: relative;" class="dialog-header">                 <div class="closebox"></div>                 User Sign-in             <div class="closebox"></div></div>             <div class="descriptionheader">This dialog is draggable and closable</div>                    <div class="dialog-content">                             <label>Username:</label>                 <input name="txtUsername" value=" " type="text">                 <label>Password</label>                 <input name="txtPassword" value=" " type="text">                                 <hr/>                                 <input id="btnLogin" value="Login" type="button">                        </div>             <div class="dialog-statusbar">Ready</div>         </div>     </fieldset>     </div> <script type="text/javascript">     $(document).ready(function () {         $("#divDialog")             .draggable({ handle: ".dialog-header" })             .closable({ handle: ".dialog-header",                 closeHandler: function () {                     alert("Window about to be closed.");                     return true;  // true closes - false leaves open                 }             });     }); </script>        </div> </body> IOW, in IE9 rendering mode IE9 is much closer (but not identical) to the original HTML from the page on the Web that we’re reading from. As a side note: Unfortunately, the browser feature emulation can't be applied against the Html Help (CHM) Engine in Windows which uses the Web Browser control (or COM interfaces anyway) to render Html Help content. I tried setting up hh.exe which is the help viewer, to use IE 9 rendering but a help file generated with CSS3 features will simply show in IE 7 mode. Bummer - this would have been a nice quick fix to allow help content served from CHM files to look better. HTML Editing leaves HTML formatting intact In the same vane, if you do any inline HTML editing in the control by setting content to be editable, IE 9’s control does a much more reasonable job of creating usable and somewhat valid HTML. It also leaves the original content alone other than the text your are editing or adding. No longer is the HTML output stripped of excess spaces and reformatted in IEs format. So if I do: private void button3_Click(object sender, RoutedEventArgs e) { dynamic doc = this.webBrowser.Document; doc.body.contentEditable = true; } and then make some changes to the document by typing into it using IE 9 mode, the document formatting stays intact and only the affected content is modified. The created HTML is reasonably clean (although it does lack proper XHTML formatting for things like <br/> <hr/>). This is very different from IE 7 mode which mangled the HTML as soon as the page was loaded into the control. Any editing you did stripped out all white space and lost all of your existing XHTML formatting. In IE 9 mode at least *most* of your original formatting stays intact. This is huge! In Html Help Builder I have supported HTML editing for a long time but the HTML mangling by the Web Browser control made it very difficult to edit the HTML later. Previously IE would mangle the HTML by stripping out spaces, upper casing all tags and converting many XHTML safe tags to its HTML 3 tags. 
Now IE leaves most of my document alone while editing, and creates cleaner and more compliant markup (with the exception of self-closing elements like BR/HR). The end result is that I now have HTML editing in place that's much cleaner and actually capable of being manually edited.

Caveats, Caveats, Caveats

It wouldn't be Internet Explorer if there weren't some major compatibility issues involved in this mixing of browser versions. The biggest thing I ran into is that there are odd differences in some of the COM interfaces and what they return. I specifically ran into a problem with the document.selection.createRange() function, which with IE 7 compatibility returns the expected text range object. When running in IE 8 or IE 9 mode, however, I could not retrieve a valid text range with this code, where loEdit is the WebBrowser control:

loRange = loEdit.document.selection.CreateRange()

The loRange object returned (here in FoxPro) had a length property of 0, but none of the other properties of the TextRange or TextRangeCollection objects were available. I figured this was due to some changed security settings, but even after elevating the Intranet Security Zone and mucking with the other browser feature flags pertaining to security I had no luck. In the end I relented and used a JavaScript function in my editor document that returns a selection range object:

function getselectionrange() {
    var range = document.selection.createRange();
    return range;
}

and call that JavaScript function from my host application's code:

*** Use a function in the document to get around HTML Editing issues
loRange = loEdit.document.parentWindow.getselectionrange(.f.)

and that does work correctly. This wasn't a big deal, as I'm already loading a support script file into the editor page, so all I had to do was add the function to this existing script file. You can find out more about how to call script code in the Web Browser control from a host application in a previous post of mine. IE 8 and 9 also clamp down the security environment a little more than the default IE 7 control, so there may be other issues you run into. Other than the createRange() problem above I haven't seen anything else breaking in my code so far, which is encouraging, since the code uses a lot of HTML document manipulation for the custom editor I've created (and would love to replace - any PROFESSIONAL alternatives, anybody?).

Registry Key Installation for your Application

It's important to remember that this registry setting is made per application, so most likely this is something you want to set up with your installer. Also remember that 32 and 64 bit settings require separate entries in the registry, so if you're creating your installer you most likely will want to set both keys preemptively for your application. I use Tarma Installer for all of my application installs, and in Tarma I configure registry keys for both, with a flag to only install the latter key group in the 64 bit version: Because this setting is application specific, you have to do this for every application you install, unfortunately, but it also means that you can safely configure this setting in the registry because it is after all only applied to your application. Another problem with install-based installation is version detection. If IE 8 is installed I'd want 8000 for the value; if IE 9 is installed I want 9000. I can do this easily in code, but in the installer this is much more difficult.
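For the "in code" route, the following sketch shows one way to pick the emulation value at application startup based on the installed IE version. This is an illustration of the idea, not code from the article; the "Version" registry value name, the HKCU location, and the fallback behavior are assumptions you would verify for your environment:

using System;
using System.Diagnostics;
using Microsoft.Win32;

static class IeEmulation
{
    // Reads the installed IE version and writes a matching
    // FEATURE_BROWSER_EMULATION value for this executable.
    // Writing under HKCU avoids the elevation HKLM would require;
    // the same subkey path works per-user.
    public static void Configure()
    {
        // e.g. "9.0.8112.16421" on IE9; fall back to "7" if missing
        string version = Registry.GetValue(
            @"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer",
            "Version", null) as string ?? "7";

        int major = int.Parse(version.Split('.')[0]);
        int mode = major >= 9 ? 9000 : major == 8 ? 8000 : 7000;

        string exeName = Process.GetCurrentProcess().ProcessName + ".exe";
        Registry.SetValue(
            @"HKEY_CURRENT_USER\SOFTWARE\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION",
            exeName, mode, RegistryValueKind.DWord);
    }
}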
I don't have a good solution for the installer at the moment, but given that the app works with IE 7 mode now, IE 9 mode is just a bonus. If IE 9 is not installed and 9000 is used, the default rendering will remain in use. It sure would be nice if we could specify the IE rendering mode as a property, but I suspect the ActiveX container has to know what version to load before it loads, and once loaded it can only host a single version of IE. That would account for this annoying application-level configuration…

Summary

The registry feature emulation has been available for quite some time, but I just found out about it today and started experimenting with it. I'm stoked to see that this is available, as I'd pretty much given up on ever seeing any better rendering in the Web Browser control. Now at least my apps can take advantage of newer HTML features. Now if we could only get better HTML Editing support somehow <snicker>… ah, can't have everything.

© Rick Strahl, West Wind Technologies, 2005-2011
Posted in .NET  FoxPro  Windows

    Read the article

  • Automating Solaris 11 Zones Installation Using The Automated Install Server

    - by Orgad Kimchi
Introduction

This post shows how to use the Oracle Solaris 11 Automated Install server to automate the installation of Solaris 11 Zones. I will demonstrate how to set up the Automated Install server to provide a hands-off installation process for the Global Zone and two Non-Global Zones located on the same system.

Architecture layout:

Figure 1. Architecture layout

Prerequisite

Set up the Automated Install server (AI) using the instructions in "How to Set Up Automated Installation Services for Oracle Solaris 11". The first step in this setup is creating two Solaris 11 Zones configuration files.

Step 1: Create the Solaris 11 Zones configuration files

The Solaris Zones configuration files should be in the format produced by the zonecfg export command.

# zonecfg -z zone1 export > /var/tmp/zone1
# cat /var/tmp/zone1
create -b
set brand=solaris
set zonepath=/rpool/zones/zone1
set autoboot=true
set ip-type=exclusive
add anet
set linkname=net0
set lower-link=auto
set configure-allowed-address=true
set link-protection=mac-nospoof
set mac-address=random
end

Create a backup copy of this file under a different name, for example, zone2:

# cp /var/tmp/zone1 /var/tmp/zone2

Modify the second configuration file with the zone2 configuration information. You should change the zonepath, for example:

set zonepath=/rpool/zones/zone2

Step 2: Copy and share the Zones configuration files

Create the NFS directory for the Zones configuration files:

# mkdir /export/zone_config

Share the directory for the Zones configuration files:

# share -o ro /export/zone_config

Copy the Zones configuration files into the NFS shared directory:

# cp /var/tmp/zone1 /var/tmp/zone2 /export/zone_config

Verify that the NFS share has been created using the following command:

# share
export_zone_config      /export/zone_config     nfs     sec=sys,ro

Step 3: Add the Global Zone as a client to the install service

Use the installadm create-client command to associate the client (Global Zone) with the install service. To find the MAC address of a system, use the dladm command as described in the dladm(1M) man page. The following command adds the client (Global Zone) with MAC address 0:14:4f:2:a:19 to the s11x86service install service:

# installadm create-client -e "0:14:4f:2:a:19" -n s11x86service

You can verify the client creation using the following command:

# installadm list -c
Service Name  Client Address     Arch   Image Path
------------  --------------     ----   ----------
s11x86service 00:14:4F:02:0A:19  i386   /export/auto_install/s11x86service

We can see the client's install service name (s11x86service), MAC address (00:14:4F:02:0A:19) and architecture (i386).

Step 4: Global Zone manifest setup

First, get a list of the installation services and the manifests associated with them:

# installadm list -m
Service Name   Manifest        Status
------------   --------        ------
default-i386   orig_default   Default
s11x86service  orig_default   Default

Then probe the s11x86service and the default manifest associated with it. The -m switch reflects the name of the manifest associated with a service. Since we want to capture that output into a file, we redirect the output of the command as follows:

# installadm export -n s11x86service -m orig_default > /var/tmp/orig_default.xml

Create a backup copy of this file under a different name, for example, orig_default2.xml, and edit the copy.
# cp /var/tmp/orig_default.xml /var/tmp/orig_default2.xml

Use the configuration element in the AI manifest for the client system to specify non-global zones. Use the name attribute of the configuration element to specify the name of the zone. Use the source attribute to specify the location of the config file for the zone. The source location can be any http:// or file:// location that the client can access during installation. The following sample AI manifest specifies two Non-Global Zones, zone1 and zone2. You should replace server_ip with the IP address of the NFS server.

<!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1">
<auto_install>
  <ai_instance>
    <target>
      <logical>
        <zpool name="rpool" is_root="true">
          <filesystem name="export" mountpoint="/export"/>
          <filesystem name="export/home"/>
          <be name="solaris"/>
        </zpool>
      </logical>
    </target>
    <software type="IPS">
      <source>
        <publisher name="solaris">
          <origin name="http://pkg.oracle.com/solaris/release"/>
        </publisher>
      </source>
      <software_data action="install">
        <name>pkg:/entire@latest</name>
        <name>pkg:/group/system/solaris-large-server</name>
      </software_data>
    </software>
    <configuration type="zone" name="zone1" source="file:///net/server_ip/export/zone_config/zone1"/>
    <configuration type="zone" name="zone2" source="file:///net/server_ip/export/zone_config/zone2"/>
  </ai_instance>
</auto_install>

The following example adds the /var/tmp/orig_default2.xml AI manifest to the s11x86service install service:

# installadm create-manifest -n s11x86service -f /var/tmp/orig_default2.xml -m gzmanifest

You can verify the manifest creation using the following command:

# installadm list -n s11x86service -m
Service/Manifest Name  Status   Criteria
---------------------  ------   --------
s11x86service
   orig_default        Default  None
   gzmanifest          Inactive None

We can see from the command output that the new manifest named gzmanifest has been created and associated with the s11x86service install service.

Step 5: Non-Global Zone manifest setup

The AI manifest for non-global zone installation is similar to the AI manifest for installing the global zone. If you do not provide a custom AI manifest for a non-global zone, the default AI manifest for Zones is used. The default AI manifest for Zones is available at /usr/share/auto_install/manifest/zone_default.xml, and we use it in this example:

# cat /usr/share/auto_install/manifest/zone_default.xml
<?xml version="1.0" encoding="UTF-8"?>
<!--
 Copyright (c) 2011, 2012, Oracle and/or its affiliates. All rights reserved.
-->
<!DOCTYPE auto_install SYSTEM "file:///usr/share/install/ai.dtd.1">
<auto_install>
    <ai_instance name="zone_default">
        <target>
            <logical>
                <zpool name="rpool">
                    <!--
                      Subsequent <filesystem> entries instruct an installer
                      to create following ZFS datasets:
                          <root_pool>/export         (mounted on /export)
                          <root_pool>/export/home    (mounted on /export/home)
                      Those datasets are part of standard environment
                      and should be always created.
                      In rare cases, if there is a need to deploy a zone
                      without these datasets, either comment out or remove
                      <filesystem> entries. In such scenario, it has to be also
                      assured that in case of non-interactive post-install
                      configuration, creation of initial user account is
                      disabled in related system configuration profile.
                      Otherwise the installed zone would fail to boot.
                    -->
                    <filesystem name="export" mountpoint="/export"/>
                    <filesystem name="export/home"/>
                    <be name="solaris">
                        <options>
                            <option name="compression" value="on"/>
                        </options>
                    </be>
                </zpool>
            </logical>
        </target>
        <software type="IPS">
            <destination>
                <image>
                    <!-- Specify locales to install -->
                    <facet set="false">facet.locale.*</facet>
                    <facet set="true">facet.locale.de</facet>
                    <facet set="true">facet.locale.de_DE</facet>
                    <facet set="true">facet.locale.en</facet>
                    <facet set="true">facet.locale.en_US</facet>
                    <facet set="true">facet.locale.es</facet>
                    <facet set="true">facet.locale.es_ES</facet>
                    <facet set="true">facet.locale.fr</facet>
                    <facet set="true">facet.locale.fr_FR</facet>
                    <facet set="true">facet.locale.it</facet>
                    <facet set="true">facet.locale.it_IT</facet>
                    <facet set="true">facet.locale.ja</facet>
                    <facet set="true">facet.locale.ja_*</facet>
                    <facet set="true">facet.locale.ko</facet>
                    <facet set="true">facet.locale.ko_*</facet>
                    <facet set="true">facet.locale.pt</facet>
                    <facet set="true">facet.locale.pt_BR</facet>
                    <facet set="true">facet.locale.zh</facet>
                    <facet set="true">facet.locale.zh_CN</facet>
                    <facet set="true">facet.locale.zh_TW</facet>
                </image>
            </destination>
            <software_data action="install">
                <name>pkg:/group/system/solaris-small-server</name>
            </software_data>
        </software>
    </ai_instance>
</auto_install>

(Optional) We can customize the default AI manifest for Zones. Create a backup copy of this file under a different name, for example, zone_default2.xml, and edit the copy:

# cp /usr/share/auto_install/manifest/zone_default.xml /var/tmp/zone_default2.xml

Edit the copy (/var/tmp/zone_default2.xml). The following example adds the /var/tmp/zone_default2.xml AI manifest to the s11x86service install service and specifies that zone1 and zone2 should use this manifest.
# installadm create-manifest -n s11x86service -f /var/tmp/zone_default2.xml -m zones_manifest -c zonename="zone1 zone2"

Note: Do not use the following elements or attributes in a non-global zone AI manifest:

    The auto_reboot attribute of the ai_instance element
    The http_proxy attribute of the ai_instance element
    The disk child element of the target element
    The noswap attribute of the logical element
    The nodump attribute of the logical element
    The configuration element

Step 6: Global Zone profile setup

We are going to create a Global Zone configuration profile which includes the host information, for example: host name, IP address, name services, etc.

# sysconfig create-profile -o /var/tmp/gz_profile.xml

You need to provide the host information, for example:

    Default router
    Root password
    DNS information

The output should eventually disappear and be replaced by the initial screen of the System Configuration Tool (see Figure 2), where you can do the final configuration.

Figure 2. Profile creation menu

You can validate the profile using the following command:

# installadm validate -n s11x86service -P /var/tmp/gz_profile.xml
Validating static profile gz_profile.xml...  Passed

Next, associate the profile with the install service. In our case, use the following syntax:

# installadm create-profile -n s11x86service -f /var/tmp/gz_profile.xml -p gz_profile

You can verify the profile creation using the following command:

# installadm list -n s11x86service -p
Service/Profile Name  Criteria
--------------------  --------
s11x86service
   gz_profile         None

We can see that gz_profile has been created and associated with the s11x86service install service.

Step 7: Set up the Solaris Zones configuration profiles

This step is similar to the Global Zone profile creation in Step 6.

# sysconfig create-profile -o /var/tmp/zone1_profile.xml
# sysconfig create-profile -o /var/tmp/zone2_profile.xml

You can validate the profiles using the following commands:

# installadm validate -n s11x86service -P /var/tmp/zone1_profile.xml
Validating static profile zone1_profile.xml...  Passed
# installadm validate -n s11x86service -P /var/tmp/zone2_profile.xml
Validating static profile zone2_profile.xml...  Passed

Next, associate the profiles with the install service. The following example adds the zone1_profile.xml configuration profile to the s11x86service install service and specifies that zone1 should use this profile:

# installadm create-profile -n s11x86service -f /var/tmp/zone1_profile.xml -p zone1_profile -c zonename=zone1

The following example adds the zone2_profile.xml configuration profile to the s11x86service install service and specifies that zone2 should use this profile:

# installadm create-profile -n s11x86service -f /var/tmp/zone2_profile.xml -p zone2_profile -c zonename=zone2

You can verify the profile creation using the following command:

# installadm list -n s11x86service -p
Service/Profile Name  Criteria
--------------------  --------
s11x86service
   zone1_profile      zonename = zone1
   zone2_profile      zonename = zone2
   gz_profile         None

We can see that we have three profiles in the s11x86service install service:

    Global Zone    gz_profile
    zone1          zone1_profile
    zone2          zone2_profile
Step 8: Global Zone setup

Associate the Global Zone client with the manifest and the profile that we created in the previous steps. The following example adds the manifest and profile to the client (Global Zone), where:

    gzmanifest is the name of the manifest.
    gz_profile is the name of the configuration profile.
    mac="0:14:4f:2:a:19" is the client (Global Zone) MAC address.
    s11x86service is the install service name.

# installadm set-criteria -m gzmanifest -p gz_profile -c mac="0:14:4f:2:a:19" -n s11x86service

You can verify the manifest and profile association using the following command:

# installadm list -n s11x86service -p -m
Service/Manifest Name  Status   Criteria
---------------------  ------   --------
s11x86service
   gzmanifest                   mac  = 00:14:4F:02:0A:19
   orig_default        Default  None

Service/Profile Name  Criteria
--------------------  --------
s11x86service
   gz_profile         mac      = 00:14:4F:02:0A:19
   zone2_profile      zonename = zone2
   zone1_profile      zonename = zone1

Step 9: Provision the host with the Non-Global Zones

The next step is to boot the client system off the network and provision it using the Automated Install service that we just set up. First, boot the client system. Figure 3 shows the network boot attempt (when done on an x86 system):

Figure 3. Network Boot

Then you will be prompted by a GRUB menu, with a timer, as shown in Figure 4. The default selection (the "Text Installer and command line" option) is highlighted. Press the down arrow to highlight the second option, labeled Automated Install, and then press Enter. We need to do this to prevent a system from being automatically reinstalled if it were to be booted from the network accidentally.

Figure 4. GRUB Menu

What follows is the continuation of a networked boot from the Automated Install server. The client downloads a mini-root (a small set of files in which to successfully run the installer), identifies the location of the Automated Install manifest on the network, retrieves that manifest, and then processes it to identify the address of the IPS repository from which to obtain the desired software payload. Non-Global Zones are installed and configured on the first reboot after the Global Zone is installed. You can list the status of all the Solaris Zones using the following command:

# zoneadm list -civ

Once the Zones are in the running state you can log in to a Zone using the following command:

# zlogin zone1

Troubleshooting Automated Installations

If an installation to a client system fails, you can find the client log at /system/volatile/install_log.

NOTE: Zones are not installed if any of the following errors occurs:

    A zone config file is not syntactically correct.
    A collision exists among zone names, zone paths, or delegated ZFS datasets in the set of zones to be installed.
    Required datasets are not configured in the global zone.

For more troubleshooting information see "Installing Oracle Solaris 11 Systems".

Conclusion

This paper demonstrated the benefits of using the Automated Install server to simplify Non-Global Zones setup, including the creation and configuration of the Global Zone manifest and the Solaris Zones profiles.

    Read the article

  • Node.js Adventure - Storage Services and Service Runtime

    - by Shaun
When I described how to host a Node.js application on Windows Azure, one question that might be raised is how to consume the various Windows Azure services, such as storage, service bus, access control, etc. Interacting with Windows Azure services from Node.js is available through the Windows Azure Node.js SDK, a module available in NPM. In this post I would like to describe how to use Windows Azure Storage (a.k.a. WAS) as well as the service runtime.

Consume Windows Azure Storage

Let's first have a look at how to consume WAS through Node.js. As we know from the previous post, we can host a Node.js application on Windows Azure Web Site (a.k.a. WAWS) as well as Windows Azure Cloud Service (a.k.a. WACS). In theory, WAWS is also built on top of WACS worker roles, with some more features. Hence in this post I will only demonstrate hosting in a WACS worker role. The Node.js code for consuming WAS can also be used when hosting on WAWS. But since there are no roles in WAWS, the code for consuming service runtime information mentioned in the next section cannot be used for a WAWS node application.

We can use the solution that I created in my last post. Alternatively we can create a new windows azure project in Visual Studio with a worker role, add the "node.exe" and "index.js" files, install the "express" and "node-sqlserver" modules, and make all files "Copy always". In order to use windows azure services we need the Windows Azure Node.js SDK, also known as the module named "azure", which can be installed through NPM. Once downloaded and installed, we need to include it in our worker role project and make it "Copy always". You can use my "Copy all always" tool mentioned in my last post to update the current worker role project file. You can also find the source code of this tool here.

The source code of the Windows Azure SDK for Node.js can be found on its GitHub page. It contains two parts. One is a CLI tool which provides a cross-platform command line package for Mac and Linux to manage WAWS and Windows Azure Virtual Machines (a.k.a. WAVM). The other is a library for managing and consuming various windows azure services, including tables, blobs, queues, service bus and the service runtime. I will not cover all of them, but will only demonstrate how to use tables and service runtime information in this post. You can find the full documentation of this SDK here.

Back in Visual Studio, open "index.js" and let's continue our application from the last post, which was working against Windows Azure SQL Database (a.k.a. WASD). The code should look like this.
var express = require("express");
var sql = require("node-sqlserver");

var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;";
var port = 80;

var app = express();

app.configure(function () {
    app.use(express.bodyParser());
});

app.get("/", function (req, res) {
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    res.json(results);
                }
            });
        }
    });
});

app.get("/text/:key/:culture", function (req, res) {
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            var key = req.params.key;
            var culture = req.params.culture;
            var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'";
            conn.queryRaw(command, function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    res.json(results);
                }
            });
        }
    });
});

app.get("/sproc/:key/:culture", function (req, res) {
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            var key = req.params.key;
            var culture = req.params.culture;
            var command = "EXEC GetItem '" + key + "', '" + culture + "'";
            conn.queryRaw(command, function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    res.json(results);
                }
            });
        }
    });
});

app.post("/new", function (req, res) {
    var key = req.body.key;
    var culture = req.body.culture;
    var val = req.body.val;

    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')";
            conn.queryRaw(command, function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    res.send(200, "Inserted Successful");
                }
            });
        }
    });
});

app.listen(port);

Now let's create a new function that copies the records from WASD to the table service:

1. Delete the table named "resource".
2. Create a new table named "resource". These two steps ensure that we have an empty table.
3. Load all records from the "resource" table in WASD.
4. For each record loaded from WASD, insert it into the table one by one.
5. Prompt the user when finished.

In order to use the table service we need the storage account and key, which can be found in the developer portal. Just select the storage account and click the Manage Keys button. Then create two local variables in our Node.js application for the storage account name and key. Since we need to use WAS we need to import the azure module. I also created another variable to store the table name. In order to work with the table service I need to create the storage client for the table service.
This is very similar to the Windows Azure SDK for .NET. In the code below I created a new variable named "client" and used "createTableService" with my storage account name and key.

var azure = require("azure");
var storageAccountName = "synctile";
var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw==";
var tableName = "resource";
var client = azure.createTableService(storageAccountName, storageAccountKey);

Now create a new function for the URL "/was/init" so that we can trigger it through the browser. In this function we will first load all records from WASD.

app.get("/was/init", function (req, res) {
    // load all records from windows azure sql database
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    if (results.rows.length > 0) {
                        // begin to transform the records into table service
                    }
                }
            });
        }
    });
});

When we have successfully loaded all records we can start to transform them into the table service. First I need to recreate the table in the table service. This can be done by deleting and creating the table through the table client I created previously.

app.get("/was/init", function (req, res) {
    // load all records from windows azure sql database
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    if (results.rows.length > 0) {
                        // begin to transform the records into table service
                        // recreate the table named 'resource'
                        client.deleteTable(tableName, function (error) {
                            client.createTableIfNotExists(tableName, function (error) {
                                if (error) {
                                    error["target"] = "createTableIfNotExists";
                                    res.send(500, error);
                                }
                                else {
                                    // transform the records
                                }
                            });
                        });
                    }
                }
            });
        }
    });
});

As you can see, the azure SDK provides its methods in the callback pattern. In fact, almost all modules in Node.js use the callback pattern. For example, when I deleted a table I invoked the "deleteTable" method, providing the name of the table and a callback function which is performed when the table has been deleted or the operation failed. Under the hood, the azure module performs the table deletion asynchronously in the POSIX async thread pool, and once it's done the callback function is invoked. This is the reason we need to nest the table creation code inside the deletion callback. If we placed the table creation code after the deletion code, the two would be invoked in parallel. Next, for each record in WASD I created an entity and inserted it into the table service. Finally I send the response to the browser. Can you find a bug in the code below? I will describe it later in this post.
1: app.get("/was/init", function (req, res) { 2: // load all records from windows azure sql database 3: sql.open(connectionString, function (err, conn) { 4: if (err) { 5: console.log(err); 6: res.send(500, "Cannot open connection."); 7: } 8: else { 9: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 10: if (err) { 11: console.log(err); 12: res.send(500, "Cannot retrieve records."); 13: } 14: else { 15: if (results.rows.length > 0) { 16: // begin to transform the records into table service 17: // recreate the table named 'resource' 18: client.deleteTable(tableName, function (error) { 19: client.createTableIfNotExists(tableName, function (error) { 20: if (error) { 21: error["target"] = "createTableIfNotExists"; 22: res.send(500, error); 23: } 24: else { 25: // transform the records 26: for (var i = 0; i < results.rows.length; i++) { 27: var entity = { 28: "PartitionKey": results.rows[i][1], 29: "RowKey": results.rows[i][0], 30: "Value": results.rows[i][2] 31: }; 32: client.insertEntity(tableName, entity, function (error) { 33: if (error) { 34: error["target"] = "insertEntity"; 35: res.send(500, error); 36: } 37: else { 38: console.log("entity inserted"); 39: } 40: }); 41: } 42: // send the 43: console.log("all done"); 44: res.send(200, "All done!"); 45: } 46: }); 47: }); 48: } 49: } 50: }); 51: } 52: }); 53: }); Now we can publish it to the cloud and have a try. But normally we’d better test it at the local emulator first. In Node.js SDK there are three build-in properties which provides the account name, key and host address for local storage emulator. We can use them to initialize our table service client. We also need to change the SQL connection string to let it use my local database. The code will be changed as below. 1: // windows azure sql database 2: //var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:ac6271ya9e.database.windows.net,1433;Database=synctile;Uid=shaunxu@ac6271ya9e;Pwd=eszqu94XZY;Encrypt=yes;Connection Timeout=30;"; 3: // sql server 4: var connectionString = "Driver={SQL Server Native Client 11.0};Server={.};Database={Caspar};Trusted_Connection={Yes};"; 5:  6: var azure = require("azure"); 7: var storageAccountName = "synctile"; 8: var storageAccountKey = "/cOy9L7xysXOgPYU9FjDvjrRAhaMX/5tnOpcjqloPNDJYucbgTy7MOrAW7CbUg6PjaDdmyl+6pkwUnKETsPVNw=="; 9: var tableName = "resource"; 10: // windows azure storage 11: //var client = azure.createTableService(storageAccountName, storageAccountKey); 12: // local storage emulator 13: var client = azure.createTableService(azure.ServiceClient.DEVSTORE_STORAGE_ACCOUNT, azure.ServiceClient.DEVSTORE_STORAGE_ACCESS_KEY, azure.ServiceClient.DEVSTORE_TABLE_HOST); Now let’s run the application and navigate to “localhost:12345/was/init” as I hosted it on port 12345. We can find it transformed the data from my local database to local table service. Everything looks fine. But there is a bug in my code. If we have a look on the Node.js command window we will find that it sent response before all records had been inserted, which is not what I expected. The reason is that, as I mentioned before, Node.js perform all IO operations in non-blocking model. When we inserted the records we executed the table service insert method in parallel, and the operation of sending response was also executed in parallel, even though I wrote it at the end of my logic. 
The correct logic should be: when all entities have been copied to the table service with no errors, send the response to the browser; otherwise send an error message. To do so I need to import another module named "async", which helps us coordinate our asynchronous code. Install the module and import it at the beginning of the code. Then we can use its "forEach" method for the asynchronous code that inserts the table entities. The first argument of "forEach" is the array to iterate over. The second argument is the operation to perform for each item in the array. And the third argument is invoked when all items have been processed or any error occurred. Here we can send our response to the browser.

app.get("/was/init", function (req, res) {
    // load all records from windows azure sql database
    sql.open(connectionString, function (err, conn) {
        if (err) {
            console.log(err);
            res.send(500, "Cannot open connection.");
        }
        else {
            conn.queryRaw("SELECT * FROM [Resource]", function (err, results) {
                if (err) {
                    console.log(err);
                    res.send(500, "Cannot retrieve records.");
                }
                else {
                    if (results.rows.length > 0) {
                        // begin to transform the records into table service
                        // recreate the table named 'resource'
                        client.deleteTable(tableName, function (error) {
                            client.createTableIfNotExists(tableName, function (error) {
                                if (error) {
                                    error["target"] = "createTableIfNotExists";
                                    res.send(500, error);
                                }
                                else {
                                    async.forEach(results.rows,
                                        // transform the records
                                        function (row, callback) {
                                            var entity = {
                                                "PartitionKey": row[1],
                                                "RowKey": row[0],
                                                "Value": row[2]
                                            };
                                            client.insertEntity(tableName, entity, function (error) {
                                                if (error) {
                                                    callback(error);
                                                }
                                                else {
                                                    console.log("entity inserted.");
                                                    callback(null);
                                                }
                                            });
                                        },
                                        // send response
                                        function (error) {
                                            if (error) {
                                                error["target"] = "insertEntity";
                                                res.send(500, error);
                                            }
                                            else {
                                                console.log("all done");
                                                res.send(200, "All done!");
                                            }
                                        }
                                    );
                                }
                            });
                        });
                    }
                }
            });
        }
    });
});

Run it locally and now we can see the response is sent after all entities have been inserted.

Querying entities against the table service is simple as well. Just use the "queryEntity" method of the table service client and provide the partition key and row key. We can also provide more complex query criteria; for example, see the code here. In the code below I queried an entity by partition key and row key, and returned the proper localization value in the response.

app.get("/was/:key/:culture", function (req, res) {
    var key = req.params.key;
    var culture = req.params.culture;
    client.queryEntity(tableName, culture, key, function (error, entity) {
        if (error) {
            res.send(500, error);
        }
        else {
            res.json(entity);
        }
    });
});

Then I tested it in the local emulator. Finally, if we want to publish this application to the cloud we should change the database connection string and storage account. For more information about how to consume the blob and queue services, as well as the service bus, please refer to the MSDN page.

Consume Service Runtime

As I mentioned above, before we publish our application to the cloud we need to change the connection string and account information in our code.
But if you have played with WACS you know that the service runtime provides the ability to retrieve configuration settings, endpoints and local resource information at runtime. This means we can have these values defined in the CSCFG and CSDEF files and let the runtime retrieve the proper values. For example, we can add some role settings through the property window of the role, specifying the connection string and storage account for cloud and local. And we can also use the endpoint defined in the role environment in our Node.js application.

In the Node.js SDK we can get an object from "azure.RoleEnvironment", which provides the functionality to retrieve the configuration settings, endpoints, etc. In the code below I defined the connection string variables and then used the SDK to retrieve them and initialize the table client.

var connectionString = "";
var storageAccountName = "";
var storageAccountKey = "";
var tableName = "";
var client;

azure.RoleEnvironment.getConfigurationSettings(function (error, settings) {
    if (error) {
        console.log("ERROR: getConfigurationSettings");
        console.log(JSON.stringify(error));
    }
    else {
        console.log(JSON.stringify(settings));
        connectionString = settings["SqlConnectionString"];
        storageAccountName = settings["StorageAccountName"];
        storageAccountKey = settings["StorageAccountKey"];
        tableName = settings["TableName"];

        console.log("connectionString = %s", connectionString);
        console.log("storageAccountName = %s", storageAccountName);
        console.log("storageAccountKey = %s", storageAccountKey);
        console.log("tableName = %s", tableName);

        client = azure.createTableService(storageAccountName, storageAccountKey);
    }
});

In this way we don't need to amend the code for the configurations between the local and cloud environments, since the service runtime takes care of it. At the end of the code we also listen on the port retrieved from the SDK.

azure.RoleEnvironment.getCurrentRoleInstance(function (error, instance) {
    if (error) {
        console.log("ERROR: getCurrentRoleInstance");
        console.log(JSON.stringify(error));
    }
    else {
        console.log(JSON.stringify(instance));
        if (instance["endpoints"] && instance["endpoints"]["nodejs"]) {
            var endpoint = instance["endpoints"]["nodejs"];
            app.listen(endpoint["port"]);
        }
        else {
            app.listen(8080);
        }
    }
});

But if we test the application right now we will find that it cannot retrieve any values from the service runtime. This is because, by default, the entry point of this role is the worker role class. In the windows azure environment the service runtime opens a named pipe to the entry point instance so that it can connect to the runtime and retrieve values. But in this case, since the entry point was the worker role class and Node.js was started inside the role, the named pipe was established between our worker role class and the service runtime, so our Node.js application cannot use it. To fix this problem we need to open the CSDEF file in the azure project and add a new element named Runtime, then add an element named EntryPoint which specifies the Node.js command line. This gives the Node.js application the connection to the service runtime, so it's able to read the configuration. Starting Node.js in the local emulator, we can see it retrieved the connection string and storage account settings for local.
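For reference, the Runtime element mentioned above might look roughly like this inside the WorkerRole element of the CSDEF file. This is a sketch based on the Azure service definition schema rather than the exact file from this project; the role name and the command line are assumptions, based on the node.exe and index.js files added to the role earlier:

<WorkerRole name="WorkerRole1" vmsize="Small">
  <Runtime>
    <EntryPoint>
      <!-- Launch node.exe as the role's entry point so the named pipe
           to the service runtime is connected to the Node.js process -->
      <ProgramEntryPoint commandLine="node.exe index.js" setReadyOnProcessStart="true" />
    </EntryPoint>
  </Runtime>
</WorkerRole>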
And if we publish our application to azure, it works with WASD and the storage service through the configurations for the cloud.

Summary

In this post I demonstrated how to use the Windows Azure SDK for Node.js to interact with the storage service, especially the table service. I also demonstrated how to use the WACS service runtime: how to retrieve the configuration settings and the endpoint information. In order to make the service runtime available to my Node.js application I needed to create an entry point element in the CSDEF file and set "node.exe" as the entry point.

I used five posts to introduce and demonstrate how to run a Node.js application on the Windows platform, and how to use Windows Azure Web Site and a Windows Azure Cloud Service worker role to host our Node.js application. I also described how to work with other services provided by the Windows Azure platform through the Windows Azure SDK for Node.js. Node.js is a very new and young network application platform. But since it's simple and easy to learn and deploy, and it utilizes a single-threaded, non-blocking IO model, Node.js has become more and more popular for web application and web service development, especially for IO-intensive projects. And as Node.js is very good at scaling out, it's even more useful on a cloud computing platform. Using Node.js on the Windows platform is new, too. The modules for SQL database and the Windows Azure SDK are still under development and enhancement: "node-sqlserver" doesn't support SQL parameters yet, and "azure" doesn't support using a storage connection string to create the storage client. But Microsoft is working on making them easier to use and on adding more features and functionality.

PS, you can download the source code here. You can download the source code of my "Copy all always" tool here.

Hope this helps,
Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • debian lenny xen bridge networking problem

    - by Sasha
DomU isn't talking to the world, but it talks to Dom0. Here are the tests that I made:

Dom0 (external networking is working):

ping 188.40.96.238   # Which is DomU's ip
PING 188.40.96.238 (188.40.96.238) 56(84) bytes of data.
64 bytes from 188.40.96.238: icmp_seq=1 ttl=64 time=0.092 ms

DomU:

ping 188.40.96.215   # Which is Dom0's ip
PING 188.40.96.215 (188.40.96.215) 56(84) bytes of data.
64 bytes from 188.40.96.215: icmp_seq=1 ttl=64 time=0.045 ms

ping 188.40.96.193   # Which is the gateway - fail
PING 188.40.96.193 (188.40.96.193) 56(84) bytes of data.
^C
--- 188.40.96.193 ping statistics ---
2 packets transmitted, 0 received, 100% packet loss, time 1013ms

The system is Debian Lenny with a normal setup. Here are my configs:

uname -a
Linux green0 2.6.26-2-xen-686 #1 SMP Wed Aug 19 08:47:57 UTC 2009 i686 GNU/Linux

cat /etc/xen/green1.cfg | grep -v '#'
kernel = '/boot/vmlinuz-2.6.26-2-xen-686'
ramdisk = '/boot/initrd.img-2.6.26-2-xen-686'
memory = '2000'
root = '/dev/xvda2 ro'
disk = [ 'file:/home/xen/domains/green1/swap.img,xvda1,w', 'file:/home/xen/domains/green1/disk.img,xvda2,w', ]
name = 'green1'
vif = [ 'ip=188.40.96.238,mac=00:16:3E:1F:C4:CC' ]
on_poweroff = 'destroy'
on_reboot = 'restart'
on_crash = 'restart'

ifconfig
eth0      Link encap:Ethernet  HWaddr 00:24:21:ef:2f:86
          inet addr:188.40.96.215  Bcast:188.40.96.255  Mask:255.255.255.192
          inet6 addr: fe80::224:21ff:feef:2f86/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3296 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2204 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:262717 (256.5 KiB)  TX bytes:330465 (322.7 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

peth0     Link encap:Ethernet  HWaddr 00:24:21:ef:2f:86
          inet6 addr: fe80::224:21ff:feef:2f86/64 Scope:Link
          UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:3407 errors:0 dropped:657431448 overruns:0 frame:0
          TX packets:2291 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:319941 (312.4 KiB)  TX bytes:338423 (330.4 KiB)
          Interrupt:16 Base address:0x8000

vif2.0    Link encap:Ethernet  HWaddr fe:ff:ff:ff:ff:ff
          inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
          UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1500  Metric:1
          RX packets:27 errors:0 dropped:0 overruns:0 frame:0
          TX packets:151 errors:0 dropped:33 overruns:0 carrier:0
          collisions:0 txqueuelen:32
          RX bytes:1164 (1.1 KiB)  TX bytes:20974 (20.4 KiB)

ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: peth0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:24:21:ef:2f:86 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::224:21ff:feef:2f86/64 scope link
       valid_lft forever preferred_lft forever
4: vif0.0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
5: veth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
6: vif0.1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
7: veth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
8: vif0.2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
9: veth2: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
10: vif0.3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
11: veth3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
    link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
12: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 00:24:21:ef:2f:86 brd ff:ff:ff:ff:ff:ff
    inet 188.40.96.215/26 brd 188.40.96.255 scope global eth0
    inet6 fe80::224:21ff:feef:2f86/64 scope link
       valid_lft forever preferred_lft forever
14: vif2.0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 32
    link/ether fe:ff:ff:ff:ff:ff brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcff:ffff:feff:ffff/64 scope link
       valid_lft forever preferred_lft forever

brctl show
bridge name     bridge id               STP enabled     interfaces
eth0            8000.002421ef2f86       no              peth0
                                                        vif2.0

ip r l (Dom0):
188.40.96.192/26 dev eth0  proto kernel  scope link  src 188.40.96.215
default via 188.40.96.193 dev eth0

ip r l (DomU):
188.40.96.192/26 dev eth0  proto kernel  scope link  src 188.40.96.238
default via 188.40.96.193 dev eth0
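One way to narrow this down further (a suggested test, not one from the report above) is to watch whether the DomU's ARP requests for the gateway actually leave the host on the physical interface while DomU pings 188.40.96.193:

# tcpdump -ni peth0 arp or icmp

If the requests appear on peth0 but never get a reply, the traffic is being dropped upstream, for example by a provider that only accepts the host's registered MAC address; if they never appear at all, the bridge or vif is at fault.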

    Read the article

  • LDAP query on linux against AD returns groups with no members

    - by SethG
    I am using LDAP+Kerberos to authenticate against Active Directory on Windows 2003 R2. My krb5.conf and ldap.conf appear to be correct (according to pretty much every sample I found on the 'net). I can log in to the host with both password and ssh keys. When I run getent passwd, all my LDAP user accounts are listed with all the important attributes. When I run getent group, all the LDAP groups and their GIDs are listed, but no group members. If I run ldapsearch and filter on any group, the members are all listed with the "member" attribute. So the data is there for the taking; it's just not being parsed properly. It would appear that I am simply using an incorrect mapping in ldap.conf, but I can't see it. I've tried several variations and all give the same result. Here is my current ldap.conf:

    host <ad-host1-ip> <ad-host2-ip>
    base dc=my,dc=full,dc=dn
    uri ldap://<ad-host1> ldap://<ad-host2>
    ldap_version 3
    binddn <mybinddn>
    bindpw <mybindpw>
    scope sub
    bind_policy hard
    nss_reconnect_tries 3
    nss_reconnect_sleeptime 1
    nss_reconnect_maxsleeptime 8
    nss_reconnect_maxconntries 3
    nss_map_objectclass posixAccount User
    nss_map_objectclass posixGroup Group
    nss_map_attribute uid sAMAccountName
    nss_map_attribute gidNumber msSFU30GidNumber
    nss_map_attribute uidNumber msSFU30UidNumber
    nss_map_attribute cn cn
    nss_map_attribute gecos displayName
    nss_map_attribute homeDirectory msSFU30HomeDirectory
    nss_map_attribute loginShell msSFU30LoginShell
    nss_map_attribute uniqueMember member
    pam_filter objectcategory=User
    pam_login_attribute sAMAccountName
    pam_member_attribute member
    pam_password ad

    Here's the kicker: this config works 100% fine on a different Linux box with a different distro. It does not work on the distro I am planning to switch to. I have installed from source the versions of pam_ldap and nss_ldap on the new box to match the old box, which fixed another problem I was having with this setup. Other relevant info: the original AD box was Windows 2003. Its mirror died a horrible hardware death, so I'm trying to add two more 2003-R2 servers to the mirror tree and ultimately drop the old 2003 box. The new R2 boxes appear to have joined the DC forest properly. What do I need to do to get groups working? I've exhausted all the resources I could find and need a different angle. Any input is appreciated.

    Status update, 7/31/09: I have managed to tweak my config file to get full info from the AD, and performance is nice and snappy. I replaced the back-rev'd copies of pam_ldap and nss_ldap with the current ones for the distro I'm using, so it's back to a standard out-of-the-box install. Here's my current config:

    host <ad-host1-ip> <ad-host2-ip>
    base dc=my,dc=full,dc=dn
    uri ldap://<ad-host1> ldap://<ad-host2>
    ldap_version 3
    binddn <mybinddn>
    bindpw <mybindpw>
    scope sub
    bind_policy soft
    nss_reconnect_tries 3
    nss_reconnect_sleeptime 1
    nss_reconnect_maxsleeptime 8
    nss_reconnect_maxconntries 3
    nss_connect_policy oneshot
    referrals no
    nss_map_objectclass posixAccount User
    nss_map_objectclass posixGroup Group
    nss_map_attribute uid sAMAccountName
    nss_map_attribute gidNumber msSFU30GidNumber
    nss_map_attribute uidNumber msSFU30UidNumber
    nss_map_attribute cn cn
    nss_map_attribute gecos displayName
    nss_map_attribute homeDirectory msSFU30HomeDirectory
    nss_map_attribute loginShell msSFU30LoginShell
    nss_map_attribute uniqueMember member
    pam_filter objectcategory=CN=Person,CN=Schema,CN=Configuration,DC=w2k,DC=cis,DC=ksu,DC=edu
    pam_login_attribute sAMAccountName
    pam_member_attribute member
    pam_password ad
    ssl off
    tls_checkpeer no
    sasl_secprops maxssf=0

    The remaining problem now is that when you run the groups command, not all subscribed groups are listed. Some are (one or two), but not all. Group memberships are still honored, such as file and printer access; getent group foo still shows that the user is a member of group foo. So it appears to be a presentation bug and does not interfere with normal operation. It also appears that some (I have not determined exactly how many) group searches do not resolve correctly, even though the group is listed. E.g., when you run "getent group bar", nothing is returned, but if you run "getent group | grep bar" or "getent group | grep <bar_gid>" you can see that it is indeed listed and your group name and GID are correct. This still seems like an LDAP search or mapping error, but I can't figure out what it is. I'm a heckuva lot closer than earlier in the week, but I'd really like to get this last detail ironed out.
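    One way to see exactly what nss_ldap has to work with is to repeat its query by hand and compare the returned attributes against the nss_map_* lines above. A sketch only - the host, bind DN, password, base, and the group name "bar" are placeholders to replace with real values:

    ldapsearch -x -H ldap://<ad-host1> \
        -D "<mybinddn>" -w "<mybindpw>" \
        -b "dc=my,dc=full,dc=dn" \
        "(&(objectClass=Group)(sAMAccountName=bar))" \
        cn msSFU30GidNumber member

    # Then compare with what the NSS layer resolves:
    getent group bar
    getent group | grep bar

    If ldapsearch returns member values but getent group stays empty for that name, the problem is squarely in the uniqueMember/member mapping or in how nss_ldap handles AD's ranged member results.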

    Read the article

  • Can't Remove Logical Drive/Array from HP P400

    - by Myles
    This is my first post here. Thank you in advance for any assistance with this matter. I'm trying to remove a logical drive (logical drive 2) and an array (array "B") from my Smart Array P400. The host is a DL580 G5 running 64-bit Red Hat Enterprise Linux Server release 5.7 (Tikanga). I am unable to remove the array using either hpacucli or cpqacuxe. I believe it is because of "OS Status: LOCKED". The file system that lives on this array has been unmounted. I do not want to reboot the host. Is there some way to "release" this logical drive so I can remove the array? Note that I do not need to preserve the data on logical drive 2. I intend to physically remove the drives from the machine and replace them with larger drives. I'm using the cciss kernel module that ships with Red Hat 5.7. Here is some information pertaining to the host and the P400 configuration:

    [root@gort ~]# cat /etc/redhat-release
    Red Hat Enterprise Linux Server release 5.7 (Tikanga)

    [root@gort ~]# uname -a
    Linux gort 2.6.18-274.el5 #1 SMP Fri Jul 8 17:36:59 EDT 2011 x86_64 x86_64 x86_64 GNU/Linux

    [root@gort ~]# rpm -qa | egrep '^(hp|cpq)'
    cpqacuxe-9.30-15.0
    hp-health-9.25-1551.7.rhel5
    hpsmh-7.1.2-3
    hpdiags-9.3.0-466
    hponcfg-3.1.0-0
    hp-snmp-agents-9.25-2384.8.rhel5
    hpacucli-9.30-15.0

    [root@gort ~]# hpacucli
    HP Array Configuration Utility CLI 9.30.15.0
    Detecting Controllers...Done.
    Type "help" for a list of supported commands.
    Type "exit" to close the console.

    => ctrl all show config detail

    Smart Array P400 in Slot 0 (Embedded)
       Bus Interface: PCI
       Slot: 0
       Cache Serial Number: PA82C0J9SVW34U
       RAID 6 (ADG) Status: Enabled
       Controller Status: OK
       Hardware Revision: D
       Firmware Version: 7.22
       Rebuild Priority: Medium
       Expand Priority: Medium
       Surface Scan Delay: 15 secs
       Surface Scan Mode: Idle
       Wait for Cache Room: Disabled
       Surface Analysis Inconsistency Notification: Disabled
       Post Prompt Timeout: 0 secs
       Cache Board Present: True
       Cache Status: OK
       Cache Ratio: 25% Read / 75% Write
       Drive Write Cache: Disabled
       Total Cache Size: 256 MB
       Total Cache Memory Available: 208 MB
       No-Battery Write Cache: Disabled
       Cache Backup Power Source: Batteries
       Battery/Capacitor Count: 1
       Battery/Capacitor Status: OK
       SATA NCQ Supported: True

       Logical Drive: 1
          Size: 136.7 GB
          Fault Tolerance: RAID 1
          Heads: 255
          Sectors Per Track: 32
          Cylinders: 35132
          Strip Size: 128 KB
          Full Stripe Size: 128 KB
          Status: OK
          Caching: Enabled
          Unique Identifier: 600508B100184A395356573334550002
          Disk Name: /dev/cciss/c0d0
          Mount Points: /boot 101 MB, /tmp 7.8 GB, /usr 3.9 GB, /usr/local 2.0 GB, /var 3.9 GB, / 2.0 GB, /local 113.2 GB
          OS Status: LOCKED
          Logical Drive Label: A0027AA78DEE
          Mirror Group 0:
             physicaldrive 1I:1:2 (port 1I:box 1:bay 2, SAS, 146 GB, OK)
          Mirror Group 1:
             physicaldrive 1I:1:1 (port 1I:box 1:bay 1, SAS, 146 GB, OK)
          Drive Type: Data

       Array: A
          Interface Type: SAS
          Unused Space: 0 MB
          Status: OK
          Array Type: Data

          physicaldrive 1I:1:1
             Port: 1I
             Box: 1
             Bay: 1
             Status: OK
             Drive Type: Data Drive
             Interface Type: SAS
             Size: 146 GB
             Rotational Speed: 10000
             Firmware Revision: HPDE
             Serial Number: 3NM57RF40000983878FX
             Model: HP DG146BB976
             Current Temperature (C): 29
             Maximum Temperature (C): 35
             PHY Count: 2
             PHY Transfer Rate: Unknown, Unknown

          physicaldrive 1I:1:2
             Port: 1I
             Box: 1
             Bay: 2
             Status: OK
             Drive Type: Data Drive
             Interface Type: SAS
             Size: 146 GB
             Rotational Speed: 10000
             Firmware Revision: HPDE
             Serial Number: 3NM55VQC000098388524
             Model: HP DG146BB976
             Current Temperature (C): 29
             Maximum Temperature (C): 36
             PHY Count: 2
             PHY Transfer Rate: Unknown, Unknown

       Logical Drive: 2
          Size: 546.8 GB
          Fault Tolerance: RAID 5
          Heads: 255
          Sectors Per Track: 32
          Cylinders: 65535
          Strip Size: 64 KB
          Full Stripe Size: 256 KB
          Status: OK
          Caching: Enabled
          Parity Initialization Status: Initialization Completed
          Unique Identifier: 600508B100184A395356573334550003
          Disk Name: /dev/cciss/c0d1
          Mount Points: None
          OS Status: LOCKED
          Logical Drive Label: A5C9C6F81504
          Drive Type: Data

       Array: B
          Interface Type: SAS
          Unused Space: 0 MB
          Status: OK
          Array Type: Data

          physicaldrive 1I:1:3
             Port: 1I
             Box: 1
             Bay: 3
             Status: OK
             Drive Type: Data Drive
             Interface Type: SAS
             Size: 146 GB
             Rotational Speed: 10000
             Firmware Revision: HPDE
             Serial Number: 3NM2H5PE00009802NK19
             Model: HP DG146ABAB4
             Current Temperature (C): 30
             Maximum Temperature (C): 37
             PHY Count: 1
             PHY Transfer Rate: Unknown

          physicaldrive 1I:1:4
             Port: 1I
             Box: 1
             Bay: 4
             Status: OK
             Drive Type: Data Drive
             Interface Type: SAS
             Size: 146 GB
             Rotational Speed: 10000
             Firmware Revision: HPDE
             Serial Number: 3NM28YY400009750MKPJ
             Model: HP DG146ABAB4
             Current Temperature (C): 31
             Maximum Temperature (C): 36
             PHY Count: 1
             PHY Transfer Rate: 3.0Gbps

          physicaldrive 2I:1:5
             Port: 2I
             Box: 1
             Bay: 5
             Status: OK
             Drive Type: Data Drive
             Interface Type: SAS
             Size: 146 GB
             Rotational Speed: 10000
             Firmware Revision: HPDE
             Serial Number: 3NM2FGYV00009802N3GN
             Model: HP DG146ABAB4
             Current Temperature (C): 30
             Maximum Temperature (C): 38
             PHY Count: 1
             PHY Transfer Rate: Unknown

          physicaldrive 2I:1:6
             Port: 2I
             Box: 1
             Bay: 6
             Status: OK
             Drive Type: Data Drive
             Interface Type: SAS
             Size: 146 GB
             Rotational Speed: 10000
             Firmware Revision: HPDE
             Serial Number: 3NM8AFAK00009920MMV1
             Model: HP DG146BB976
             Current Temperature (C): 31
             Maximum Temperature (C): 41
             PHY Count: 2
             PHY Transfer Rate: Unknown, Unknown

          physicaldrive 2I:1:7
             Port: 2I
             Box: 1
             Bay: 7
             Status: OK
             Drive Type: Data Drive
             Interface Type: SAS
             Size: 146 GB
             Rotational Speed: 10000
             Firmware Revision: HPDE
             Serial Number: 3NM2FJQD00009801MSHQ
             Model: HP DG146ABAB4
             Current Temperature (C): 29
             Maximum Temperature (C): 39
             PHY Count: 1
             PHY Transfer Rate: Unknown
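    For what it's worth, "OS Status: LOCKED" usually means the kernel still holds /dev/cciss/c0d1 open (LVM, multipath, or a stale mapping), so the usual sequence is to release those holders and retry the delete. A sketch only - the volume group name is a placeholder, and the delete is destructive:

    # Who still has the device open?
    lsof /dev/cciss/c0d1* 2>/dev/null
    dmsetup ls --tree                 # look for device-mapper maps backed by c0d1

    # If LVM lives on it, deactivate the VG first (placeholder name)
    vgchange -an <vgname>

    # Then retry the removal (destroys logical drive 2 / array B)
    hpacucli ctrl slot=0 logicaldrive 2 delete forced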

    Read the article

  • Delay of mail delivery - Hosted exchange provider

    - by alex
    Hi, I recently signed up with a new hosted email provider. When I send mail (from OWA or Outlook) there is a delay of up to 3 minutes from when I send the message to when it's received (in my gmail account, for example). I've listed the headers below. Is there anything I can advise my new email host to do? My previous email host delivers within 5 seconds!!

    New email provider:

    Delivered-To: ****.*****@******.co.uk.test-google-a.com
    Received: by 10.223.120.148 with SMTP id d20cs333125far; Mon, 30 Nov 2009 08:49:43 -0800 (PST)
    Received: by 10.213.106.202 with SMTP id y10mr4864870ebo.35.1259599782838; Mon, 30 Nov 2009 08:49:42 -0800 (PST)
    Return-Path:
    Received: from relay005.apm-internet.net (relay005.apm-internet.net [85.119.248.8]) by mx.google.com with SMTP id 26si13016480ewy.43.2009.11.30.08.49.42; Mon, 30 Nov 2009 08:49:42 -0800 (PST)
    Received-SPF: neutral (google.com: 85.119.248.8 is neither permitted nor denied by best guess record for domain of ****@*******.com) client-ip=85.119.248.8;
    Authentication-Results: mx.google.com; spf=neutral (google.com: 85.119.248.8 is neither permitted nor denied by best guess record for domain of ****@*******.com) smtp.mail=****@*******.com
    Received: (qmail 63915 invoked from network); 30 Nov 2009 16:49:41 -0000
    Received: from unknown (HELO mx-out-manc2.simplymailsolutions.com) (88.151.129.22) by relay005.apm-internet.net with SMTP; 30 Nov 2009 16:49:42 -0000
    X-APM-IP: 88.151.129.22
    X-APM-Score: 4
    Received-SPF: none (relay005.apm-internet.net: domain at alexjamesbrown.com does not designate permitted sender hosts)
    Received: from [10.1.20.1] (helo=win-s-manc1.shared.ifeltd.com) by mx-out-manc2.simplymailsolutions.com with esmtp (Exim 4.63) (envelope-from ) id 1NF9QZ-0005By-Hw for ****.*****@******.co.uk; Mon, 30 Nov 2009 16:48:46 +0000
    Received: from sha-exch8.shared.ifeltd.com ([10.1.20.8]) by win-s-manc1.shared.ifeltd.com with Microsoft SMTPSVC(6.0.3790.3959); Mon, 30 Nov 2009 16:48:34 +0000
    Received: from sha-exch9.shared.ifeltd.com ([10.1.20.9]) by sha-exch8.shared.ifeltd.com with Microsoft SMTPSVC(6.0.3790.3959); Mon, 30 Nov 2009 16:48:34 +0000
    Received: from SHA-EXCH13.shared.ifeltd.com (10.1.20.13) by sha-exch9.shared.ifeltd.com (10.1.20.9) with Microsoft SMTP Server (TLS) id 8.1.393.1; Mon, 30 Nov 2009 16:48:25 +0000
    Received: from SHA-EXCH12.shared.ifeltd.com ([fe80::ecba:36d0:eec5:c928]) by SHA-EXCH13.shared.ifeltd.com ([fe80::212b:916c:70c7:a4e5%11]) with mapi; Mon, 30 Nov 2009 16:48:05 +0000
    From: Alex Brown
    To: "****.*****@*****.co.uk"
    Date: Mon, 30 Nov 2009 16:48:04 +0000
    Subject: testing
    Thread-Topic: testing
    Thread-Index: AQHKcdzZg4oiDsOYIEio/7k6bCk8BQ==
    Message-ID:
    Accept-Language: en-US, en-GB
    Content-Language: en-GB
    X-MS-Has-Attach:
    X-MS-TNEF-Correlator:
    acceptlanguage: en-US, en-GB
    Content-Type: text/plain; charset="us-ascii"
    Content-Transfer-Encoding: quoted-printable
    MIME-Version: 1.0
    X-OriginalArrivalTime: 30 Nov 2009 16:48:34.0235 (UTC) FILETIME=[F48178B0:01CA71DC]

    Here are the headers using my previous exchange host:

    Delivered-To: ****.*****@******.co.uk.test-google-a.com
    Received: by 10.223.120.148 with SMTP id d20cs333076far; Mon, 30 Nov 2009 08:48:35 -0800 (PST)
    Received: by 10.213.2.70 with SMTP id 6mr4797985ebi.25.1259599715739; Mon, 30 Nov 2009 08:48:35 -0800 (PST)
    Return-Path:
    Received: from relay005.apm-internet.net (relay005.apm-internet.net [85.119.248.8]) by mx.google.com with SMTP id 26si13030993ewy.23.2009.11.30.08.48.35; Mon, 30 Nov 2009 08:48:35 -0800 (PST)
    Received-SPF: neutral (google.com: 85.119.248.8 is neither permitted nor denied by best guess record for domain of ****@*********.com) client-ip=85.119.248.8;
    Authentication-Results: mx.google.com; spf=neutral (google.com: 85.119.248.8 is neither permitted nor denied by best guess record for domain of ****@*********.com) smtp.mail=****@*********.com
    Received: (qmail 60920 invoked from network); 30 Nov 2009 16:48:34 -0000
    Received: from unknown (HELO MTAb.MsExchange2007.com) (89.31.236.50) by relay005.apm-internet.net with SMTP; 30 Nov 2009 16:48:35 -0000
    X-APM-IP: 89.31.236.50
    X-APM-Score: 1
    Received-SPF: none (relay005.apm-internet.net: domain at alexjamesbrown.com does not designate permitted sender hosts)
    Received: from EXHUB02.SL.local (no.ptr.hostlogic.biz [89.31.236.28]) by MTAb.MsExchange2007.com (Spam Firewall) with ESMTP id B677A34FE0F for ; Mon, 30 Nov 2009 16:48:33 +0000 (GMT)
    Received: from EXHUB02.SL.local (no.ptr.hostlogic.biz [89.31.236.28]) by MTAb.MsExchange2007.com with ESMTP id 8X5B8V4tExVzoNyU for ; Mon, 30 Nov 2009 16:48:34 +0000 (GMT)
    Received: from EXCCR03STORE.SL.local ([10.0.0.2]) by EXHUB02.SL.local ([192.168.92.64]) with mapi; Mon, 30 Nov 2009 16:48:31 +0000
    From: Alex James Brown
    To: "****.*****@******.co.uk"
    Date: Mon, 30 Nov 2009 16:48:30 +0000
    Subject: testing from o
    Thread-Topic: testing from o
    Thread-Index: AQHKcdzyY1iBFWiol0ykG6xPQUZiTg==
    Message-ID:
    Accept-Language: en-US, en-GB
    Content-Language: en-GB
    X-MS-Has-Attach:
    X-MS-TNEF-Correlator:
    acceptlanguage: en-US, en-GB
    Content-Type: text/plain; charset="us-ascii"
    Content-Transfer-Encoding: quoted-printable
    MIME-Version: 1.0
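    Reading the timestamps of adjacent Received headers already localizes the delay: in the first trace the message appears to sit on the provider's outbound server for almost a minute (accepted 16:48:46, next hop 16:49:41) before relay005 receives it, which points at their outbound queue rather than at the Exchange side. To measure it reproducibly, a timed direct submission to that relay can help - a sketch; the hostname is taken from the headers above, swaks is assumed installed (apt-get install swaks), and the addresses are placeholders:

    time swaks --server mx-out-manc2.simplymailsolutions.com \
        --from [email protected] --to [email protected] \
        --header "Subject: delivery timing test"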

    Read the article

  • DHCP reply packets do not make it into KVM instance in OpenStack

    - by Lorin Hochstein
    I'm running a KVM instance inside of OpenStack, and it isn't getting an IP address from the DHCP server. Using tcpdump, I can see the request and reply packets on vnet0 of the compute host:

    # tcpdump -i vnet0 -n port 67 or port 68
    tcpdump: WARNING: vnet0: no IPv4 address assigned
    tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
    listening on vnet0, link-type EN10MB (Ethernet), capture size 65535 bytes
    19:44:56.176727 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:46:f6:11, length 300
    19:44:56.176785 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:46:f6:11, length 300
    19:44:56.177315 IP 10.40.0.1.67 > 10.40.0.3.68: BOOTP/DHCP, Reply, length 319
    19:45:02.179834 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:46:f6:11, length 300
    19:45:02.179904 IP 0.0.0.0.68 > 255.255.255.255.67: BOOTP/DHCP, Request from fa:16:3e:46:f6:11, length 300
    19:45:02.180375 IP 10.40.0.1.67 > 10.40.0.3.68: BOOTP/DHCP, Reply, length 319

    However, if I do the same thing on eth0 inside the KVM instance, I only see the request packets, not the reply packets. What would prevent the packets from making it from vnet0 of the host to eth0 of the guest? My host is running Ubuntu 12.04 and my guest is running CentOS 6.3. Note that I have added this rule to my iptables, but it doesn't resolve the issue:

    -A POSTROUTING -p udp -m udp --dport 68 -j CHECKSUM --checksum-fill

    The instance corresponds to vnet0 and is connected via br100:

    # brctl show
    bridge name   bridge id           STP enabled   interfaces
    br100         8000.54781a8605f2   no            eth1
                                                    vnet0
                                                    vnet1
    virbr0        8000.000000000000   yes

    Here's the full iptables-save:

    # Generated by iptables-save v1.4.12 on Tue Apr 2 19:47:27 2013
    *nat
    :PREROUTING ACCEPT [8323:2553683]
    :INPUT ACCEPT [7993:2494942]
    :OUTPUT ACCEPT [6158:461050]
    :POSTROUTING ACCEPT [6455:511595]
    :nova-compute-OUTPUT - [0:0]
    :nova-compute-POSTROUTING - [0:0]
    :nova-compute-PREROUTING - [0:0]
    :nova-compute-float-snat - [0:0]
    :nova-compute-snat - [0:0]
    :nova-postrouting-bottom - [0:0]
    -A PREROUTING -j nova-compute-PREROUTING
    -A OUTPUT -j nova-compute-OUTPUT
    -A POSTROUTING -j nova-compute-POSTROUTING
    -A POSTROUTING -j nova-postrouting-bottom
    -A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p tcp -j MASQUERADE --to-ports 1024-65535
    -A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -p udp -j MASQUERADE --to-ports 1024-65535
    -A POSTROUTING -s 192.168.122.0/24 ! -d 192.168.122.0/24 -j MASQUERADE
    -A nova-compute-snat -j nova-compute-float-snat
    -A nova-postrouting-bottom -j nova-compute-snat
    COMMIT
    # Completed on Tue Apr 2 19:47:27 2013
    # Generated by iptables-save v1.4.12 on Tue Apr 2 19:47:27 2013
    *mangle
    :PREROUTING ACCEPT [7969:5385812]
    :INPUT ACCEPT [7905:5363718]
    :FORWARD ACCEPT [158:48190]
    :OUTPUT ACCEPT [6877:8647975]
    :POSTROUTING ACCEPT [7035:8696165]
    -A POSTROUTING -o virbr0 -p udp -m udp --dport 68 -j CHECKSUM --checksum-fill
    -A POSTROUTING -p udp -m udp --dport 68 -j CHECKSUM --checksum-fill
    COMMIT
    # Completed on Tue Apr 2 19:47:27 2013
    # Generated by iptables-save v1.4.12 on Tue Apr 2 19:47:27 2013
    *filter
    :INPUT ACCEPT [2196774:15856921923]
    :FORWARD ACCEPT [0:0]
    :OUTPUT ACCEPT [2447201:1170227646]
    :nova-compute-FORWARD - [0:0]
    :nova-compute-INPUT - [0:0]
    :nova-compute-OUTPUT - [0:0]
    :nova-compute-inst-19 - [0:0]
    :nova-compute-inst-20 - [0:0]
    :nova-compute-local - [0:0]
    :nova-compute-provider - [0:0]
    :nova-compute-sg-fallback - [0:0]
    :nova-filter-top - [0:0]
    -A INPUT -j nova-compute-INPUT
    -A INPUT -i virbr0 -p udp -m udp --dport 53 -j ACCEPT
    -A INPUT -i virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
    -A INPUT -i virbr0 -p udp -m udp --dport 67 -j ACCEPT
    -A INPUT -i virbr0 -p tcp -m tcp --dport 67 -j ACCEPT
    -A FORWARD -j nova-filter-top
    -A FORWARD -j nova-compute-FORWARD
    -A FORWARD -d 192.168.122.0/24 -o virbr0 -m state --state RELATED,ESTABLISHED -j ACCEPT
    -A FORWARD -s 192.168.122.0/24 -i virbr0 -j ACCEPT
    -A FORWARD -i virbr0 -o virbr0 -j ACCEPT
    -A FORWARD -o virbr0 -j REJECT --reject-with icmp-port-unreachable
    -A FORWARD -i virbr0 -j REJECT --reject-with icmp-port-unreachable
    -A OUTPUT -j nova-filter-top
    -A OUTPUT -j nova-compute-OUTPUT
    -A nova-compute-FORWARD -i br100 -j ACCEPT
    -A nova-compute-FORWARD -o br100 -j ACCEPT
    -A nova-compute-inst-19 -m state --state INVALID -j DROP
    -A nova-compute-inst-19 -m state --state RELATED,ESTABLISHED -j ACCEPT
    -A nova-compute-inst-19 -j nova-compute-provider
    -A nova-compute-inst-19 -s 10.40.0.1/32 -p udp -m udp --sport 67 --dport 68 -j ACCEPT
    -A nova-compute-inst-19 -s 10.40.0.0/16 -j ACCEPT
    -A nova-compute-inst-19 -p tcp -m tcp --dport 22 -j ACCEPT
    -A nova-compute-inst-19 -p icmp -j ACCEPT
    -A nova-compute-inst-19 -j nova-compute-sg-fallback
    -A nova-compute-inst-20 -m state --state INVALID -j DROP
    -A nova-compute-inst-20 -m state --state RELATED,ESTABLISHED -j ACCEPT
    -A nova-compute-inst-20 -j nova-compute-provider
    -A nova-compute-inst-20 -s 10.40.0.1/32 -p udp -m udp --sport 67 --dport 68 -j ACCEPT
    -A nova-compute-inst-20 -s 10.40.0.0/16 -j ACCEPT
    -A nova-compute-inst-20 -p tcp -m tcp --dport 22 -j ACCEPT
    -A nova-compute-inst-20 -p icmp -j ACCEPT
    -A nova-compute-inst-20 -j nova-compute-sg-fallback
    -A nova-compute-local -d 10.40.0.3/32 -j nova-compute-inst-19
    -A nova-compute-local -d 10.40.0.4/32 -j nova-compute-inst-20
    -A nova-compute-sg-fallback -j DROP
    -A nova-filter-top -j nova-compute-local
    COMMIT
    # Completed on Tue Apr 2 19:47:27 2013
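    A frequent cause of exactly this symptom (reply visible on vnet0, never seen inside the guest) is a UDP checksum broken by offloading: tcpdump on the host doesn't validate it, but the guest's stack silently drops the datagram. A sketch for confirming and working around it - the interface names follow the output above, and the ethtool changes do not survive a reboot:

    # On the compute host: -vv prints "bad udp cksum" if the reply's checksum is wrong
    tcpdump -vv -n -i vnet0 port 67 or port 68

    # Disable transmit checksum offload on the tap device (host side)
    ethtool -K vnet0 tx off

    # Or, inside the guest, stop relying on offloaded checksums
    ethtool -K eth0 rx off tx off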

    Read the article

  • Postfix "warning: cannot get RSA private key from file"

    - by phew
    I just followed this tutorial to set up a Postfix mailserver with Dovecot and MySQL as backend for virtual users. I now have most parts working: I can connect via pop3, pop3s, imap and imaps. Using "echo TEST-MAIL | mail [email protected]" works fine; when I log into my hotmail account the email shows up. It also works in reverse, since my MX entry for mydomain.com has finally propagated, so I can receive emails sent from [email protected] to [email protected] and view them in Thunderbird using STARTTLS via IMAP.

    Doing a bit more research after getting the error message "5.7.1 : Relay access denied" when trying to send mails to [email protected] while logged in as [email protected], I figured out that my server was acting as an "open mail relay", which - of course - is a bad thing. Digging into the optional parts of the tutorial, as shown in workaround.org/comment/2536 and workaround.org/ispmail/squeeze/postfix-smtp-auth, I completed those steps as well so I could send mails as [email protected] through Mozilla Thunderbird without getting "5.7.1 : Relay access denied" anymore (common mailservers reject open-relayed emails). But now I've run into an error trying to get Postfix working with SMTPS; /var/log/mail.log reads:

    Sep 28 17:29:34 domain postfix/smtpd[20251]: warning: cannot get RSA private key from file /etc/ssl/certs/postfix.pem: disabling TLS support
    Sep 28 17:29:34 domain postfix/smtpd[20251]: warning: TLS library problem: 20251:error:0906D06C:PEM routines:PEM_read_bio:no start line:pem_lib.c:650:Expecting: ANY PRIVATE KEY:
    Sep 28 17:29:34 domain postfix/smtpd[20251]: warning: TLS library problem: 20251:error:140B0009:SSL routines:SSL_CTX_use_PrivateKey_file:PEM lib:ssl_rsa.c:669:

    That error is logged right after I try to send a mail from my newly installed mailserver using SMTP SSL/TLS via port 465 in Thunderbird. Thunderbird then tells me a timeout occurred. Google has a few results concerning the problem, yet I couldn't get it working with any of them. I would link some of them here, but as a new user I am only allowed to use two hyperlinks.

    My /etc/postfix/master.cf looks like:

    smtp      inet  n  -  -  -  -  smtpd
    smtps     inet  n  -  -  -  -  smtpd
      -o smtpd_tls_wrappermode=yes
      -o smtpd_sasl_auth_enable=yes

    and nmap tells me:

    PORT    STATE SERVICE
    [...]
    465/tcp open  smtps
    [...]

    My /etc/postfix/main.cf looks like:

    smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
    biff = no
    append_dot_mydomain = no
    readme_directory = no
    #smtpd_tls_cert_file = /etc/ssl/certs/postfix.pem #default postfix generated
    #smtpd_tls_key_file = /etc/ssl/private/ssl-cert-snakeoil.key #default postfix generated
    smtpd_tls_cert_file = /etc/ssl/certs/postfix.pem
    smptd_tls_key_file = /etc/ssl/private/postfix.pem
    smtpd_use_tls = yes
    smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
    smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
    smtpd_sasl_type = dovecot
    smtpd_sasl_path = private/auth
    smptd_sasl_auth_enable = yes
    smtpd_recipient_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
    myhostname = mydomain.com
    alias_maps = hash:/etc/aliases
    alias_database = hash:/etc/aliases
    myorigin = /etc/mailname
    mydestination = localhost.com, localhost
    relayhost =
    mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
    mailbox_size_limit = 0
    recipient_delimiter = +
    inet_interfaces = all
    virtual_mailbox_domains = mysql:/etc/postfix/mysql-virtual-mailbox-domains.cf
    virtual_mailbox_maps = mysql:/etc/postfix/mysql-virtual-mailbox-maps.cf
    virtual_alias_maps = mysql:/etc/postfix/mysql-virtual-alias-maps.cf
    virtual_transport = dovecot
    dovecot_destination_recipient_limit = 1
    mailbox_command = /usr/lib/dovecot/deliver

    The *.pem files were created as described in the tutorial above:

    To create a certificate to be used by Postfix use:
    openssl req -new -x509 -days 3650 -nodes -out /etc/ssl/certs/postfix.pem -keyout /etc/ssl/private/postfix.pem

    Do not forget to set the permissions on the private key so that no unauthorized people can read it:
    chmod o= /etc/ssl/private/postfix.pem

    You will have to tell Postfix where to find your certificate and private key because by default it will look for a dummy certificate file called "ssl-cert-snakeoil":
    postconf -e smtpd_tls_cert_file=/etc/ssl/certs/postfix.pem
    postconf -e smtpd_tls_key_file=/etc/ssl/private/postfix.pem

    I think I don't have to include /etc/dovecot/dovecot.conf here, as login via imaps and pop3s works fine according to the logs. The only remaining problem is making Postfix properly use the self-generated, self-signed certificates. Any help appreciated!

    EDIT: I just tried this different tutorial on generating a self-signed certificate for Postfix and am still getting the same error. I really don't know what else to test. I also checked the SSL libraries, but all seems to be fine:

    root@domain:~# ldd /usr/sbin/postfix
    linux-vdso.so.1 => (0x00007fff91b25000)
    libpostfix-global.so.1 => /usr/lib/libpostfix-global.so.1 (0x00007f6f8313d000)
    libpostfix-util.so.1 => /usr/lib/libpostfix-util.so.1 (0x00007f6f82f07000)
    libssl.so.0.9.8 => /usr/lib/libssl.so.0.9.8 (0x00007f6f82cb1000)
    libcrypto.so.0.9.8 => /usr/lib/libcrypto.so.0.9.8 (0x00007f6f82910000)
    libsasl2.so.2 => /usr/lib/libsasl2.so.2 (0x00007f6f826f7000)
    libdb-4.8.so => /usr/lib/libdb-4.8.so (0x00007f6f8237c000)
    libnsl.so.1 => /lib/libnsl.so.1 (0x00007f6f82164000)
    libresolv.so.2 => /lib/libresolv.so.2 (0x00007f6f81f4e000)
    libc.so.6 => /lib/libc.so.6 (0x00007f6f81beb000)
    libdl.so.2 => /lib/libdl.so.2 (0x00007f6f819e7000)
    libz.so.1 => /usr/lib/libz.so.1 (0x00007f6f817d0000)
    libpthread.so.0 => /lib/libpthread.so.0 (0x00007f6f815b3000)
    /lib64/ld-linux-x86-64.so.2 (0x00007f6f83581000)

    After following Ansgar Wiechers' instructions it's finally working: postconf -n contained the lines it should, the certificate/key check via openssl showed that both files are valid, so it was indeed a permissions problem! I didn't know that chown'ing the /etc/ssl/*/postfix.pem files to postfix:postfix is not enough for Postfix to read the files.
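    For anyone landing here with the same log lines, a few quick checks separate a bad key from a bad permission - the paths follow the main.cf above:

    # Does the key parse at all, and does it match the certificate?
    openssl rsa  -in /etc/ssl/private/postfix.pem -check -noout
    openssl x509 -in /etc/ssl/certs/postfix.pem  -noout -modulus | openssl md5
    openssl rsa  -in /etc/ssl/private/postfix.pem -noout -modulus | openssl md5
    # (the two md5 sums must be identical)

    # Permissions: the directory matters as much as the file. On Debian,
    # /etc/ssl/private is typically mode 0700/0710, so file ownership alone
    # isn't enough if the directory can't be traversed.
    ls -ld /etc/ssl/private
    ls -l  /etc/ssl/private/postfix.pem

    # Also worth grepping for misspelled parameter names: the main.cf above
    # contains "smptd_tls_key_file", which Postfix treats as an unused parameter
    grep smptd /etc/postfix/main.cf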

    Read the article

  • Email sent from server with rDNS & SPF being blocked by Hotmail

    - by Canadaka
    I have been unable to send email to users on hotmail or other Microsoft email servers for some time. It's been a major headache trying to find out why and how to fix the issue. The blocked emails are sent from my domain, canadaka.net. I use Google Apps to host my regular email service for my @canadaka.net email addresses. I can send email from my desktop or Gmail to a hotmail account without any problem, but any email sent from my server on behalf of canadaka.net is blocked, not even arriving in the junk folder.

    The IP that the emails are sent from is the same IP that my site is hosted on: 66.199.162.177. This IP has been mine since August 2010; I had a different IP for the previous 3-4 years. This IP is not on any credible spam lists: http://www.anti-abuse.org/multi-rbl-check-results/?host=66.199.162.177

    The one list my IP is on, spamcannibal.org, seems to be out of my control; it says "no reverse DNS, MX host should have rDNS - RFC1912 2.1". But since I use Google for my email hosting, I don't have control over setting up rDNS for all the MX records. I do have reverse DNS set up for my IP, though; it resolves to "mail.canadaka.net".

    I have signed up for SNDS and was approved. My IP says "All of the specified IPs have normal status."

    Sender Score: 100 (https://www.senderscore.org/lookup.php?lookup=66.199.162.177&ipLookup.x=55&ipLookup.y=14)

    My McAfee threat level seems fine.

    I have a TXT SPF record set up. I am currently using xname.org as my DNS, and they don't have a field for SPF, but their FAQ says to add the SPF info as a TXT entry:

    v=spf1 a include:_spf.google.com ~all

    Some "SPF checking" tools I've used detect that my domain has a valid SPF, but others don't, like Microsoft's SPF wizard; I think this is because it looks specifically for an SPF record rather than the TXT one: "No SPF Record Found. A and MX Records Available".

    From my home I can run "nslookup -type=TXT canadaka.net" and it returns:

    Server: google-public-dns-a.google.com
    Address: 8.8.8.8
    Non-authoritative answer:
    canadaka.net text = "v=spf1 a include:_spf.google.com ~all"

    One strange thing I found is that I'm unable to ping hotmail.com or msn.com or do a "telnet mail.hotmail.com 25". I am able to ping gmail.com and many other domains I tried. I tried changing my DNS servers to Google's Public DNS and did an ipconfig /flushdns, but that had no effect. I am, however, able to connect with telnet to mx1.hotmail.com.

    This is what the email headers look like when I send to a Google email server and receive the email with no trouble. You can see that SPF is passing:

    Delivered-To: [email protected]
    Received: by 10.146.168.12 with SMTP id q12cs91243yae; Sun, 27 Feb 2011 18:01:49 -0800 (PST)
    Received: by 10.43.48.7 with SMTP id uu7mr4292541icb.68.1298858509242; Sun, 27 Feb 2011 18:01:49 -0800 (PST)
    Return-Path:
    Received: from canadaka.net ([66.199.162.177]) by mx.google.com with ESMTP id uh9si8493137icb.127.2011.02.27.18.01.45; Sun, 27 Feb 2011 18:01:48 -0800 (PST)
    Received-SPF: pass (google.com: domain of [email protected] designates 66.199.162.177 as permitted sender) client-ip=66.199.162.177;
    Authentication-Results: mx.google.com; spf=pass (google.com: domain of [email protected] designates 66.199.162.177 as permitted sender) [email protected]
    Message-Id: <[email protected]
    Received: from coruscant ([127.0.0.1]:12907) by canadaka.net with [XMail 1.27 ESMTP Server] id for from ; Sun, 27 Feb 2011 18:01:29 -0800
    Date: Sun, 27 Feb 2011 18:01:29 -0800
    Subject: Test
    To: [email protected]
    From: XXXX
    Reply-To: [email protected]
    X-Mailer: PHP/5.2.13

    I can send to Gmail and other email services fine. I don't know what I'm doing wrong!

    UPDATE 1: I have been removed from Hotmail's IP block and am now able to send emails to Hotmail, but they are all going directly to the Junk folder.

    UPDATE 2: I used telnet to send a test message to port25.com; it seems my SPF is not being detected:

    Result: neutral (SPF-Result: None)
    canadaka.net. SPF (no records)
    canadaka.net. TXT (no records)

    I do have a TXT record; it's been there for years, though I did change it a week ago. Other sites that allow you to check your SPF detect it, but some others, like Microsoft's wizard, don't. This is what my SPF record in my xname.org DNS file looks like:

    canadaka.net. 86400 IN TXT "v=spf1 a include:_spf.google.com ~all"

    I did have a nameserver as my 4th option that doesn't have the TXT records, since it doesn't support them. So I removed it from the list and instead added wtfdns.com, which does support TXT, as my 4th and 5th nameservers.
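    Since the validators disagree here, querying both record types directly takes the guesswork out - a small sketch using the values from the post; dig is assumed to be installed, and an empty answer for the dedicated SPF type is normal, since that record type (type 99) is rarely published:

    # The TXT record most SPF checkers read
    dig +short TXT canadaka.net

    # The separate SPF record type some older wizards queried
    dig +short SPF canadaka.net

    # Reverse DNS for the sending IP
    dig +short -x 66.199.162.177

    If the TXT answer only shows up on some of the listed nameservers, the mismatched 4th/5th nameserver would explain why checkers disagree depending on which server they happen to ask.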

    Read the article

  • Allow Incoming Responses Apache. On Ubuntu 11.10 - Curl

    - by Daniel Adarve
    I'm trying to get a cURL response from an outside server; however, I noticed I can neither ping the server in question nor connect to it. I tried disabling the iptables firewall, but had no success. My server runs behind a Cisco Linksys WRTN310N router with the DD-WRT firmware installed, in which I have already disabled the firewall. Here are my network settings:

    ifconfig
    eth0   Link encap:Ethernet  HWaddr 00:26:b9:76:73:6b
           inet addr:192.168.1.120  Bcast:192.168.1.255  Mask:255.255.255.0
           inet6 addr: fe80::226:b9ff:fe76:736b/64 Scope:Link
           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
           RX packets:49713 errors:0 dropped:0 overruns:0 frame:0
           TX packets:30987 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:1000
           RX bytes:52829022 (52.8 MB)  TX bytes:5438223 (5.4 MB)
           Interrupt:16

    lo     Link encap:Local Loopback
           inet addr:127.0.0.1  Mask:255.0.0.0
           inet6 addr: ::1/128 Scope:Host
           UP LOOPBACK RUNNING  MTU:16436  Metric:1
           RX packets:341 errors:0 dropped:0 overruns:0 frame:0
           TX packets:341 errors:0 dropped:0 overruns:0 carrier:0
           collisions:0 txqueuelen:0
           RX bytes:27604 (27.6 KB)  TX bytes:27604 (27.6 KB)

    /etc/resolv.conf
    nameserver 192.168.1.1

    /etc/nsswitch.conf
    passwd: compat
    group: compat
    shadow: compat
    hosts: files dns
    networks: files
    protocols: db files
    services: db files
    ethers: db files
    rpc: db files
    netgroup: nis

    /etc/host.conf
    order hosts,bind
    multi on

    /etc/hosts
    127.0.0.1 localhost
    127.0.0.1 callcenter
    # The following lines are desirable for IPv6 capable hosts
    ::1     ip6-localhost ip6-loopback
    fe00::0 ip6-localnet
    ff00::0 ip6-mcastprefix
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters

    /etc/network/interfaces
    # The loopback network interface
    auto lo
    iface lo inet loopback
    # The primary network interface
    auto eth0
    iface eth0 inet static
        address 192.168.1.120
        netmask 255.255.255.0
        network 192.168.1.1
        broadcast 192.168.1.255
        gateway 192.168.1.1

    The URL I'm trying to connect to is https://www.veripayment.com/integration/index.php. When I ping it in the terminal, here's what I get:

    daniel@callcenter:~$ ping https://www.veripayment.com/integration/index.php
    ping: unknown host https://www.veripayment.com/integration/index.php
    daniel@callcenter:~$ ping www.veripayment.com
    PING www.veripayment.com (69.172.200.5) 56(84) bytes of data.
    --- www.veripayment.com ping statistics ---
    2 packets transmitted, 0 received, 100% packet loss, time 1007ms

    PHP function in CodeIgniter:

    public function authorizePayment() {
        //---------------------------------------------------
        // Authorize a payment
        //---------------------------------------------------
        // Get variables from POST array
        $post_str = "action=payment&business="       . urlencode($this->input->post('business'))
            . "&vericode="                    . urlencode($this->input->post('vericode'))
            . "&item_name="                   . urlencode($this->input->post('item_name'))
            . "&item_code="                   . urlencode($this->input->post('item_code'))
            . "&quantity="                    . urlencode($this->input->post('quantity'))
            . "&amount="                      . urlencode($this->input->post('amount'))
            . "&cc_type="                     . urlencode($this->input->post('cc_type'))
            . "&cc_number="                   . urlencode($this->input->post('cc_number'))
            . "&cc_expdate="                  . urlencode($this->input->post('cc_expdate_year')) . urlencode($this->input->post('cc_expdate_month'))
            . "&cc_security_code="            . urlencode($this->input->post('cc_security_code'))
            . "&shipment="                    . urlencode($this->input->post('shipment'))
            . "&first_name="                  . urlencode($this->input->post('first_name'))
            . "&last_name="                   . urlencode($this->input->post('last_name'))
            . "&address="                     . urlencode($this->input->post('address'))
            . "&city="                        . urlencode($this->input->post('city'))
            . "&state_or_province="           . urlencode($this->input->post('state_or_province'))
            . "&zip_or_postal_code="          . urlencode($this->input->post('zip_or_postal_code'))
            . "&country="                     . urlencode($this->input->post('country'))
            . "&shipping_address="            . urlencode($this->input->post('shipping_address'))
            . "&shipping_city="               . urlencode($this->input->post('shipping_city'))
            . "&shipping_state_or_province="  . urlencode($this->input->post('shipping_state_or_province'))
            . "&shipping_zip_or_postal_code=" . urlencode($this->input->post('shipping_zip_or_postal_code'))
            . "&shipping_country="            . urlencode($this->input->post('shipping_country'))
            . "&phone="                       . urlencode($this->input->post('phone'))
            . "&email="                       . urlencode($this->input->post('email'))
            . "&ip_address="                  . urlencode($this->input->post('ip_address'))
            . "&website_unique_id="           . urlencode($this->input->post('website_unique_id'));

        // Send URL string via CURL
        $backendUrl = "https://www.veripayment.com/integration/index.php";
        $this->curl->create($backendUrl);
        $this->curl->post($post_str);
        $return = $this->curl->execute();

        $result = array();
        // Explode array where blanks are found
        $resparray = explode(' ', $return);
        if ($resparray) {
            // save results into an array
            foreach ($resparray as $resp) {
                $keyvalue = explode('=', $resp);
                if (isset($keyvalue[1])) {
                    $result[$keyvalue[0]] = str_replace('"', '', $keyvalue[1]);
                }
            }
        }
        return $result;
    }

    This gets an empty result array. This function, however, worked fine on the previous server where the script was hosted; no modifications were made whatsoever. Thanks in advance.
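    Before touching the PHP, it's worth testing the exact HTTPS request from the server's shell - that separates a network or routing problem from a cURL-library problem, since a failed ping alone proves little (many hosts simply drop ICMP). A sketch; the traceroute variant needs root:

    # Does TCP/443 to the payment host work at all from this box?
    curl -v --max-time 15 https://www.veripayment.com/integration/index.php

    # Name resolution and network path, checked separately
    nslookup www.veripayment.com
    traceroute -T -p 443 www.veripayment.com   # TCP traceroute to port 443

    One detail in /etc/network/interfaces above also looks off: "network 192.168.1.1" would normally be "network 192.168.1.0" for a /24. That is probably harmless here, but worth correcting while debugging.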

    Read the article
