Search Results

Search found 2006 results on 81 pages for 'xxx xxx'.


  • SSH command from PHP script - nothing, yet works at the command line

    - by waxical
    I'm working on an EC2 box and trying to run an SSH command against another box. The command works at the command line, and even in php -a interactive mode. However, it does not work when run as apache. Example command: system('ssh -i /home/me/keys/key.pem [email protected] "ls"'); I've tried adding apache to the wheel group, and to gshadow on both boxes. I've also tried chowning the pem file to apache. Nothing. Yet the command responds fine in the two other use cases outlined. What's going on here? Does anyone know?
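
    A hedged first debugging step is to reproduce the failure as the web-server user, since ssh run under apache typically dies on the host-key prompt or an unreadable $HOME/.ssh rather than on the key itself (the user name 'apache' is an assumption; it may be 'www-data' on other distros):

        # See the real error interactively:
        sudo -u apache ssh -vvv -i /home/me/keys/key.pem [email protected] "ls"

        # A common workaround for the PHP call: fail loudly instead of
        # prompting, and keep known_hosts out of the picture:
        ssh -o BatchMode=yes -o StrictHostKeyChecking=no \
            -o UserKnownHostsFile=/dev/null \
            -i /home/me/keys/key.pem [email protected] "ls"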

  • Error from DNS validation service: RFC821 4.3

    - by ferdi
    So I've managed to set up bind9 and a mail server, but it seems something is wrong. I don't quite understand this error: "The configuration of your mail servers and your DNS are not ok! The report of the test is: mail.domain.com. - mail.domain.com - 208.xxx.xxx.xxx - lisa.domain.com Spam recognition software and RFC821 4.3 (also RFC2821 4.3.1) state that the hostname given in the SMTP greeting MUST have an A record pointing back to the same server." Can anyone explain this to me in a bit more detail, and maybe point me in a direction to solve this?
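
    The check itself is mechanical: whatever hostname the server announces in its 220 banner must have an A record resolving to the IP the client connected to. A hedged way to verify both halves (host names taken from the report above):

        telnet mail.domain.com 25        # note the hostname in the "220 ..." banner
        dig +short a lisa.domain.com     # must return the server's public IP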

  • AWS EC2: Could not connect FTP client?

    - by heathub
    My server OS: Amazon Linux. I am trying to set up FTP. I have installed vsftpd, opened ports 20-21, opened ports 1024-1048, basically followed every one of these steps, and started the vsftpd service (the status indicates [ok]). I use FileZilla for my FTP client. Here is my setting/configuration:
    Host: ec2-XX-XX-XXX-XX.compute-1.amazonaws.com
    Port: blank (but I have tried 20 and 21)
    Server Type: FTP - File Transfer Protocol
    Logon Type: Normal
    Username: (tried root and ec2-user)
    Transfer mode: tried passive and active
    I always get this error:
    Status: Waiting to retry...
    Status: Resolving address of ec2-XX-XX-XXX-XX.compute-1.amazonaws.com
    Status: Connecting to XX.XX.XXX.XX:21...
    Error: Connection timed out
    Error: Could not connect to server
    Have I missed any configuration/settings? EDIT: After executing /sbin/iptables -L -n, here is the result:
    Chain INPUT (policy ACCEPT)
    target     prot opt source               destination
    Chain FORWARD (policy ACCEPT)
    target     prot opt source               destination
    Chain OUTPUT (policy ACCEPT)
    target     prot opt source               destination
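
    Two hedged things to check beyond iptables: the ports must also be open in the EC2 security group, and passive FTP behind EC2's NAT needs the public IP advertised explicitly. A sketch for /etc/vsftpd/vsftpd.conf (the address value is a placeholder):

        pasv_enable=YES
        pasv_min_port=1024
        pasv_max_port=1048
        pasv_address=<the instance's public/elastic IP>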

  • Can Sphider be a search engine for an intranet?

    - by garcon1986
    Hello. Sphinx is a kind of search engine, but it has to be installed on the server. I can't install it on the server, so I have to find another solution. I have actually tested Sphider on a small site, but when I want to integrate it into my intranet, it doesn't work. The error output:
    1. Retrieving: http://localhost/XXX at 16:44:20. Updated Link To http://localhost/XXX Size of page: 1.54kb. Starting indexing at 16:44:20. Page contains less than 10 words. Links found: 1. New links: 1
    2. Retrieving: http://localhost/XXX.php at 16:44:20. Unreachable: http 404. Links found: 0. New links: 0
    Does anyone have ideas? Thanks

  • Logical move of a server to the UK - what do I do with the SSL certificates?

    - by flyfishr64
    I have been asked to move a Rails application from the US to the UK. This involves bringing up the Rails stack on Ubuntu 8.04.4; that's completed. I'm stumped by the SSL configuration, though. The plan was to bring this server up with the same domain name but temporarily use a subdomain (app2.xxx.com instead of app.xxx.com) during the move and for testing, then rename it to app.xxx.com when we're ready for the cutover (does that make sense?). In the meantime, we need a new cert for the app2 subdomain. So to generate a CSR I need a server key, but do I need a new one, or should I copy the one from the existing production server?
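
    The usual practice is a fresh key per certificate rather than copying the production key between machines. A hedged openssl sketch (file names are illustrative):

        openssl genrsa -out app2.xxx.com.key 2048
        openssl req -new -key app2.xxx.com.key -out app2.xxx.com.csr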

  • Postgres backup

    - by Abbass
    Hello, I have a Bacula script that does an automatic backup of a Postgres database. The script makes two backups of the database using pg_dump: the schema only and the data only.
    /usr/bin/pg_dump --format=c -s $dbname --file=$DUMPDIR/$dbname.schema.dump
    /usr/bin/pg_dump --format=c -a $dbname --file=$DUMPDIR/$dbname.data.dump
    The problem is that I can't figure out how to restore it with pg_restore. Do I need to create the database and the users first, then restore the schema, and finally the data? I did the following:
    pg_restore --format=c -s -C -d template1 xxx.schema.dump
    pg_restore --format=c -a -d xxx xxx.data.dump
    The first restore creates the database with empty tables, but the second gives many errors like this one:
    pg_restore: [archiver (db)] COPY failed: ERROR: insert or update on table "Table1" violates foreign key constraint "fkf6977a478dd41734" DETAIL: Key (contentid)=(1474566) is not present in table "Table23".
    Any ideas?
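
    A hedged explanation of the errors: a data-only restore loads tables in arbitrary order, so foreign keys fire mid-load. pg_restore's --disable-triggers flag (which requires superuser rights) defers that checking for the data pass:

        pg_restore --format=c -s -C -d template1 xxx.schema.dump
        pg_restore --format=c -a --disable-triggers -d xxx xxx.data.dump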

  • Redirecting a subdomain to subdomain/folder

    - by Johnbritto
    I have a Linux server with the Plesk panel. I am running a SourceForge VM in NAT mode with the static IP 172.16.63.XX. On my host I have configured subdomains (vhost.conf) with ProxyPass to connect to the VM. I can access the SourceForge VM over HTTP. I am looking for a way to redirect HTTP to HTTPS: http://xxx.mydomain.com to https://xxx.mydomain.com/sf/sfmain/do/home/ . I also just need to know: if I own an SSL cert for mydomain.com and I redirect xxx.mydomain.com to mydomain.com/folder, will the SSL apply to the redirected destination, i.e. mydomain.com/folder?
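
    On the certificate question: after the redirect the browser validates whatever host it lands on, so a cert issued for mydomain.com does cover mydomain.com/folder, but it does not cover xxx.mydomain.com itself unless it is a wildcard or lists the subdomain as a SAN. A minimal sketch of the redirect for the port-80 vhost (assumes mod_rewrite is enabled):

        RewriteEngine On
        RewriteRule ^/?$ https://xxx.mydomain.com/sf/sfmain/do/home/ [R=301,L]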

  • What steps are required to get DB2 working again after renaming the Windows XP system it was running on?

    - by Suppressingfire
    I think this is a fairly well known problem, but I haven't found a really solid solution to add to my toolbox. Here's the sequence of steps that leads to the problem:
    1. Install Windows (e.g., XP), naming the system XXX.
    2. Install DB2 and create some databases.
    3. Rename the system from XXX to YYY (via the System control panel's Computer Name tab).
    4. Reboot and find DB2 unable to start.
    How can I get DB2 up and running again without having to reinstall it and without having to rename the system back to XXX? I did find a blog post that hints at some registry values to tweak, but I'm hoping the SF community can come up with a solution in which I can have more confidence.
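
    A sketch of the fix most often cited for this, heavily hedged: the old computer name lives on in DB2's registry and admin configuration, and the exact command syntax varies by DB2 release, so verify against your version's documentation before relying on it:

        db2set -g DB2SYSTEM=YYY
        db2 update admin cfg using DB2SYSTEM YYY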

  • DocumentDB - Another Azure NoSQL Storage Service

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2014/08/25/documentdb---another-azure-nosql-storage-service.aspx
    Microsoft just released a bunch of new features for Azure on the 22nd, and the one I was most interested in is DocumentDB, a document NoSQL database service in the cloud.

    Quick Look at DocumentDB
    We can try DocumentDB from the new Azure preview portal. Just click the NEW button and select the item named DocumentDB to create a new account. Specify the name of the DocumentDB, which will be the endpoint we are going to use to connect to later. Select the capacity unit, resource group and subscription. In the resource group section we can select which region our DocumentDB will be located in. As with other Azure services, select the same location as the consumers of the DocumentDB, for example the website, web services, etc. After several minutes the DocumentDB will be ready. Click the KEYS button and we can find the URI and primary key, which will be used when connecting.
    Now let's open Visual Studio and try to use the DocumentDB we have just created. Create a new console application and install the DocumentDB .NET client library from NuGet with the keyword "DocumentDB". You need to select "Include Prerelease" in the NuGet Package Manager window since this library has not yet been released.
    Next we will create a new database and document collection under our DocumentDB account. The code below creates an instance of DocumentClient with the URI and primary key we just copied from the Azure portal, then creates a database and collection. It also prints the database and collection link strings, which will be used later to insert and query documents.

        static void Main(string[] args)
        {
            var endpoint = new Uri("https://shx.documents.azure.com:443/");
            var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA==";

            var client = new DocumentClient(endpoint, key);
            Run(client).Wait();

            Console.WriteLine("done");
            Console.ReadKey();
        }

        static async Task Run(DocumentClient client)
        {
            var database = new Database() { Id = "testdb" };
            database = await client.CreateDatabaseAsync(database);
            Console.WriteLine("database link = {0}", database.SelfLink);

            var collection = new DocumentCollection() { Id = "testcol" };
            collection = await client.CreateDocumentCollectionAsync(database.SelfLink, collection);
            Console.WriteLine("collection link = {0}", collection.SelfLink);
        }

    Below is the result from the console window. We need to copy the collection link string for future use. Now if we go back to the portal we will find a database listed with the name we specified in the code.
    Next we will insert a document into the database and collection we have just created. In the code below we pasted the collection link copied in the previous step and created a dynamic object with several properties defined. As you can see, we can add normal properties containing strings and integers, and we can also add complex properties, for example an array, a dictionary or an object reference, as long as they can be serialized to JSON.

        static void Main(string[] args)
        {
            var endpoint = new Uri("https://shx.documents.azure.com:443/");
            var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA==";

            var client = new DocumentClient(endpoint, key);

            // collection link pasted from the result in the previous demo
            var collectionLink = "dbs/AAk3AA==/colls/AAk3AP6oFgA=/";

            // document we are going to insert into the database
            dynamic doc = new ExpandoObject();
            doc.firstName = "Shaun";
            doc.lastName = "Xu";
            doc.roles = new string[] { "developer", "trainer", "presenter", "father" };

            // insert the document
            InsertADoc(client, collectionLink, doc).Wait();

            Console.WriteLine("done");
            Console.ReadKey();
        }

    The insert code is very simple, as below; just provide the collection link and the object we are going to insert.

        static async Task InsertADoc(DocumentClient client, string collectionLink, dynamic doc)
        {
            var document = await client.CreateDocumentAsync(collectionLink, doc);
            Console.WriteLine(await JsonConvert.SerializeObjectAsync(document, Formatting.Indented));
        }

    Below is the result after the object has been inserted.
    Finally we will query the document from the database and collection. Similar to the insert code, we just need to specify the collection link so that the .NET SDK will help us retrieve all documents in it.

        static void Main(string[] args)
        {
            var endpoint = new Uri("https://shx.documents.azure.com:443/");
            var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA==";

            var client = new DocumentClient(endpoint, key);

            var collectionLink = "dbs/AAk3AA==/colls/AAk3AP6oFgA=/";

            SelectDocs(client, collectionLink);

            Console.WriteLine("done");
            Console.ReadKey();
        }

        static void SelectDocs(DocumentClient client, string collectionLink)
        {
            var docs = client.CreateDocumentQuery(collectionLink + "docs/").ToList();
            foreach (var doc in docs)
            {
                Console.WriteLine(doc);
            }
        }

    Since there's only one document in my collection, below is the result when I executed the code. As you can see, all properties, including the array, were retrieved at the same time. DocumentDB also attached some properties we didn't specify, such as "_rid", "_ts", "_self", etc., which are controlled by the service.

    DocumentDB Benefit
    DocumentDB is a document NoSQL database service. Unlike a traditional relational database, a document database is truly schema-free: in a nutshell, you can save anything in the same database and collection as long as it can be serialized to JSON. When you query the document database, all sub-documents are retrieved at the same time. This means you don't need to join other tables as you would with a traditional database. A document database is very useful when we build high-performance systems with hierarchical data structures. For example, assume we need to build a blog system: there will be many blog posts, and each of them contains the content and comments. A comment can be commented on as well. If we were using a traditional database, say SQL Server, the schema might be defined with a Posts table and a Comments table. When we need to display a post we need to load the post content from the Posts table as well as the comments from the Comments table, and we also need to build the comment tree based on the CommentID field. But if we were using DocumentDB, all we need to do is save the post as a single document with a list containing all comments; under a comment, all sub-comments are a list inside it. When we display this post we just query the post document, and the content and all comments are loaded in the proper structure.

        {
            "id": "xxxxx-xxxxx-xxxxx-xxxxx",
            "title": "xxxxx",
            "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
            "postedOn": "08/25/2014 13:55",
            "comments":
            [
                {
                    "id": "xxxxx-xxxxx-xxxxx-xxxxx",
                    "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
                    "commentedOn": "08/25/2014 14:00",
                    "commentedBy": "xxx"
                },
                {
                    "id": "xxxxx-xxxxx-xxxxx-xxxxx",
                    "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
                    "commentedOn": "08/25/2014 14:10",
                    "commentedBy": "xxx",
                    "comments":
                    [
                        {
                            "id": "xxxxx-xxxxx-xxxxx-xxxxx",
                            "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
                            "commentedOn": "08/25/2014 14:18",
                            "commentedBy": "xxx",
                            "comments":
                            [
                                {
                                    "id": "xxxxx-xxxxx-xxxxx-xxxxx",
                                    "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
                                    "commentedOn": "08/25/2014 18:22",
                                    "commentedBy": "xxx"
                                }
                            ]
                        },
                        {
                            "id": "xxxxx-xxxxx-xxxxx-xxxxx",
                            "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
                            "commentedOn": "08/25/2014 15:02",
                            "commentedBy": "xxx"
                        }
                    ]
                },
                {
                    "id": "xxxxx-xxxxx-xxxxx-xxxxx",
                    "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
                    "commentedOn": "08/25/2014 14:30",
                    "commentedBy": "xxx"
                }
            ]
        }

    DocumentDB vs. Table Storage
    DocumentDB and Table Storage are both NoSQL services in Microsoft Azure. One common question is when we should use DocumentDB rather than Table Storage. Here are some ideas from me and some MVPs.
    First of all, they are different kinds of NoSQL database: DocumentDB is a document database, while Table Storage is a key-value store.
    Second, Table Storage is cheaper. DocumentDB supports scaling out from one capacity unit to 5 in the preview period, and each capacity unit provides 10GB of local SSD storage. The price is $0.73/day including a 50% discount. For the storage service the highest price is $0.061/GB, which is almost 10% of DocumentDB's.
    Third, Table Storage provides local replication, geo-replication and read-access geo-replication, while DocumentDB doesn't support these yet.
    Fourth, there is a local emulator for Table Storage but none for DocumentDB; we have to connect to the DocumentDB in the cloud when developing locally.
    But DocumentDB supports some cool features that Table Storage doesn't have. It supports stored procedures, triggers and user-defined functions. It supports rich indexing, while Table Storage only indexes the partition key and row key. It supports transactions; Table Storage does as well, but restricted to the Entity Group Transaction scope. And last, Table Storage is GA while DocumentDB is still in preview.

    Summary
    In this post I gave a quick demonstration and introduction of the new DocumentDB service in Azure. It's very easy to interact with through .NET, and it also supports a REST API, a Node.js SDK and a Python SDK. I then explained the concept and benefit of using a document database, and compared it with Table Storage.

    Hope this helps,
    Shaun
    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

  • mpirun -np N: what if N is larger than my number of cores?

    - by Daniel
    Say I have a 4-core workstation. What would Linux (Ubuntu) do if I execute mpirun -np 9 XXX?
    Q1. Will all 9 run together immediately, or will they run 4 after 4?
    Q2. I suppose that using 9 is not good because of the leftover 1. I don't know whether it will confuse the computer at all, whether the "head" of the computer will decide which of the 4 cores gets used, or whether one is picked randomly. Who decides which core to call?
    Q3. If I feel my CPU is not bad, my RAM is large enough, and my case is not very big, is it a good idea, in order to fully use my CPU and RAM, to run mpirun -np 8 XXX, or even mpirun -np 12 XXX?
    Q4. Who decides all of this efficiency optimization: Ubuntu, Linux, the motherboard, or the CPU?
    Your enlightenment would be really appreciated.
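
    A short hedged sketch of an answer: all 9 ranks start at once, and the kernel scheduler time-slices them across the 4 cores; which rank lands on which core is the OS's decision unless you pin processes explicitly. Assuming the MPI in question is Open MPI, newer releases also refuse to start more ranks than slots unless told otherwise:

        # '--oversubscribe' is an Open MPI flag; other MPI implementations differ
        mpirun -np 9 --oversubscribe ./XXX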

  • Camera Collision inside the room model

    - by sanddy
    I am having a problem calculating the camera collision for my room model, which consists of sofas, tables and other models. The user will be moving the camera forward and back and rotating it, so I need to make sure that the camera does not collide with any of the models within the room. I have wrapped all the models inside the room in BoundingBox[] and the camera in a BoundingSphere. So far I have implemented collision by following the tutorial at http://www.toymaker.info/Games/XNA/html/xna_model_collisions.html, which was great. But I guess the problem lies in the transformation part. I debugged and found some points to be at Vector(-XXX,-XXX,-XXX) where X is a digit. I also found the radius of some models to be too large (in the thousands; I just looked at the radius value before converting to a BoundingBox). Do I need to scale the model for collision? Below is my code.
    In my LoadContent():

        Matrix[] transforms = new Matrix[myModel.Bones.Count];
        myModel.CopyAbsoluteBoneTransformsTo(transforms);
        int index = 0;
        box = new List<BoundingBox>();
        BoundingBox worldModel = Utility.CalculateBoundingBox(myModel);
        foreach (ModelMesh mesh in myModel.Meshes)
        {
            Vector3[] obb = new Vector3[8];
            worldModel.GetCorners(obb);
            Vector3[] asdf = (Vector3[])obb.Clone();
            Vector3.Transform(obb, ref transforms[mesh.ParentBone.Index], obb);
            BoundingBox worldBox = BoundingBox.CreateFromPoints(obb);
            box.Add(worldBox);
            index++;
        }

    In my camera position update:

        BoundingSphere bs = new BoundingSphere(this.cameraPos, 5.0f);
        if (RoomWalkthrough.Utility.CheckCollision(bs, bb))
        {
            // Do Something
        }

    Please help.

  • Optimising website IP for location

    - by Liam Sorsby
    From my understanding of SEO, websites are optimised for the current location of their IP address. For example, if xxx.xxx.xxx.xx resolves to the UK, then you are more likely to get higher rankings in the UK than in the USA. However, my query concerns CDNs: with a CDN you are storing a cached version of your website across multiple servers at strategic locations across the globe, to reduce load time in the locations you are trying to target. Now if you use a CDN and geo-locate the website URL, it only resolves back to the USA (where our IP address resolves to); it doesn't resolve to any other countries. As far as I know you can have multiple IP addresses resolving to one domain (from different countries). Do CDNs really help to optimise the location of your website, or are they solely meant to optimise load time? Is there a better way to optimise for multiple countries with regard to the resolution of the IP address? Are VPNs, as per this post here, relevant to this? Any advice would be helpful.

  • "Can't open display" even after access with xhost

    - by Yann
    I'm trying to run a graphical program remotely, without using ssh. I've set the display variable on the server (let's say server.com; Linux, not Ubuntu, and no su rights) to point to my workstation (workstation.com, Ubuntu 10.04): setenv DISPLAY workstation.com:0. Then on my workstation I've tried both xhost +server.com and xhost +. Then I ssh into the server (to test things) with ssh [email protected], try to run xclock, and get the following error: Error: Can't open display: workstation.com:0. I've looked at /etc/ssh/ssh_config on the workstation and I should be forwarding correctly: X11Forwarding yes. How do I go about troubleshooting this? What logs on the workstation document these failed attempts? To explain why I'm doing this: I want to run a batch job on a server to debug an MPI-based parallel program, and I want to run xterm as the batch job executable, per the instructions provided by the system admins. This setup used to work. I reinstalled things on my workstation, and since then I frequently get a one-time message along the lines of: The authenticity of host 'hostname (XXX.XXX.XXX.XX)' can't be established. My attempt to fix this was to move my ~/.ssh/known_hosts file to a backup on both server and host, and then to ssh from each to the other with the flag -o StrictHostKeyChecking=no. I no longer get that message, but I was wondering: does this play a part in why X11 forwarding is not working?
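
    One hedged check worth doing first: a bare DISPLAY=workstation.com:0 only works if the workstation's X server accepts TCP connections, and Ubuntu starts X with -nolisten tcp by default, which would produce exactly this error regardless of xhost settings:

        xhost +server.com
        netstat -ltn | grep 6000    # nothing listening => X isn't reachable over TCP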

  • How can I remove unmounted SD Card icons from my desktop?

    - by user75286
    I have been using some audio utilities in Ubuntu 11.10 to tweak .mp3 files on my phone (Motorola Photon 4G). I connect via USB; both my phone and the internal SD card are mounted as two separate drives. The SD card has an unusual drive name with some odd characters. When I'm finished, I unmount my phone (or "safely remove drive"), but the SD card can't be unmounted. I've mounted and unmounted my phone on 4 occasions now, and there are now 4 SD card drive icons that I can't remove from the desktop. I tried using the gconf-editor/apps/nautilus/desktop trick to make drives invisible, and it's not working. Right-clicking on the icons and selecting "unmount" produces the following error message (I can't type the unusual drive name characters, so I've replaced them with xxx):
    Unable to unmount xxx
    umount: /media/xxx is not mounted (according to mtab)
    How can I remove the unwanted icons from the desktop, and is there a method for avoiding this problem in the future? Thanks!
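
    The "not mounted (according to mtab)" message points at stale mount points left behind under /media. A hedged cleanup sketch (the directory name is a placeholder; rmdir refuses to remove anything non-empty, so this is safe to try):

        ls /media
        sudo rmdir "/media/xxx"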

  • ActiveDirectoryMembershipProvider and ADAM (or AD LDS) and SetPassword

    - by Iulian
    From the subject line this seems to be a rather broad subject, and I need some help here. Basically what I want is to use ActiveDirectoryMembershipProvider with an ADAM instance to authenticate users in an ASP.NET web application. My development environment is a Windows 7 machine with an AD LDS instance on it, whilst the QA server is a Windows 2003 server with an ADAM instance on it. I have all the required users on both instances, plus one with the administrator role (CN=Admin,CN=xxx,DC=xxx,C=xx) which I want to use as the connection user. Using connectionProtection="None" connectionUsername="CN=Admin,CN=xxx,DC=xxx,C=xx" connectionPassword="xxx" I am able to authenticate in both environments (dev & QA). If I change connectionProtection to "Secure" I am no longer able to authenticate; the error I get is "Parser Error Message: Unable to establish secure connection with the server". To me it sounds wrong to use connectionProtection="None", although I have found a lot of samples on the net using this setting. Can I use connectionProtection="Secure" to connect to an ADAM instance using an account defined on that instance that has the Administrator role? What other choices do I have (like using a domain account)? What if the machine where I am to deploy the application is not part of the domain - will this affect the behavior in any way? I am a novice in this respect, so I would really appreciate some clear answers or some directions on where to look. Now, besides the "signing in" feature of the ActiveDirectoryMembershipProvider, I also want to add an extra one: setting the password without knowing the old one (something that will be used by a "reset password" feature). So I added a couple of extension methods to the provider and used System.DirectoryServices classes like DirectoryEntry. When creating a directory entry I use the same credentials provided in web.config for the provider, minus the AuthenticationType, as I don't know which combination of flags corresponds to None/Secure. I am able to use Invoke "SetPassword" with the ADS_OPTION_PASSWORD_METHOD option set to ADS_PASSWORD_ENCODE_CLEAR on my dev machine (with AD LDS); nevertheless, in the QA environment (with ADAM) I get an error like "Exception Details: System.DirectoryServices.DirectoryServicesCOMException: An operations error occurred. (Exception from HRESULT: 0x80072020)". I am quite sure it is not about AD LDS vs. ADAM, but probably another configuration/permission issue. So can anyone help me with some hints on how to use this SetPassword feature? And as a general question, what are the best practices when it comes to using ADAM regarding security, programming, etc.? Thanks in advance, Iulian

  • PHP page navigation by serial number

    - by ilnur777
    Can anyone help me switch this PHP page-navigation script over to counting with normal serial numbers? In this script there is a variable called "page_id" - I want this variable to store the real page link in order, like 0, 1, 2, 3, 4, 5...

        $records = 34;     // total records
        $pagerecord = 10;  // count of records to display per page
        if($records<=$pagerecord) return;
        $imax = (int)($records/$pagerecord);
        if ($records%$pagerecord>0)$imax=$imax+1;
        if($activepage == ''){
            $for_start=$imax;
            $activepage = $imax-1;
        }
        $next = $activepage - 1;
        if ($next<0){$next=0;}
        $prev = $activepage + 1;
        if ($prev>=$imax){$prev=$imax-1;}
        $end = 0;
        $start = $imax;
        if($activepage >= 0){
            $for_start = $activepage + $rad + 1;
            if($for_start<$rad*2+1)$for_start = $rad*2+1;
            if($for_start>=$imax){
                $for_start=$imax;
            }
        }
        if($activepage < $imax-1){
            $str .= ' <a href="?domain='.$domain_name.'&page='.($start-1).'&page_id=xxx"><<< End</a> <a href="?domain='.$domain_name.'&page='.$prev.'&page_id=xxx">< Forward</a> ';
        }
        $meter = $rad*2+1;
        for($i=$for_start-1; $i>-1; $i--){
            $meter--;
            $line = '';
            if ($i>0)$line = "";
            if($i<>$activepage){
                $str .= "<a href='?domain=".$domain_name."&page=".$i."&page_id=xxx'>".($i)."</a> ".$line." ";
            } else {
                $str .= " <b class='current_page'>".($i)."</b> ".$line." ";
            }
            if($meter=='0'){
                break;
            }
        }
        if($activepage > 0){
            $str .= " <a href='?domain=".$domain_name."&page=".$next."&page_id=xxx'>Back ></a> <a href='?domain=".$domain_name."&page=".($end)."&page_id=xxx'>Start >>></a> ";
        }
        return $str;

    Really need help with this stuff! Thanks in advance!

  • openDatabase Hello World

    - by cf_PhillipSenn
    I'm trying to learn about openDatabase, and I think I'm getting it to INSERT INTO Table1, but I can't verify that SELECT * FROM Table1 is working.

        <html>
        <head>
        <script src="http://www.google.com/jsapi"></script>
        <script type="text/javascript">
            google.load("jquery", "1");
        </script>
        <script type="text/javascript">
        var db;
        $(function(){
            // Note: most WebSQL implementations expect four arguments here
            // (name, version, display name, estimated size)
            db = openDatabase('HelloWorld', '1.0', 'HelloWorld', 2 * 1024 * 1024);
            db.transaction(
                function(transaction) {
                    transaction.executeSql(
                        'CREATE TABLE IF NOT EXISTS Table1 ' +
                        ' (TableID INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT, ' +
                        ' Field1 TEXT NOT NULL );'
                    );
                }
            );
            db.transaction(
                function(transaction) {
                    // executeSql expects the arguments array as its second
                    // parameter; passing the callback in its place is a likely
                    // reason the SELECT results never show up
                    transaction.executeSql(
                        'SELECT * FROM Table1;', [],
                        function (transaction, result) {
                            for (var i = 0; i < result.rows.length; i++) {
                                alert('1');
                                $('body').append(result.rows.item(i));
                            }
                        },
                        errorHandler
                    );
                }
            );
            $('form').submit(function() {
                var xxx = $('#xxx').val();
                db.transaction(
                    function(transaction) {
                        transaction.executeSql(
                            'INSERT INTO Table1 (Field1) VALUES (?);',
                            [xxx],
                            function(){ alert('Saved!'); },
                            errorHandler
                        );
                    }
                );
                return false;
            });
        });
        function errorHandler(transaction, error) {
            alert('Oops. Error was '+error.message+' (Code '+error.code+')');
            transaction.executeSql('INSERT INTO errors (code, message) VALUES (?, ?);',
                [error.code, error.message]);
            return false;
        }
        </script>
        </head>
        <body>
        <form method="post">
            <input name="xxx" id="xxx" />
            <p><input type="submit" name="OK" /></p>
            <a href="http://www.google.com">Cancel</a>
        </form>
        </body>
        </html>

  • How does the Trash Can work, and where can I find official documentation, reference, or specification for it?

    - by MestreLion
    When trying to manage the trash can from mounted NTFS volumes, I ended up reading FreeDesktop.org's reference on it. Poking around and doing some tests, I realized Ubuntu/Gnome does not follow the specs 100%. Here's why:
    1. For non-/ partitions, it always uses <driveroot>/.Trash-<uid>. It never used <driveroot>/.Trash/<uid>, even when I created it in advance. While this works, it's annoying: if I have 15 users, I end up with 15 /.Trash-xxx folders in my drive, while the other approach would still give a single folder (with 15 sub-folders). That "pollution" in my drives is very unpleasant. And the spec says "If an $topdir/.Trash directory is absent, an $topdir/.Trash-$uid directory is to be used". Well, it IS present, so why does it never use it?
    2. Root trash does not work, at least not out of the box. Open Nautilus as root and click on trash; it gives an error. Try to delete any file; it says "it can't move to trash". OK, I know this can be fixed by creating /root/.local/share. But the spec says "A “home trash” directory SHOULD be automatically created for any new user. If this directory is needed for a trashing operation but does not exist, the implementation SHOULD automatically create it, without any warnings or delays.". Why the error then? Bug?
    3. Why must I change /etc/fstab entries for mounted volumes, adding options like uid and gid, if the volumes are already mounted as RW for everyone?
    These are just some examples of deviation from the standard. So, the question is: "If Ubuntu does not adhere 100% to the spec, HOW exactly does the trash work? WHERE can I find a technical reference for Ubuntu's implementation of the trash?" By the way: if Ubuntu does happen to follow the spec, please tell me what I am doing wrong, especially regarding the /.Trash-<uid> vs /.Trash/<uid> issue. Thanks!
    EDIT: Some more info:
    1. If a given fs has no support for the sticky bit (VFAT, NTFS), it probably has none for permissions either (at least VFAT surely doesn't). So what prevents one user from purging/restoring other users' ./Trash-xxx? If one can read/write his own trash, one can do the same for the whole drive, including others' trashes, correct? Or does Gnome have some kind of "extra" protection on ./Trash-xxx folders on VFAT/NTFS filesystems?
    2. If Linux can "emulate" file permissions on NTFS mounts via the /etc/fstab uid and gid options, can it also "emulate" the sticky bit? I would really prefer to use the /.Trash/xxx format...
    3. For the root issue: for the / partition, I can use the trash as root, and it goes to /root/.local/share/Trash. But if I click on Nautilus's "Trash" (as root), I get an error. Don't you? So files are correctly trashed, but I can't access them. All I can do is manually "purge" them (by deleting files in /root/.local/share/Trash), but restoring would be very tricky (opening info files and manually moving, etc.). For non-/ partitions (or at least for VFAT/NTFS), I can not even use the trash as root: it does not create a ./Trash-0 folder, it simply says "Cannot trash, want to permanently delete?" Why?
    4. About fstab: I use it for permanent mounts of my NTFS partitions. I have several, and if not "pre-mounted" they really clutter the desktop and/or Nautilus. I'd rather have them pre-mounted, integrated in my fs, in mounts like /data, /windows/xp, /windows/vista, and so on, and leave /media and its "mount/unmount" flexibility just for truly removable drives.
    So, if Ubuntu/Gnome truly follows the spec, is there any way to fix the root issues and to "emulate" the sticky bit for (at least) my fstab'ed NTFS fixed partitions?
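
    As a side note on the fstab point, a hedged sketch of the kind of entry under discussion (the UUID and ids are placeholders): ntfs-3g has no POSIX permission or sticky-bit support, so the uid/gid/umask mount options assign ownership and mode for the whole volume at mount time rather than per file.

        # /etc/fstab
        UUID=XXXX  /data  ntfs-3g  defaults,uid=1000,gid=1000,umask=022  0  0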

  • Move on and look elsewhere, or confront the boss?

    - by Meister
    Background: I have my Associates in Applied Science (Comp/Info Tech) with a strong focus on programming, and I'm taking university classes to get my Bachelors. I was recently hired at a local company to be a Software Engineer I on a team of about 8, and I've been told they're looking to hire more. This is my first job, and I was offered what I feel to be an extremely generous starting salary ($30/hr essentially + benefits and yearly bonus). What got me hired was my passion for programming and a strong set of personal projects.
    Problem: I had no prior experience when I interviewed, so I didn't know exactly what to ask them about the company when I was hired. I've spotted a number of warning signs and annoyances since then, such as:
    - Four developers when I started, with everyone talking about "Ben" or "Ryan" leaving. One engineer was hired thirty days before me, one two weeks after me, and the department has been hiring a large number of people since I started.
    - Extremely limited internet access. I understand the idea from an IT point of view, but not only is Facebook blocked, so are YouTube, Twitter, and Pandora. I've also figured out that they block all access to non-DNS websites (http://xxx.xxx.xxx.xxx/) and, strangely enough, Miranda-IM.
    - Low cubicles. Which is fine because I like my immediate coworkers, but they put the developers in a huge open room with customer service, customer training, and the QA department. Noise, noise, noise, and people stop to chitchat all day long. Headphones only go so far.
    - Several emails sent out by my boss since I started telling us programmers not to talk about non-work-related things like video games at our cubicles, despite us only spending maybe five minutes every few hours doing so. Further digging tells me that this is because someone keeps complaining that the programmers are "slacking off".
    - People looking over my shoulder all day. I was in the Freenode webchat to get help with a programming issue, and within minutes I had an email from my boss (to all the developers) telling us that we should NOT be connected to any outside chat servers at work.
    - A version control system from 2005 that we must access with IE, keeping the Java 1.4 JRE installed to be able to use it. I accidentally updated to Java 6 one day and spent the next two days fighting with my PC to undo this "problem".
    - No source control, no comments on anything, no standards, no code review, no unit testing, no common sense. I literally found a problem in how they handle string resource translations that stems from the simple fact that they don't trim excess white space, leading to developers writing getResource("Date: ") instead of getResource("Date") + ": ", and I was told to just add the excess white space back to the database instead of dealing with the issue directly.
    Some of these things I'd like to try to understand, but I like having IRC open to talk in a few different rooms during the day and keep in touch with friends/family over IM. These don't break my concentration (not NEARLY as much as the lady from QA stopping by to talk about her son), but because people are looking over my shoulder all day as they walk by, they complain when they see something that's not "programmer-looking work". I've been told by my boss and QA that I do good, fast work. I should be judged on my work output and quality, not what I have up on my screen for the five seconds someone is walking by.
    So, my question is, even though I'm just barely at my 90 days: how do you decide to move on from a job and look elsewhere, and when should you start working with your boss to resolve these issues? Is it even possible to get the boss to work with me on many of these things? This is the only place I heard back from, even though I sent out several resumes a day for several months, and this place does pay well for putting up with its many flaws, but I'm just starting to get so miserable working here already. Should I just put up with it?

  • AWS Load balancer connection reset

    - by joshmmo
    I have an ELB set up with two instances. The issue I have with it is that when I do not add www., the ELB just hangs. This is some info I get when I spider with wget:

        Spider mode enabled. Check if remote file exists.
        --2013-06-20 13:40:54-- http://learning.example.com/
        Resolving learning.example.com... 54.xxx.x.x53, 50.xx.xxx.x71
        Connecting to learning.example.com|54.xxx.x.x53|:80... connected.
        HTTP request sent, awaiting response... No data received. Retrying.

    When I add www. it works great. I have a GoDaddy SSL cert that I added to the listener section; it covers 3 domains: www.learning.example.com, files.learning.example.com and learning.example.com. These are my listener settings:

        HTTP 80   HTTPS 443   N/A      N/A
        SSL  443  SSL   443   Change   canvasNew (Change)

    My EC2 instances are running apache2 on Ubuntu 12.04. I will be happy to post my vhosts file if needed. However, when I ran the server with the domains pointing to just one EC2 instance, things worked fine. How can I fix this issue for learning.example.com? Why does www work just fine? A second question would be: what is the difference between instance protocol and load balancer protocol?
    EDIT: Here are the dig results for learning.example.com from yesterday. I changed the DNS entry to point to one instance to make sure it was the ELB. When I switch it back I will do the same for www.learning.example.com.

        ; <<>> DiG 9.9.1-P2 <<>> learning.example.com
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 20210
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;learning.example.com. IN A

        ;; ANSWER SECTION:
        learning.example.com. 2559 IN CNAME canvas-22222222222.us-west-1.elb.amazonaws.com.
        canvas-22222222222.us-west-1.elb.amazonaws.com. 60 IN A 54.xxx.x.x53
        canvas-22222222222.us-west-1.elb.amazonaws.com. 60 IN A 50.xx.xxx.x71

        ;; Query time: 83 msec
        ;; SERVER: 10.x.xx.20#53(10.x.xx.20)
        ;; WHEN: Thu Jun 20 13:40:47 2013
        ;; MSG SIZE rcvd: 137

    EDIT 2: Here is some more info that might be helpful.

        Port Configuration:
        80 (HTTP) forwarding to 443 (HTTPS)
            Backend Authentication: Disabled
            Stickiness: Disabled (edit)
        443 (SSL, Certificate: canvasNew) forwarding to 443 (SSL)
            Backend Authentication: Disabled

    So I switched everything to one EC2 IP address to bypass the ELB and make sure things are working. It's running great: both www and the non-www URL work perfectly fine. It's only when I switch things to the ELB that learning.example.com hangs while www.learning.example.com works. Hopefully you can get some ideas flowing.
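
    A hedged first check, since the dig output already shows the bare name ending at the ELB: compare the two resolutions side by side while pointed at the load balancer; if they return the same addresses, the difference has to be in the listener or vhost handling rather than DNS:

        dig +short www.learning.example.com
        dig +short learning.example.com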

  • Sending email through a proxy using Gmail SMTP

    - by baron
    Hello everyone. I'm trying to send some email from my C# app. I am behind a proxy, which is no doubt why the code isn't working. This is what I have so far.
    App.config:

        <system.net>
          <defaultProxy enabled="false">
            <proxy proxyaddress="xxx.xxx.xxx.xxx"/>
          </defaultProxy>
          <mailSettings>
            <smtp deliveryMethod="Network">
              <network host="smtp.gmail.com" port="587"/>
            </smtp>
          </mailSettings>
        </system.net>

    Code:

        var username = "...";
        var password = "...";
        var fromEmail = "...";
        var toEmail = "...";
        var body = "Test email body";
        var subject = "Test Subject Email";
        var client = new SmtpClient("smtp.gmail.com", 587)
        {
            Credentials = new NetworkCredential(username, password),
            EnableSsl = true
        };
        try
        {
            client.Send(fromEmail, toEmail, subject, body);
        }
        catch (Exception e)
        {
            MessageBox.Show(e.Message);
        }

    Every time I get: System.Net.WebException: The remote name could not be resolved: 'smtp.gmail.com'. Where/how do I start to debug?
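
    Two hedged observations: the <defaultProxy> element only affects HTTP-based classes, while SmtpClient opens a raw TCP connection that will not route through an HTTP proxy; and the exception itself is a DNS failure, so the first thing to test is name resolution from the same machine:

        nslookup smtp.gmail.com
        telnet smtp.gmail.com 587    # if the name resolves, check the TCP path next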

  • ActionMailer and Exchange

    - by Jason Nerer
    Hello community. I successfully send mail via SMTP using my Rails app and my Postfix server. Now I need to move to an Exchange server (Microsoft ESMTP MAIL Service, Version: 6.0.3790.3959) that has POP3 and SMTP support enabled. I use ActionMailer 1.2.5 and am not able to log in to the server successfully when trying to send a mail. If I use Mail.app, sending and receiving work fine as long as I change the authentication scheme to "Password". Checking the server looks like this:

        READ Nov 18 10:37:00.509 [kCFStreamSocketSecurityLevelNone] -- host:mail.my-mail-server-domain.com -- port:25 -- socket:0x11895cf20 -- thread:0x11b036a10
        250-mail.my-mail-server-domain.com Hello [xxx.xxx.xxx.xxx]
        250-TURN
        250-SIZE
        250-ETRN
        250-PIPELINING
        250-DSN
        250-ENHANCEDSTATUSCODES
        250-8bitmime
        250-BINARYMIME
        250-CHUNKING
        250-VRFY
        250-X-EXPS GSSAPI NTLM LOGIN
        250-X-EXPS=LOGIN
        250-AUTH GSSAPI NTLM LOGIN
        250-AUTH=LOGIN
        250-X-LINK2STATE
        250-XEXCH50
        250 OK
        WROTE Nov 18 10:37:00.852 [kCFStreamSocketSecurityLevelNone] -- host:mail.my-mail-server-domain.com -- port:25 -- socket:0x11895cf20 -- thread:0x11b036a10
        AUTH LOGIN
        READ Nov 18 10:37:01.848 [kCFStreamSocketSecurityLevelNone] -- host:mail.my-mail-server-domain.com -- port:25 -- socket:0x11895cf20 -- thread:0x11b036a10
        235 2.7.0 Authentication successful.

    So the :login authentication method seems to be properly supported. Now, my configuration for ActionMailer looks like this:

        ActionMailer::Base.server_settings = {
          :address => "mail.my-mail-server-domain.com",
          :port => 25,
          :domain => "my-mail-server-domain.com",
          :authentication => :login,
          :user_name => "myusername",
          :password => "mypassword"
        }

    And I get authentication errors over and over. I also tried changing the user name:

        :user_name => "my-mail-server-domain.com\myusername"
        :user_name => "my-mail-server-domain.com\\myusername"
        :user_name => "myusername/my-mail-server-domain.com"
        :user_name => "[email protected]"

    but nothing works. Can anyone help me? Regards, Jason

  • iPhone Core Data Lightweight Migration error: reason = "Can't find model for source store";

    - by tul697
    Steps taken:
    1. Added a data model version: changed my XXX.xcdatamodel into a versioned XXX.xcdatamodeld with Design - Data Model - Add Model Version.
    2. Set the new XXX 2.xcdatamodel as the current version.
    3. Added an attribute to XXX 2.xcdatamodel.
    4. Added NSMigratePersistentStoresAutomaticallyOption and NSInferMappingModelAutomaticallyOption; like most tutorials, I added the options in addPersistentStoreWithType.
    I ran the code and got this error:

        Unresolved error Error Domain=NSCocoaErrorDomain Code=134130 UserInfo=0x146bb80 "Operation could not be completed. (Cocoa error 134130.)", {
            URL = file://localhost/Users/tleung/Library/Application%20Support/iPhone%20Simulator/3.0/Applications/B585CDFC-17C3-4A44-84E2-0B75893C46B8/Documents/favorites.sqlite;
            metadata = {
                NSPersistenceFrameworkVersion = 241;
                NSStoreModelVersionHashes = {
                    City = <70ea1f9f aaa9af29 52d2bfe4 3071d97f 8224f765 d69928d5 e5844120 52742a35;
                    StationStore = <40d8093a 1d7d00ec 178b4374 36dfc137 ccfa3a88 87e2d467 69e8ae7e d4c49dbb;
                };
                NSStoreModelVersionHashesVersion = 3;
                NSStoreModelVersionIdentifiers = ( );
                NSStoreType = SQLite;
                NSStoreUUID = "9DD342A6-1F68-4997-A097-096DC96D7BF3";
            };
            reason = "Can't find model for source store";
        }

    I've also tried:

        NSString *path = [[NSBundle mainBundle] pathForResource:@"YOURDB" ofType:@"momd"];
        NSURL *momURL = [NSURL fileURLWithPath:path];
        managedObjectModel = [[NSManagedObjectModel alloc] initWithContentsOfURL:momURL];

    as suggested by other posts, with no success. It seems that it can't find ANY of my models... does anyone have any idea?

  • MKMapView loading all annotation views at once (including those that are outside the current rect)

    - by jmans
    Has anyone else run into this problem? Here's the code:

        - (MKAnnotationView *)mapView:(MKMapView *)mapView viewForAnnotation:(WWMapAnnotation *)annotation {
            // Only return an annotation view for the placemarks. Ignore the current
            // location -- the iPhone SDK will place a blue ball there.
            NSLog(@"Request for annotation view");
            if ([annotation isKindOfClass:[WWMapAnnotation class]]) {
                MKPinAnnotationView *browse_map_annot_view = (MKPinAnnotationView *)[mapView dequeueReusableAnnotationViewWithIdentifier:@"BrowseMapAnnot"];
                if (!browse_map_annot_view) {
                    browse_map_annot_view = [[[MKPinAnnotationView alloc] initWithAnnotation:annotation reuseIdentifier:@"BrowseMapAnnot"] autorelease];
                    NSLog(@"Creating new annotation view");
                } else {
                    NSLog(@"Recycling annotation view");
                    browse_map_annot_view.annotation = annotation;
                }
                ...

    As soon as the view is displayed, I get:

        2009-08-05 13:12:03.332 xxx[24308:20b] Request for annotation view
        2009-08-05 13:12:03.333 xxx[24308:20b] Creating new annotation view
        2009-08-05 13:12:03.333 xxx[24308:20b] Request for annotation view
        2009-08-05 13:12:03.333 xxx[24308:20b] Creating new annotation view

    and on and on, for every annotation (~60) I've added. The map (correctly) only displays the two annotations in the current rect. I am setting the region in viewDidLoad:

        if (center_point.latitude == 0) {
            center_point.latitude = 35.785098;
            center_point.longitude = -78.669899;
        }
        if (map_span.latitudeDelta == 0) {
            map_span.latitudeDelta = .001;
            map_span.longitudeDelta = .001;
        }
        map_region.center = center_point;
        map_region.span = map_span;
        NSLog(@"Setting initial map center and region");
        [browse_map_view setRegion:map_region animated:NO];

    The log entry for the region being set is printed to the console before any annotation views are requested. The problem here is that since all of the annotations are requested at once, [mapView dequeueReusableAnnotationViewWithIdentifier:] does nothing, since there is a unique MKAnnotationView for every annotation on the map. This is leading to memory problems for me. One possible issue is that these annotations are clustered in a pretty small space (~1 mile radius). Although the map is zoomed in pretty tight in viewDidLoad (latitude and longitude deltas of .001), it still loads all of the annotation views at once. Thanks...
