Search Results

Search found 18545 results on 742 pages for 'put'.


  • How does Google Analytics aggregate the Count of Visits (Frequency & Recency Report)?

    - by Brian Dant
    Here's my simple understanding of Count of Visits: each person that comes to my site gets one "count" for each visit. They are put into a bucket of people with the same number of total counts -- if you visit twice, you are in the two bucket; if you visit six times, you are in the six bucket. From there, a report (Frequency & Recency) makes a line for each bucket, reaches into the bucket and totals the number of people in it, putting that total in the second column. My question: will a two-month report automatically put someone into two buckets, and put them on two separate lines in the Count of Visits table? This explanation makes it seem like a two-month report will put the same person into a bucket twice, one bucket for each month. The two-month report would then show that person's visits on two different lines, instead of aggregating them. Example for clarification: Bob comes to my site three times in January and seven times in February. I run a report for Jan 1 -- Feb 28. Will Bob be on both the Three Count line and the Seven Count line, or will he be on the Ten Count line?

    Read the article

  • Nested entities in Google App Engine. Am I doing it right?

    - by Aleksandr Makov
    Trying to make the most of the GAE Datastore entities concept, but some doubts are nagging at me. Say I have the model:

        class User(ndb.Model):
            email = ndb.StringProperty(indexed=True)
            password = ndb.StringProperty(indexed=False)
            first_name = ndb.StringProperty(indexed=False)
            last_name = ndb.StringProperty(indexed=False)
            created_at = ndb.DateTimeProperty(auto_now_add=True)

            @classmethod
            def key(cls, email):
                return ndb.Key(User, email)

            @classmethod
            def Add(cls, email, password, first_name, last_name):
                user = User(parent=cls.key(email), email=email, password=password,
                            first_name=first_name, last_name=last_name)
                user.put()
                UserLogin.Record(email)

        class UserLogin(ndb.Model):
            time = ndb.DateTimeProperty(auto_now_add=True)

            @classmethod
            def Record(cls, user_email):
                login = UserLogin(parent=User.key(user_email))
                login.put()

    And I need to keep track of the times of successful login operations. Each time a user logs in, the UserLogin.Record() method will be executed. Now the question: am I doing this right? Thanks.

    EDIT 2: OK, used the typed arguments, but then it raised this: Expected Key instance, got User(key=Key('User', 5418393301680128), created_at=datetime.datetime(2013, 6, 27, 10, 12, 25, 479928), email=u'[email protected]', first_name=u'First', last_name=u'Last', password=u'password'). The message is clear enough, but I don't get why the docs are misleading. They implicitly propose to use:

        # Set Employee as Address entity's parent directly...
        address = Address(parent=employee)

    But Model expects a key. And what's worse, parent=user.key() complains that key() isn't callable, while I found out that user.key works.

    EDIT 1: After reading the example from the docs and trying to replicate it, I got a type error: TypeError('Model constructor takes no positional arguments.'). This is the exact code used:

        user = User('[email protected]', 'password', 'First', 'Last')
        user.put()
        stamp = UserLogin(parent=user)
        stamp.put()

    I understand that Model was given the wrong arguments, BUT why is it in the docs?
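    For reference, a minimal sketch of calls that avoid both errors above, assuming the two models from the question (the values are illustrative). Note that the classmethod named key in the model shadows ndb's built-in key property, so relying on the Key returned by put() sidesteps that ambiguity as well:

        from google.appengine.ext import ndb

        # ndb model constructors take keyword arguments only, which is why the
        # positional call in EDIT 1 raises TypeError.
        user = User(email='[email protected]', password='password',
                    first_name='First', last_name='Last')
        user_key = user.put()  # put() returns the entity's ndb.Key

        # parent= must be a Key, not a model instance -- hence the error in EDIT 2.
        stamp = UserLogin(parent=user_key)
        stamp.put()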

    Read the article

  • REST-based file server

    - by Chris Wenham
    I need to be able to PUT files and GET them later using nothing but HTTP, so I went searching for something that might match the terms "REST file server", "HTTP file server", "REST drop-box", etc. Unfortunately, these terms bring up the wrong kind of results on Google. What I want is the equivalent of an SMB fileshare over HTTP. Some ideal features:

    - Can PUT a file of any type at http://servername/service/any/path/I/want/document.pdf
    - Anyone with access can GET that file at the URL I PUT it at
    - Supports AV scanning on any new file that has been PUT
    - Supports DELETE of existing resources (files)

    Our shop runs Windows, but I'd be interested to know about Unix software that can do this kind of thing, too. It's to be used in an IT department for private users only. It won't be on a public-facing IP address. Does anything like this exist?
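    To make the desired interaction concrete, a hedged sketch with curl (the server name and path are the hypothetical ones from the wish list above):

        # PUT a file at an arbitrary path
        curl -T document.pdf http://servername/service/any/path/I/want/document.pdf

        # anyone with access GETs it back from the same URL
        curl -O http://servername/service/any/path/I/want/document.pdf

        # and DELETE it again
        curl -X DELETE http://servername/service/any/path/I/want/document.pdf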

    Read the article

  • Apache HTTP Server+Tomcat: Which file generates mod_jk.conf, how to modify generated stuff, and how does httpd reach it?

    - by Sk8erPeter
    I'm using XAMPP with Apache HTTP Server and the Tomcat add-on installed. There's a default mod_jk.conf which is generated by Tomcat when starting it. But which file generates this mod_jk.conf file? How can I modify the default values? By default, it looks like this: pastebin - mod_jk.conf. How does Apache HTTP Server reach this file? I can't see any reference to it when looking into httpd.conf. When I put a VirtualHost in my httpd.conf file with the line JkMount /* ajp13 in it, the Apache HTTP Server service can't start (it causes a 7024 event id error in Event Viewer, with error code 1 but nothing specific, and puts no error messages into error.log). The VirtualHost looks like this: pastebin - VirtualHost + JkMount. This way Apache HTTP Server cannot start. If I comment out the line JkMount /* ajp13, it starts without a problem. BUT if I put the following lines, which are the same as in mod_jk.conf, before the mentioned VirtualHost again, the service can start:

        <IfModule !mod_jk.c>
            LoadModule jk_module "C:/xampp/tomcat/xampp/apache/modules/mod_jk.so"
        </IfModule>

    Why do I have to put these lines in again? Why does it happen that http://localhost/example does work, so this query is redirected to AJP13, but I have to put the LoadModule line in again in another file? EDIT: I don't have a clue why (I surely modified something), but now /example doesn't work either, and the config above gives a 500 Internal Server Error... :S Thanks!
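    For what it's worth, the usual arrangement is to load mod_jk once at the global level of httpd.conf, before any VirtualHost, so that every VirtualHost can use JkMount without re-loading the module. A hedged sketch using the XAMPP paths from the question (the workers file and log file locations are assumptions):

        # httpd.conf, global level, before any VirtualHost
        LoadModule jk_module "C:/xampp/tomcat/xampp/apache/modules/mod_jk.so"
        JkWorkersFile "C:/xampp/tomcat/conf/workers.properties"   # assumed location
        JkLogFile "C:/xampp/apache/logs/mod_jk.log"               # assumed location

        <VirtualHost *:80>
            ServerName localhost
            JkMount /example/* ajp13
        </VirtualHost>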

    Read the article

  • glTexImage2D not loading my data

    - by Clyde
    Can anyone suggest why this code doesn't work? When I draw using this texture, all I get is black. If I use GLUtils.texImage2D() to load a png file, it works correctly.

        ByteBuffer bb = ByteBuffer.allocateDirect(128 * 128 * 4).order(ByteOrder.nativeOrder());
        bb.position(0);
        for (int row = 0; row != 128; row++) {
            for (int i = 0; i != 128; i++) {
                bb.put((byte) 0x80);
                bb.put((byte) 0xFF);
                bb.put((byte) 0xFF);
                bb.put((byte) i);
            }
        }

        int[] handle = new int[1];
        GLES20.glEnable(GLES20.GL_TEXTURE_2D);
        GLES20.glGenTextures(1, handle, 0);
        DrawAdapter.checkGlError("Gen textures");
        GLES20.glBindTexture(GLES20.GL_TEXTURE_2D, handle[0]);
        DrawAdapter.checkGlError("Bind textures");

        bb.position(0);
        GLES20.glTexImage2D(GLES20.GL_TEXTURE_2D, 0, GLES20.GL_RGBA, 128, 128, 0,
                GLES20.GL_RGBA, GLES20.GL_UNSIGNED_BYTE, bb);
        DrawAdapter.checkGlError("glTexImage2D");
        return handle[0];
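    Not a definitive answer, but a very common cause of an all-black texture in ES 2.0 is texture incompleteness: the default minification filter expects a full mipmap chain, and only level 0 is uploaded here. A sketch of the usual fix, placed right after the glBindTexture call:

        // Only mip level 0 is uploaded, so switch to non-mipmap filtering;
        // otherwise the texture is incomplete and samples as black.
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MIN_FILTER, GLES20.GL_LINEAR);
        GLES20.glTexParameteri(GLES20.GL_TEXTURE_2D,
                GLES20.GL_TEXTURE_MAG_FILTER, GLES20.GL_LINEAR);

    As an aside, glEnable(GL_TEXTURE_2D) is not needed in ES 2.0 (texturing is controlled by the shader) and itself generates a GL error there.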

    Read the article

  • How to connect Ethernet via a Huawei SmartAX MT880s modem (with auto-detection)?

    - by user12285
    It's been well over six months, but I have the same router, a SmartAX MT880d with Ethernet, and the exact same problem: no internet, even though I can successfully reach the modem settings page by entering 192.168.1.1 in Firefox. I'm a total beginner with Ubuntu. My internet works great in Windows but does not work in Ubuntu. Sorry if I don't use the right (technical) terminology to explain my issue; English is not my mother tongue. For two weeks I've been reading the web, the forums and ubuntuguide.org, to name a few, but to no avail. Now I see no other solution but to ask for help. My problem is that I can't find a way to put the right digits in the right place, because I don't know what numbers need to go in what files. E.g.: do I need to use DHCP, or a static IP address? No clue whatsoever. I'm concerned that I might put figures in the wrong spaces. For example, is the modem/router's IP exactly 192.168.1.1 for the Huawei SmartAX MT880d modem? Is the subnet 255.255.255.0? Gateway 192.168.1.1? I'm confused, as I can also see a different IP starting with 155131*** (is it an account number?) on my contract with Huawei (a Chinese ISP). Apart from calling 911, what other numbers do I need to put in, and where? How do I check that all the numbers have been entered correctly in every appropriate space before trying to connect to the Internet?
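    As a hedged starting point: on Ubuntu of that era the wired interface can be configured in /etc/network/interfaces. If the router runs a DHCP server (most home routers do), no numbers are needed at all; the commented static variant shows where each number from the question would go (the 192.168.1.10 address is an assumption, any free address on the router's subnet works):

        # /etc/network/interfaces -- DHCP variant: the router hands out everything
        auto eth0
        iface eth0 inet dhcp

        # static variant: this is where the numbers would go
        # iface eth0 inet static
        #     address 192.168.1.10      # assumed free address on the LAN
        #     netmask 255.255.255.0
        #     gateway 192.168.1.1       # the modem settings page address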

    Read the article

  • Reason why Windows can't be installed on Ubuntu, any help though?

    - by Terzuz
    Well, I got my Windows Vista .iso file, put it on my USB drive and extracted it, then restarted and went to "Boot Device Options" and clicked "Generic USB" (what I labeled it when I formatted). It stayed on a black screen for 15 minutes, then just went back to Ubuntu. Now I've found out exactly what happened. The reason Windows WON'T install from inside Ubuntu is simple: it doesn't matter if you save it in NTFS or MSDOS format (both Windows formats), or if you use anything special; this even applies to the Ubuntu downloader. One specific reason I just noticed: since WINE (the Windows application loader) can't run when you just click the USB, you can't open the Setup.exe file needed to install the operating system. That means you can't install it at all from a .exe installer. So what can we do? The only other way is to make a partition with GParted, then install WINE, then run your Windows Setup.exe file, and WINE will load it up! Why not open Setup.exe before you make a partition? Simple: it WON'T WORK. I have tried; it says you need __ amount of space and it can be put on any partition. Yeah, any partition NOT BEING USED. Here is where I need your help: I want to make a partition, but they all show as locked when I open GParted. So how am I supposed to open them up? Get a hammer and tell HP my laptop didn't want to open its partitions? Nope. I need some help on how to unlock the partitions, or on what to do. Thank you in advance. PS: if I could, I would put a bounty on this, but I do not have enough reputation!

    Read the article

  • Validate WebLogic security realm user through Java

    - by user1877246
    I have installed WebLogic 10.3.4.0 and have created a user in the default security realm 'myrealm'. The authenticator is DefaultAuthenticator. Now I have to authenticate the user from a standalone Java application, but the application responds with 'LDAP: error code 49 - Invalid Credentials'. The code:

        Properties l_props = new Properties();
        LdapContext l_ctx = null;
        l_props.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        l_props.put(Context.PROVIDER_URL, "ldap://localhost:7001");
        l_props.put(Context.SECURITY_AUTHENTICATION, "simple");
        l_props.put(Context.SECURITY_PRINCIPAL, "cn=username");
        l_props.put(Context.SECURITY_CREDENTIALS, "password");
        l_ctx = new InitialLdapContext(l_props, null);

    The error:

        javax.naming.AuthenticationException: [LDAP: error code 49 - Invalid Credentials]
            at com.sun.jndi.ldap.LdapCtx.mapErrorCode(LdapCtx.java:3041)
            at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2987)
            at com.sun.jndi.ldap.LdapCtx.processReturnCode(LdapCtx.java:2789)
            at com.sun.jndi.ldap.LdapCtx.connect(LdapCtx.java:2703)
            at com.sun.jndi.ldap.LdapCtx.<init>(LdapCtx.java:293)
            at com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(LdapCtxFactory.java:175)
            at com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs(LdapCtxFactory.java:193)
            at com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(LdapCtxFactory.java:136)
            at com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(LdapCtxFactory.java:66)
            at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:667)
            at javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:288)
            at javax.naming.InitialContext.init(InitialContext.java:223)
            at javax.naming.ldap.InitialLdapContext.<init>(InitialLdapContext.java:134)
            at com.iflex.fcat.misc.TestLDAP.createInitialLdapContext(TestLDAP.java:258)
            at com.iflex.fcat.misc.TestLDAP.authenticate(TestLDAP.java:170)
            at com.iflex.fcat.misc.TestLDAP.main(TestLDAP.java:125)
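    One hedged guess at the cause: WebLogic's embedded LDAP wants the user's full DN rather than a bare cn=username. For users in the embedded LDAP the DN typically looks like uid=<user>,ou=people,ou=myrealm,dc=<domain-name>, where the trailing dc component is the WebLogic domain name (that layout is an assumption; verify the actual DN with an LDAP browser against port 7001):

        // DN layout assumed from WebLogic's embedded LDAP conventions;
        // replace base_domain with the real WebLogic domain name.
        l_props.put(Context.SECURITY_PRINCIPAL,
                "uid=username,ou=people,ou=myrealm,dc=base_domain");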

    Read the article

  • Firefox: Specifying location of bookmarks

    - by nullstein
    When adding a new bookmark with Ctrl+D in Firefox, it used to ask me where to put it in the bookmark directory structure. Recently, however, it doesn't even bother to ask and puts the bookmark in the Bookmarks Menu by default. Then I always need to open the bookmark menu and drag and drop it into the right subfolder, which is quite tedious. Is there any way to make Firefox ask me where to put the bookmark every time I add one?

    Read the article

  • Problems with this code?

    - by J4C3N-14
    I'm trying to use this code, which is an example taken from https://gist.github.com/2383248, but it comes up with an error on public void onClick: "Multiple markers at this line - implements android.view.View.OnClickListener.onClick - Syntax error, insert "}" to complete MethodBody". When I add the brace it just throws another error, after many tries and fails with different suggestions and ideas. It may be a syntax error and bad coding from me (I've just started learning to program), but if anyone has any ideas how to resolve this, or can point me in the right direction, I would be very grateful.

        public class ICSCalendarActivity extends Activity implements View.OnClickListener {

            Button button1;
            int year1;
            int month1;
            int day1;
            int ShiftPattern;

            @Override
            public void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);
                button1 = (Button) findViewById(R.id.openButton);
                button1.setText("open");
                button1.setOnClickListener(this);
                Bundle extras = getIntent().getExtras();
                year1 = extras.getInt("year1");
                day1 = extras.getInt("day1");
                month1 = extras.getInt("month1");
                ShiftPattern = extras.getInt("ShiftPattern");
            }

            public void onClick(View v) {
                private static void addToCalendar(Context ICSCalendarActivity, final String title, final long dtstart, final long dtend) {
                    final ContentResolver cr = ICSCalendarActivity.getContentResolver();
                    Cursor cursor;
                    if (Integer.parseInt(Build.VERSION.SDK) >= 8)
                        cursor = cr.query(Uri.parse("content://com.android.calendar/calendars"),
                                new String[]{ "_id", "displayname" }, null, null, null);
                    else
                        cursor = cr.query(Uri.parse("content://calendar/calendars"),
                                new String[]{ "_id", "displayname" }, null, null, null);
                    if (cursor.moveToFirst()) {
                        final String[] calNames = new String[cursor.getCount()];
                        final int[] calIds = new int[cursor.getCount()];
                        for (int i = 0; i < calNames.length; i++) {
                            calIds[i] = cursor.getInt(0);
                            calNames[i] = cursor.getString(1);
                            cursor.moveToNext();
                        }
                        AlertDialog.Builder builder = new AlertDialog.Builder(ICSCalendarActivity);
                        builder.setSingleChoiceItems(calNames, -1, new DialogInterface.OnClickListener() {
                            @Override
                            public void onClick(DialogInterface dialog, int which) {
                                ContentValues cv = new ContentValues();
                                cv.put("calendar_id", calIds[which]);
                                cv.put("title", title);
                                cv.put("dtstart", dtstart);
                                cv.put("hasAlarm", 1);
                                cv.put("dtend", dtend);
                                Uri newEvent;
                                if (Integer.parseInt(Build.VERSION.SDK) >= 8)
                                    newEvent = cr.insert(Uri.parse("content://com.android.calendar/events"), cv);
                                else
                                    newEvent = cr.insert(Uri.parse("content://calendar/events"), cv);
                                if (newEvent != null) {
                                    long id = Long.parseLong(newEvent.getLastPathSegment());
                                    ContentValues values = new ContentValues();
                                    values.put("event_id", id);
                                    values.put("method", 1);
                                    values.put("minutes", 15); // 15 minutes
                                    if (Integer.parseInt(Build.VERSION.SDK) >= 8)
                                        cr.insert(Uri.parse("content://com.android.calendar/reminders"), values);
                                    else
                                        cr.insert(Uri.parse("content://calendar/reminders"), values);
                                }
                                dialog.cancel();
                            }
                        });
                        builder.create().show();
                    }
                    cursor.close();
                }
            }

    Thank you.
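    For what it's worth, the first compiler error is structural: addToCalendar is declared inside onClick, and Java does not allow one method to be declared inside another (nor the private static modifiers there). A hedged sketch of the reshuffle, where the title and start/end times passed to the helper are placeholders:

        @Override
        public void onClick(View v) {
            // Build real start/end times from year1/month1/day1; these
            // placeholder values are only here to make the call compile.
            long start = System.currentTimeMillis();
            long end = start + 60 * 60 * 1000; // one hour later
            addToCalendar(this, "My event", start, end);
        }

        // Moved out of onClick and declared as an ordinary member of the
        // Activity class, with the body exactly as in the question.
        private static void addToCalendar(Context context, final String title,
                final long dtstart, final long dtend) {
            // ... body unchanged ...
        }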

    Read the article

  • Saving image from Gallery to db - Cursor IllegalStateException

    - by MyWay
    I want to save some strings together with an image to the db. The image can be taken from the gallery, or the user can keep the sample one. Another activity has a ListView which should present the rows with image and name. I've been facing this problem for a long time: it occurs when I want to display the ListView with an image from the gallery; if the sample image is saved in the row, everything works fine. My problem is similar to this one: how to save image taken from camera and show it to listview - crashes with "IllegalStateException", but I can't find a solution for me there. My table in the db looks like this:

        public static final String KEY_ID = "_id";
        public static final String ID_DETAILS = "INTEGER PRIMARY KEY AUTOINCREMENT";
        public static final int ID_COLUMN = 0;

        public static final String KEY_NAME = "name";
        public static final String NAME_DETAILS = "TEXT NOT NULL";
        public static final int NAME_COLUMN = 1;

        public static final String KEY_DESCRIPTION = "description";
        public static final String DESCRIPTION_DETAILS = "TEXT";
        public static final int DESCRIPTION_COLUMN = 2;

        public static final String KEY_IMAGE = "image";
        public static final String IMAGE_DETAILS = "BLOB";
        public static final int IMAGE_COLUMN = 3;

        // the statement which creates our table
        private static final String CREATE_PRODUCTLIST_IN_DB =
                "CREATE TABLE " + DB_TABLE + "( "
                + KEY_ID + " " + ID_DETAILS + ", "
                + KEY_NAME + " " + NAME_DETAILS + ", "
                + KEY_DESCRIPTION + " " + DESCRIPTION_DETAILS + ", "
                + KEY_IMAGE + " " + IMAGE_DETAILS + ");";

    The insert statement:

        public long insertToProductList(String name, String description, byte[] image) {
            ContentValues value = new ContentValues();
            // map each column key to its value
            value.put(KEY_NAME, name);
            value.put(KEY_DESCRIPTION, description);
            value.put(KEY_IMAGE, image);
            // put into db
            return db.insert(DB_TABLE, null, value);
        }

    The button which adds the picture, and the onActivityResult method which saves the image and puts it into the ImageView:

        public void AddPicture(View v) {
            // create an intent that picks an image from the gallery
            Intent intent = new Intent(Intent.ACTION_PICK);
            intent.setType("image/*");
            startActivityForResult(intent, PICK_IMAGE);
        }

        @Override
        protected void onActivityResult(int requestCode, int resultCode, Intent data) {
            super.onActivityResult(requestCode, resultCode, data);
            // check if the identification code matches our intent
            if (requestCode == PICK_IMAGE) {
                // check if the data came with the intent
                if (data != null) {
                    Uri chosenImage = data.getData();
                    String[] filePathColumn = { MediaStore.Images.Media.DATA };
                    Cursor cursor = getContentResolver().query(chosenImage, filePathColumn, null, null, null);
                    cursor.moveToFirst();
                    int columnIndex = cursor.getColumnIndex(filePathColumn[0]);
                    String filePat = cursor.getString(columnIndex);
                    cursor.close();
                    ImageOfProduct = BitmapFactory.decodeFile(filePat);
                    if (ImageOfProduct != null) {
                        picture.setImageBitmap(ImageOfProduct);
                    }
                    messageDisplayer("got picture, isn't null " + IdOfPicture);
                }
            }
        }

    The code which converts a bitmap to byte[]:

        public byte[] bitmapToByteConvert(Bitmap bit) {
            // stream that receives the compressed bitmap data
            ByteArrayOutputStream gettedData = new ByteArrayOutputStream();
            bit.compress(CompressFormat.PNG, 0, gettedData);
            return gettedData.toByteArray();
        }

    The method which puts the data into the row:

        byte[] image = null;
        // if no name was typed into the editView
        if (name.getText().toString().trim().length() == 0) {
            messageDisplayer("At least you need to type the name of the product if you want to add it to the DB");
        } else {
            String desc = description.getText().toString();
            if (description.getText().toString().trim().length() == 0) {
                messageDisplayer("the description is set as none");
                desc = "none";
            }
            DB.open();
            if (ImageOfProduct != null) {
                image = bitmapToByteConvert(ImageOfProduct);
                messageDisplayer("image isn't null");
            } else {
                BitmapDrawable drawable = (BitmapDrawable) picture.getDrawable();
                image = bitmapToByteConvert(drawable.getBitmap());
            }
            if (image != null && image.length > 0) {
                messageDisplayer(Integer.toString(image.length));
            }
            DB.insertToProductList(name.getText().toString(), desc, image);
            DB.close();
            messageDisplayer("well done, you added the product");
            finish();
        }

    You can see that I check the length of the array to be sure I don't send an empty one. And here, in my opinion, is the place where the error appears. This code is from the activity which presents the ListView with data taken from the db:

        private void LoadOurLayoutListWithInfo() {
            // first we need to open the connection with the db
            db = new sqliteDB(getApplicationContext());
            db.open();

            // creating our custom adapter; its behaviour is defined in our
            // own class (MyArrayAdapter) below
            ArrayAdapter<ProductFromTable> customAdapter = new MyArrayAdapter();

            // get the info from the whole table
            tablecursor = db.getAllColumns();
            if (tablecursor != null) {
                startManagingCursor(tablecursor);
                tablecursor.moveToFirst();
            }

            // now we move all info from the tablecursor into our list
            if (tablecursor != null && tablecursor.moveToFirst()) {
                do {
                    // taking the info from a row in the table
                    int id = tablecursor.getInt(sqliteDB.ID_COLUMN);
                    String name = tablecursor.getString(sqliteDB.NAME_COLUMN);
                    String description = tablecursor.getString(sqliteDB.DESCRIPTION_COLUMN);
                    byte[] image = tablecursor.getBlob(3);
                    tablefromDB.add(new ProductFromTable(id, name, description, image));
                    // moving on until we reach the last row
                } while (tablecursor.moveToNext());
            }

            listView = (ListView) findViewById(R.id.tagwriter_listoftags);
            // setAdapter: the ListAdapter responsible for maintaining the data
            // backing this list and producing views for items in the data set
            listView.setAdapter(customAdapter);
        }

    I put the info from the rows into objects which are stored in a list. I've read tons of questions but can't find any solution for me. Everything works when I use the sample image (which is stored in the app's res folder). Thanks for any advice.
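    A common cause of the IllegalStateException here, as a hedged guess: a full-resolution gallery photo compressed to PNG easily exceeds what a SQLite CursorWindow can hold (roughly 1-2 MB depending on the Android version), so the cursor fails when the row is read back, while the small sample image fits. One way around it is to scale the bitmap down before converting it to bytes (method name follows the question's code; the 256-pixel bound is illustrative). Storing the image file path instead of the blob avoids the problem entirely.

        // Scale the gallery bitmap down before storing it, so the row stays
        // well under the CursorWindow limit.
        public byte[] bitmapToByteConvert(Bitmap bit) {
            int maxDim = 256; // illustrative bound
            float scale = Math.min((float) maxDim / bit.getWidth(),
                                   (float) maxDim / bit.getHeight());
            if (scale < 1f) {
                bit = Bitmap.createScaledBitmap(bit,
                        Math.round(bit.getWidth() * scale),
                        Math.round(bit.getHeight() * scale), true);
            }
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            bit.compress(Bitmap.CompressFormat.PNG, 0, out);
            return out.toByteArray();
        }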

    Read the article

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspx

    Many people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with an easy-to-use API through the .NET SDK and HTTP REST. For example, we can store JavaScript files, images and documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure, or we can store our VHD files in blob storage and mount them as hard drives in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blobs and block blobs. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunked read and write, which is the more common usage. Since we can upload a block blob in blocks through BlockBlob.PutBlock and then commit them as a whole blob by invoking BlockBlob.PutBlockList, it is very powerful for uploading large files, as we can upload blocks in parallel and provide a pause-resume feature. There are many documents, articles and blog posts describing how to upload a block blob. Most of them focus on the server side: when you have received a big file, stream or binaries, how to upload it into blob storage in blocks through the .NET SDK. But the problem is, how can we upload these large files from the client side, for example, a browser? This question came to me when I was working with a Chinese customer to help them build a network disk product on top of Azure. The end users upload their files from the web portal, and the files are then stored in blob storage from the Web Role. My goal was to find the best way to transfer the file from the client (the end user's machine) to the server (the Web Role) through the browser. In this post I will demonstrate and describe what I did to upload large files in chunks with high speed, and save them as blocks into Windows Azure Blob Storage.

    Traditional Upload, Works with Limitation

    The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button.

        @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" }))
        {
            <input type="file" name="file" />
            <input type="submit" value="upload" />
        }

    Then, in the backend controller, we retrieve the whole content of the file and upload it into blob storage through the .NET SDK. We can split the file into blocks, upload them in parallel and commit. The code has been well blogged in the community.
        [HttpPost]
        public ActionResult About(HttpPostedFileBase file)
        {
            var container = _client.GetContainerReference("test");
            container.CreateIfNotExists();
            var blob = container.GetBlockBlobReference(file.FileName);
            var blockDataList = new Dictionary<string, byte[]>();
            using (var stream = file.InputStream)
            {
                var blockSizeInKB = 1024;
                var offset = 0;
                var index = 0;
                while (offset < stream.Length)
                {
                    var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset);
                    var blockData = new byte[readLength];
                    offset += stream.Read(blockData, 0, readLength);
                    blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData);

                    index++;
                }
            }

            Parallel.ForEach(blockDataList, (bi) =>
            {
                blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null);
            });
            blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray());

            return RedirectToAction("About");
        }

    This works perfectly if we select an image, a piece of music or a small video to upload. But if I select a large file, say a 6GB HD movie, then after uploading for a few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limitation on request length; the maximum request length is defined in the web.config file, and it is a number less than about 4GB. So if we want to upload a really big file, we cannot simply implement it this way. Also, in Windows Azure, the cloud service network load balancer will terminate the connection if it exceeds the timeout period; from my tests the timeout looks like 2-3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitations mentioned above, simple HTML file upload cannot provide a rich upload experience such as chunked upload, pause and pause-resume. So we need to find a better way to upload large files from the client to the server.

    Upload in Chunks through HTML5 and JavaScript

    In order to break the limitations mentioned above, we will try to upload the large file in chunks. This gives us some benefits:

    - No request size limitation: since we upload in chunks, we can define the request size for each chunk regardless of how big the entire file is.
    - No timeout problem: the size of the chunks is controlled by us, which means we can make sure the request for each chunk upload does not exceed the timeout period of either ASP.NET or the Windows Azure load balancer.

    It was a big challenge to upload a big file in chunks until we got HTML5. There are some new features and improvements introduced in HTML5, and we will use them to implement our solution. In HTML5, the File interface has been improved with a new method called "slice". It can be used to read part of the file by specifying the start byte index and the end byte index. For example, if the entire file is 1024 bytes, file.slice(512, 768) will read the part of this file from the 512th byte to the 768th byte and return a new object of an interface called "Blob", which you can treat as an array of bytes. In fact, a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about Blob please refer here. File and Blob are very useful for implementing the chunk upload.
    We will use the File interface to represent the file the user selected in the browser, and then use File.slice to read the file in chunks of the size we want. For example, if we wanted to upload a 10MB file with 512KB chunks, we could read it in 512KB blobs by using File.slice in a loop. Assume we have a web page as below: the user can select a file, an input box to specify the block size in KB, and a button to start the upload.

        <div>
            <input type="file" id="upload_files" name="files[]" /><br />
            Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br />
            <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" />
        </div>

    Then we can have the JavaScript function to upload the file in chunks when the user clicks the button.

        <script type="text/javascript">
            $(function () {
                $("#upload_button_blob").click(function () {
                });
            });
        </script>

    First we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke File, Blob and FormData from the "window" object. If any of them is "undefined", the condition result will be "false", which means your browser doesn't support these features and it's time for you to get your browser updated. FormData is another new feature we are going to use; it can generate a temporary form for us. We will use this interface to create a form with the chunk and associated metadata when invoking the service through ajax.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            if (window.File && window.Blob && window.FormData) {
                alert("Your brwoser is awesome, let's rock!");
            }
            else {
                alert("Oh man plz update to a modern browser before try is cool stuff out.");
                return;
            }
        });

    Each browser supports these interfaces through its own implementation, and currently Blob, File and File.slice are supported by Chrome 21, Firefox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that, we work on the files the user selected one by one, since in HTML5 the user can select multiple files in one file input box.

        var files = $("#upload_files")[0].files;
        for (var i = 0; i < files.length; i++) {
            var file = files[i];
            var fileSize = file.size;
            var fileName = file.name;
        }

    Next, we calculate the start index and end index for each chunk based on the size the user specified in the browser. We put them into an array together with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks, since we need to specify the target blob name and the block index. At the same time we store the list of all indexes in another variable, which will be used to commit the blocks into a blob in Azure Storage once all chunks have been uploaded successfully.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...
            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;

                // calculate the start and end byte index for each block (chunk),
                // with the index, file name and index list for future use
                var blockSizeInKB = $("#block_size").val();
                var blockSize = blockSizeInKB * 1024;
                var blocks = [];
                var offset = 0;
                var index = 0;
                var list = "";
                while (offset < fileSize) {
                    var start = offset;
                    var end = Math.min(offset + blockSize, fileSize);

                    blocks.push({
                        name: fileName,
                        index: index,
                        start: start,
                        end: end
                    });
                    list += index + ",";

                    offset = end;
                    index++;
                }
            }
        });

    Now we have all the chunks' information ready. The next step is to upload them one by one to the server side; on the server side, when a chunk is received, it will be uploaded as a block into Blob Storage and finally committed with the index list through BlockBlobClient.PutBlockList. But since all these invocations are ajax calls, i.e. not synchronized, we need to introduce a new JavaScript library to help us coordinate the asynchronous operations, named "async.js". You can download this JavaScript library here, and you can find the documentation here. I will not explain this library too much in this post. We will put all the procedures we want to execute into a function array and pass it to the proper function defined in async.js, letting it control the execution sequence, in series or in parallel. Hence we define an array and push the chunk-upload functions into it.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...

            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;

                // calculate the start and end byte index for each block (chunk),
                // with the index, file name and index list for future use
                ... ...

                // define the function array and push all chunk upload operations into it
                blocks.forEach(function (block) {
                    putBlocks.push(function (callback) {
                    });
                });
            }
        });

    As you can see, I used the File.slice method to read each chunk based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then I posted this form to the backend server through jQuery.ajax. This is the key part of our solution.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...
            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each block (chunk),
                // with the index, file name and index list for future use
                ... ...
                // define the function array and push all chunk upload operations into it
                blocks.forEach(function (block) {
                    putBlocks.push(function (callback) {
                        // load a blob based on the start and end index of each chunk
                        var blob = file.slice(block.start, block.end);
                        // put the file name, index and blob into a temporary form
                        var fd = new FormData();
                        fd.append("name", block.name);
                        fd.append("index", block.index);
                        fd.append("file", blob);
                        // post the form to the backend service (asp.net mvc controller action)
                        $.ajax({
                            url: "/Home/UploadInFormData",
                            data: fd,
                            processData: false,
                            contentType: "multipart/form-data",
                            type: "POST",
                            success: function (result) {
                                if (!result.success) {
                                    alert(result.error);
                                }
                                callback(null, block.index);
                            }
                        });
                    });
                });
            }
        });

    Then we invoke these functions one by one using async.js, and once all the functions have executed successfully, I invoke another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...
            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each block (chunk),
                // with the index, file name and index list for future use
                ... ...
                // define the function array and push all chunk upload operations into it
                ... ...
                // invoke the functions one by one,
                // then invoke the commit ajax call to put the blocks into a blob in azure storage
                async.series(putBlocks, function (error, result) {
                    var data = {
                        name: fileName,
                        list: list
                    };
                    $.post("/Home/Commit", data, function (result) {
                        if (!result.success) {
                            alert(result.error);
                        }
                        else {
                            alert("done!");
                        }
                    });
                });
            }
        });

    That's all on the client side. The outline of our logic is:

    - Calculate the start and end byte index of each chunk based on the block size.
    - Define the functions that read a chunk from the file and upload its content to the backend service through ajax.
    - Execute the functions defined in the previous step with "async.js".
    - Finally, commit the chunks by invoking the backend service, which puts them into Windows Azure Storage.

    Save Chunks as Blocks into Blob Storage

    Above, we finished the client side JavaScript code, which uploads the file in chunks to the backend service we are going to implement in this step. We will use ASP.NET MVC as our backend service: it will receive the chunks, upload them into Windows Azure Blob Storage as blocks, then finally commit them as one blob. Since the client side uploads chunks by invoking an ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller which only accepts HTTP POST requests.

        [HttpPost]
        public JsonResult UploadInFormData()
        {
            var error = string.Empty;
            try
            {
            }
            catch (Exception e)
            {
                error = e.ToString();
            }

            return new JsonResult()
            {
                Data = new
                {
                    success = string.IsNullOrWhiteSpace(error),
                    error = error
                }
            };
        }

    Then I retrieved the file name, index and chunk content from the Request.Form object, which was passed from our client side.
    And then I used the Windows Azure SDK to create a blob container (in this case we use the container named "test") and create a blob reference with the blob name (the same as the file name). Then I uploaded the chunk as a block of this blob with the index, since in Blob Storage each block must have an index (ID) associated with it, so that we can finally put all the blocks together as one blob by specifying their block ID list.

        [HttpPost]
        public JsonResult UploadInFormData()
        {
            var error = string.Empty;
            try
            {
                var name = Request.Form["name"];
                var index = int.Parse(Request.Form["index"]);
                var file = Request.Files[0];
                var id = Convert.ToBase64String(BitConverter.GetBytes(index));

                var container = _client.GetContainerReference("test");
                container.CreateIfNotExists();
                var blob = container.GetBlockBlobReference(name);
                blob.PutBlock(id, file.InputStream, null);
            }
            catch (Exception e)
            {
                error = e.ToString();
            }

            return new JsonResult()
            {
                Data = new
                {
                    success = string.IsNullOrWhiteSpace(error),
                    error = error
                }
            };
        }

    Next, I created another action to commit the blocks into the blob once all chunks have been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunk ID list, which is the block ID list, from the Request.Form in string format, split it into a list, then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container, ready to be downloaded.

        [HttpPost]
        public JsonResult Commit()
        {
            var error = string.Empty;
            try
            {
                var name = Request.Form["name"];
                var list = Request.Form["list"];
                var ids = list
                    .Split(',')
                    .Where(id => !string.IsNullOrWhiteSpace(id))
                    .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
                    .ToArray();

                var container = _client.GetContainerReference("test");
                container.CreateIfNotExists();
                var blob = container.GetBlockBlobReference(name);
                blob.PutBlockList(ids);
            }
            catch (Exception e)
            {
                error = e.ToString();
            }

            return new JsonResult()
            {
                Data = new
                {
                    success = string.IsNullOrWhiteSpace(error),
                    error = error
                }
            };
        }

    Now we have finished all the code we need; the whole process of uploading is shown below. Below is the full client side JavaScript code.
1: <script type="text/javascript" src="~/Scripts/async.js"></script> 2: <script type="text/javascript"> 3: $(function () { 4: $("#upload_button_blob").click(function () { 5: // assert the browser support html5 6: if (window.File && window.Blob && window.FormData) { 7: alert("Your brwoser is awesome, let's rock!"); 8: } 9: else { 10: alert("Oh man plz update to a modern browser before try is cool stuff out."); 11: return; 12: } 13:  14: // start to upload each files in chunks 15: var files = $("#upload_files")[0].files; 16: for (var i = 0; i < files.length; i++) { 17: var file = files[i]; 18: var fileSize = file.size; 19: var fileName = file.name; 20:  21: // calculate the start and end byte index for each blocks(chunks) 22: // with the index, file name and index list for future using 23: var blockSizeInKB = $("#block_size").val(); 24: var blockSize = blockSizeInKB * 1024; 25: var blocks = []; 26: var offset = 0; 27: var index = 0; 28: var list = ""; 29: while (offset < fileSize) { 30: var start = offset; 31: var end = Math.min(offset + blockSize, fileSize); 32:  33: blocks.push({ 34: name: fileName, 35: index: index, 36: start: start, 37: end: end 38: }); 39: list += index + ","; 40:  41: offset = end; 42: index++; 43: } 44:  45: // define the function array and push all chunk upload operation into this array 46: var putBlocks = []; 47: blocks.forEach(function (block) { 48: putBlocks.push(function (callback) { 49: // load blob based on the start and end index for each chunks 50: var blob = file.slice(block.start, block.end); 51: // put the file name, index and blob into a temporary from 52: var fd = new FormData(); 53: fd.append("name", block.name); 54: fd.append("index", block.index); 55: fd.append("file", blob); 56: // post the form to backend service (asp.net mvc controller action) 57: $.ajax({ 58: url: "/Home/UploadInFormData", 59: data: fd, 60: processData: false, 61: contentType: "multipart/form-data", 62: type: "POST", 63: success: function (result) { 64: if (!result.success) { 65: alert(result.error); 66: } 67: callback(null, block.index); 68: } 69: }); 70: }); 71: }); 72:  73: // invoke the functions one by one 74: // then invoke the commit ajax call to put blocks into blob in azure storage 75: async.series(putBlocks, function (error, result) { 76: var data = { 77: name: fileName, 78: list: list 79: }; 80: $.post("/Home/Commit", data, function (result) { 81: if (!result.success) { 82: alert(result.error); 83: } 84: else { 85: alert("done!"); 86: } 87: }); 88: }); 89: } 90: }); 91: }); 92: </script> And below is the full ASP.NET MVC controller code. 
        public class HomeController : Controller
        {
            private CloudStorageAccount _account;
            private CloudBlobClient _client;

            public HomeController()
                : base()
            {
                _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString"));
                _client = _account.CreateCloudBlobClient();
            }

            public ActionResult Index()
            {
                ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application.";

                return View();
            }

            [HttpPost]
            public JsonResult UploadInFormData()
            {
                var error = string.Empty;
                try
                {
                    var name = Request.Form["name"];
                    var index = int.Parse(Request.Form["index"]);
                    var file = Request.Files[0];
                    var id = Convert.ToBase64String(BitConverter.GetBytes(index));

                    var container = _client.GetContainerReference("test");
                    container.CreateIfNotExists();
                    var blob = container.GetBlockBlobReference(name);
                    blob.PutBlock(id, file.InputStream, null);
                }
                catch (Exception e)
                {
                    error = e.ToString();
                }

                return new JsonResult()
                {
                    Data = new
                    {
                        success = string.IsNullOrWhiteSpace(error),
                        error = error
                    }
                };
            }

            [HttpPost]
            public JsonResult Commit()
            {
                var error = string.Empty;
                try
                {
                    var name = Request.Form["name"];
                    var list = Request.Form["list"];
                    var ids = list
                        .Split(',')
                        .Where(id => !string.IsNullOrWhiteSpace(id))
                        .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
                        .ToArray();

                    var container = _client.GetContainerReference("test");
                    container.CreateIfNotExists();
                    var blob = container.GetBlockBlobReference(name);
                    blob.PutBlockList(ids);
                }
                catch (Exception e)
                {
                    error = e.ToString();
                }

                return new JsonResult()
                {
                    Data = new
                    {
                        success = string.IsNullOrWhiteSpace(error),
                        error = error
                    }
                };
            }
        }

    If we select a file in the browser, we will see our application upload chunks of the size we specified to the server through ajax calls in the background, and then commit all chunks into one blob. Then we can find the blob in our Windows Azure Blob Storage.

    Optimized by Parallel Upload

    In the previous example we just uploaded our file in chunks. This solved both the ASP.NET MVC request content size limitation and the Windows Azure load balancer timeout, but it might introduce a performance problem, since we uploaded the chunks in sequence. In order to improve upload performance we can modify our client side code a bit to make the upload operations run in parallel. The good news is that the "async.js" library provides a parallel execution function. If you remember, the code that invokes the service to upload chunks utilized "async.series", which means all functions are executed in sequence. Now we change this code to "async.parallel", which invokes all functions in parallel.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...
            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each block (chunk),
                // with the index, file name and index list for future use
                ... ...
                // define the function array and push all chunk upload operations into it
                ... ...
                // invoke the functions in parallel,
                // then invoke the commit ajax call to put the blocks into a blob in azure storage
                async.parallel(putBlocks, function (error, result) {
                    var data = {
                        name: fileName,
                        list: list
                    };
                    $.post("/Home/Commit", data, function (result) {
                        if (!result.success) {
                            alert(result.error);
                        }
                        else {
                            alert("done!");
                        }
                    });
                });
            }
        });

    In this way all chunks are uploaded to the server side at the same time, to maximize bandwidth usage. This should work if the file is not very large and the chunk size is not very small. But for a large file this might introduce another problem: too many ajax calls are sent to the server at the same time. So the best solution is to upload the chunks in parallel with a maximum concurrency limit. The code below sets the concurrency limit to 4, which means at most 4 ajax calls can be in flight at the same time.

        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            ... ...
            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;
                // calculate the start and end byte index for each block (chunk),
                // with the index, file name and index list for future use
                ... ...
                // define the function array and push all chunk upload operations into it
                ... ...
                // invoke the functions in parallel with a concurrency limit of 4,
                // then invoke the commit ajax call to put the blocks into a blob in azure storage
                async.parallelLimit(putBlocks, 4, function (error, result) {
                    var data = {
                        name: fileName,
                        list: list
                    };
                    $.post("/Home/Commit", data, function (result) {
                        if (!result.success) {
                            alert(result.error);
                        }
                        else {
                            alert("done!");
                        }
                    });
                });
            }
        });

    Summary

    In this post we discussed how to upload files in chunks to a backend service and then upload them into Windows Azure Blob Storage in blocks. We focused on the frontend side and leveraged three new features introduced in HTML5:

    - File.slice: read part of the file by specifying the start and end byte index.
    - Blob: a file-like interface which contains part of the file content.
    - FormData: a temporary form element through which we can pass the chunk along with some metadata to the backend service.

    Then we discussed the performance considerations of chunked uploading. Sequential upload cannot provide the maximum upload speed, but unlimited parallel upload might crash the browser and the server if there are too many chunks. So we finally came up with the solution of uploading chunks in parallel with a concurrency limit. We also demonstrated how to utilize the "async.js" JavaScript library to help us control the asynchronous calls and the parallel limit.

    Regarding the chunk size and the parallel limit value, there is no single "best" value. You need to test various combinations and find the best one for your particular scenario. It depends on the local bandwidth, the client machine cores, and the server side (Windows Azure Cloud Service virtual machine) cores, memory and bandwidth. Below is one of my performance test results. The client machine was Windows 8 with IE 10 and 4 cores. I was using the Microsoft corporate network. The web site was hosted in the Windows Azure China North data center (in Beijing) on one small web role (1.7GHz 1 core CPU, 1.75GB memory, 100Mbps bandwidth).
    The test cases were:

    - Chunk size: 512KB, 1MB, 2MB, 4MB.
    - Upload mode: sequential, parallel (unlimited), parallel with limit (4 threads, 8 threads).
    - Chunk format: base64 string, binary.
    - Target file: 100MB.
    - Each case was tested 3 times.

    Below is the test result chart. Some thoughts, but not guidance or best practice:

    - Parallel gets better performance than sequential.
    - There is no significant performance difference between parallel with 4 threads and with 8 threads.
    - Transferring binaries provides better performance than base64.
    - In all cases, a chunk size of 1MB - 2MB gets better performance.

    Hope this helps,
    Shaun

    All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • How can you learn to design nice looking websites?

    - by Richard
    I am a moderately capable web developer. I can put stuff where I want it to go, and add some jQuery where I need to. However, now that I'm starting to build my own website, I have no idea how to design it. If someone were to sit next to me, point at the screen and say "put this picture there, text there", I could do that quite easily. But a site designed by me, with my choice of colours and text, will look like a toddler invented it. Does anyone know any websites/books I can look at, or does anyone have any tips on the basics of non-toddler web design?

    Read the article

  • Nginx Browser Caching using HTTP Headers outside server/location block

    - by Danny O'Sullivan
    I am having difficulty setting the HTTP expires headers for Nginx outside of specific server (and then location) blocks. What I want is something like the following:

        location ~* \.(png|jpg|jpeg|gif|ico)$ {
            expires 1y;
        }

    but without having to repeat it in every single server block, because I am hosting a large number of sites. I can put it in every server block, but that's not very DRY. If I try to put it into the http block or outside all other blocks, I get "location directive is not allowed here". It seems I have to put it into a server block, and I have a different server block for every virtual host. Any help/clarification would be appreciated.
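    For what it's worth, one common way to keep this DRY, given that location blocks are only valid inside server blocks, is to move the shared locations into a snippet file and include it from each server block (the snippet file name below is illustrative):

        # /etc/nginx/snippets/static-cache.conf  (name is illustrative)
        location ~* \.(png|jpg|jpeg|gif|ico)$ {
            expires 1y;
        }

        # in each server block:
        server {
            server_name example.com;
            include snippets/static-cache.conf;
            # ... the rest of the site's config ...
        }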

    Read the article

  • Can't find Localhost files

    - by GMF
    Hope you can help. This is my first time trying Ubuntu/Linux. I am logged in as root. I have downloaded and installed LAMP and phpMyAdmin, and I get the test page under localhost saying that it works and is installed correctly. I have also put my files in /var/www; they are PHP files. But when I enter the address localhost/(page name.php), I get an error saying: Not Found. The requested URL /index.php was not found on this server. Apache/2.2.22 (Ubuntu) Server at localhost Port 80. Have I put the files in the wrong folder? If I look in /etc/apache2/sites-available/default, it tells me my DocumentRoot is /var/www. Would love some help on this please. Many thanks, GF
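    A couple of quick things worth checking from a terminal (paths from the question; the index.php name is just an example of whatever file you actually copied in):

        # confirm the file really is in Apache's document root
        ls -l /var/www/index.php

        # make sure Apache can read it
        sudo chmod 644 /var/www/index.php

        # if the file was copied into a subfolder, the URL must include it:
        #   /var/www/mysite/index.php  ->  http://localhost/mysite/index.php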

    Read the article

  • Designing for mobile (aka designing for everything)

    - by ihaynes
    Last Saturday I went to 'Developer Developer Developer 10' on the Microsoft campus at Reading (UK). This is one of a series of regular events put on by the developer community, for the developer community. The guys who organise these events put a huge amount of time and effort into them, and they are well worth getting to if you can. I enjoy these events because there's always something currently relevant, and I can also get an insight into things I don't normally come across or work with. Having said that, it's web related things that always grab my attention, and this year one of my favourite speakers, George Adamson, gave a session on 'Designing for mobile (aka designing for everything)'. This is a subject close to my heart, and I've tried to put the argument forward myself on http://www.ew-resource.co.uk/mobile/, but George does a far better job of it than I can. His slideshow from the session is available on http://www.slideshare.net/george.adamson, and although you won't get his unique presentation style in the static slideshow, it is well worth watching if you have the time.

    Read the article

  • How best do you represent a bi-directional sync in a REST api?

    - by Edward M Smith
    Assuming a system where there's a web application with a resource, and a reference to a remote application with another, similar resource, how do you represent a bi-directional sync action which synchronizes the 'local' resource with the 'remote' resource? Example: I have an API that represents a todo list: GET/POST/PUT/DELETE /todos/, etc. That API can reference remote TODO services: GET/POST/PUT/DELETE /todo_services/, etc. I can manipulate todos from the remote service through my API as a proxy via GET/POST/PUT/DELETE /todo_services/abc123/, etc. I want the ability to do a bi-directional sync between a local set of todos and the remote set. In an RPC sort of way, one could do POST /todo_services/abc123/sync/. But in the "verbs are bad" school of thought, is there a better way to represent this action?
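    One common, hedged way to keep this resource-oriented is to model each sync run as a resource of its own: you POST to a collection of synchronizations, and the created resource represents the (possibly long-running) job. The paths and payload below are illustrative:

        POST /todo_services/abc123/synchronizations
        -> 201 Created
           Location: /todo_services/abc123/synchronizations/42

        GET /todo_services/abc123/synchronizations/42
        -> 200 OK
           { "status": "running", "started_at": "..." }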

    Read the article

  • Help with creating symbolic link

    - by user1737794
    I'm a little bit confused about how symbolic links work; I hope someone can guide me in the right direction. I want to put a demo of our software online, which normally only runs locally on a Mac Mini, so I put all the files in /var/www of my Ubuntu 12.04 server installation. There are a lot of hardcoded links in the software which point to "/Applications/XAMPP/xamppfiles/htdocs/narrowcasting". Of course I could change all this code in my html/php files in /var/www, but that would be quite annoying, so I hope I can fix this by creating a symbolic link. For example, I have a directory called thumb in /var/www/thumb, and the php code is trying to put an image in /Applications/XAMPP/xamppfiles/htdocs/narrowcasting/thumb. Can anyone give me a tip on how to achieve this with a symbolic link? Thanks in advance.
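    A hedged sketch of what that symlink could look like, using the paths from the question: create the hardcoded parent path on the Ubuntu box, then point the final directory at /var/www, so a write to .../narrowcasting/thumb lands in /var/www/thumb.

        sudo mkdir -p /Applications/XAMPP/xamppfiles/htdocs
        sudo ln -s /var/www /Applications/XAMPP/xamppfiles/htdocs/narrowcasting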

    Read the article

  • What tools to use for efficient link building?

    - by Evgeny
    As most SEO experts keep saying, it is not just the content that you have, but also a hefty number of quality incoming links to your content, that is important -- these are the two ways to get to the top of the search results. The question is: where do I find the incoming links? One way I know of is Google Blog Search. It can be used to find blogs with information related to your content, and some of them allow you to leave comments. The comments usually consist of your name, e-mail and website. If you put your keyword in place of your name, the keyword turns into a link to your website. Unfortunately most blogs put rel=nofollow on such links, but some blogs don't. What other ways are there to find quality pages on which to put keyword links back to your website? A quality link usually means one that:

    - is located on a page with relevant content
    - does not have rel=nofollow on the <a> tag
    - has a relevant keyword as the anchor text, as in <a href="website">keyword</a>
    - sits on a page with high PageRank (3+) and TrustRank

    Read the article

  • How do I link external style sheet to multiple pages and folders?

    - by user18681
    I'm building a pretty large website that will have many pages and folders. I have one stylesheet. How do I add the stylesheet to ALL of these folders? I didn't have this problem before I started to put the pages in SEPARATE folders. Now that each page has its own folder, it no longer reads my stylesheet unless it's in the SAME folder. For example, let's say I have a folder of pets, another of cars, and another of planes. I have to put my stylesheet in EACH and every one of these folders so that I can see my site. How can I do it so that I do NOT have to put a stylesheet in each and every folder? In other words, how can a page in one folder use a stylesheet that lives in a different folder?
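    For reference, the usual answer (a sketch, assuming the stylesheet sits at the web root as /style.css; adjust the name and path to taste) is to reference it with a root-relative URL, which resolves identically from any folder depth:

        <link rel="stylesheet" type="text/css" href="/style.css">

    Because the href starts with a slash, pages in /pets/, /cars/ and /planes/ all load the same single copy of the stylesheet.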

    Read the article

  • Do you think code is self documenting?

    - by Desolate Planet
    This is a question that was put to me many years ago, as a graduate, in a job interview, and it has picked at my brain now and again ever since; I've never really found a good answer that satisfies me. The interviewer in question was looking for a black-and-white answer; there was no middle ground. I never got the chance to ask about the rationale behind the question, but I'm curious why it would be put to a developer and what you would learn from a yes or no answer. From my own point of view, I can read Java, Python, Delphi, etc., but if my manager comes up to me, asks how far along in a project I am, and I say "The code is 80% complete" (and before you start shooting me down, I've heard this uttered in a couple of offices by developers), how exactly is that self-documenting? Apologies if this question seems strange, but I'd rather ask and get some opinions on it to gain a better understanding of why it would be put to someone in an interview.

    Read the article

  • Quantifying a cost for a software project

    - by The Elite Gentleman
    Disclaimer: I didn't know exactly where to put this question. If you feel that it is not suitable for Programmers @ StackExchange, feel free to migrate it. Background: Broadening my last question, there is an open request for tender for a software system, and I have decided to take it on. I am a software developer & engineer by profession and, in this tender process, I have to put together the pricing for my bid. I have been provided documentation consisting of functional and non-functional requirements only. I have to put a project manager's cap on and think of all aspects, e.g. the cost of implementing the project, the resources needed, etc. My question is: is there a project framework I can follow that breaks the project cycle into steps with corresponding cost aspects, or how would I best go about calculating/approximating the cost of the project?
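    One classic starting point (a rough sketch only; the coefficients are the published basic COCOMO "organic" ones, while the size and monthly rate are assumptions invented for illustration) is to derive effort from estimated size and then price that effort:

        public class CocomoEstimate {
            public static void main(String[] args) {
                double kloc = 20.0;                                     // assumed size: 20,000 lines of code
                double effortPm = 2.4 * Math.pow(kloc, 1.05);           // basic COCOMO (organic): ~56 person-months
                double scheduleMonths = 2.5 * Math.pow(effortPm, 0.38); // nominal duration: ~11.5 months
                double monthlyRate = 8000.0;                            // assumed fully-loaded cost per person-month
                System.out.printf("Effort: %.1f PM, schedule: %.1f months, cost: %.0f%n",
                        effortPm, scheduleMonths, effortPm * monthlyRate);
            }
        }

    Whatever model you pick (COCOMO II, function points, or a plain work-breakdown structure with three-point estimates per task), the shape is the same: size the work first, convert size to effort, convert effort to money, then add contingency.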

    Read the article

  • Meta Description or Title For Post Contents

    - by Raj
    I have a site that has posts without titles. You can think of them as being a lot like Twitter tweets. Should I put the post contents in the meta title tag or the description tag? If I put the post contents in one of the tags, what should I put in the other? My challenge is that we have very short amounts of content with no titles, and I want to avoid having too many duplicate titles or descriptions. We do have things like user name, full name, date, etc., available.
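    One pattern that keeps both tags unique (a sketch only; the names and site suffix are invented for illustration) is to build the title from the author plus a truncated excerpt, and put the fuller text plus metadata in the description:

        <title>Jane Doe: "Shipped the new build today, feedback..." - ExampleSite</title>
        <meta name="description" content="Posted by Jane Doe (@janedoe) on 12 June: Shipped the new build today, feedback welcome from early testers.">

    Because the excerpt differs per post, titles and descriptions stay distinct even without real headlines.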

    Read the article

  • Getting Started with Amazon Web Services in NetBeans IDE

    - by Geertjan
    When you need to connect to Amazon Web Services, NetBeans IDE gives you a nice start. You can drag and drop the "itemSearch" service into a Java source file and then various Amazon files are generated for you. From there, you need to do a little bit of work because the request to Amazon needs to be signed before it can be used. Here are some references and places that got me started:
    http://associates-amazon.s3.amazonaws.com/signed-requests/helper/index.html
    http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSGettingStartedGuide/AWSCredentials.html
    https://affiliate-program.amazon.com/gp/flex/advertising/api/sign-in.html
    You definitely need to sign up to the Amazon Associates program and also register/create an Access Key ID, which will get you a Secret Key as well. Here's a simple Main class that I created that hooks into the generated RestConnection/RestResponse code created by NetBeans IDE:

        public static void main(String[] args) {
            try {
                String searchIndex = "Books";
                String keywords = "Romeo and Juliet";
                RestResponse result = AmazonAssociatesService.itemSearch(searchIndex, keywords);
                String dataAsString = result.getDataAsString();
                // crude extraction of the first <Author> element from the XML response
                int start = dataAsString.indexOf("<Author>") + 8;
                int end = dataAsString.indexOf("</Author>");
                System.out.println(dataAsString.substring(start, end));
            } catch (Exception ex) {
                ex.printStackTrace();
            }
        }

    Then I deleted the generated properties file and the authenticator and changed the generated AmazonAssociatesService.java file to the following:

        public class AmazonAssociatesService {

            private static void sleep(long millis) {
                try {
                    Thread.sleep(millis);
                } catch (Throwable th) {
                }
            }

            public static RestResponse itemSearch(String searchIndex, String keywords) throws IOException {
                SignedRequestsHelper helper;
                RestConnection conn = null;
                Map queryMap = new HashMap();
                queryMap.put("Service", "AWSECommerceService");
                queryMap.put("AssociateTag", "myAssociateTag");
                queryMap.put("AWSAccessKeyId", "myAccessKeyId");
                queryMap.put("Operation", "ItemSearch");
                queryMap.put("SearchIndex", searchIndex);
                queryMap.put("Keywords", keywords);
                try {
                    // sign the query with the secret key, then build the connection from the signed URL
                    helper = SignedRequestsHelper.getInstance(
                            "ecs.amazonaws.com",
                            "myAccessKeyId",
                            "mySecretKey");
                    String sign = helper.sign(queryMap);
                    conn = new RestConnection(sign);
                } catch (IllegalArgumentException | UnsupportedEncodingException | NoSuchAlgorithmException | InvalidKeyException ex) {
                    // signing failures are swallowed here; conn stays null and the call below will fail
                }
                sleep(1000); // stay under Amazon's request-rate limit
                return conn.get(null);
            }
        }

    Finally, I copied this class into my application, which you can see is referred to above: http://code.google.com/p/amazon-product-advertising-api-sample/source/browse/src/com/amazon/advertising/api/sample/SignedRequestsHelper.java
    Here's the completed app, mostly generated via the drag/drop shown at the start, but slightly edited as shown above. That's all; now everything works as you'd expect.

    Read the article

< Previous Page | 33 34 35 36 37 38 39 40 41 42 43 44  | Next Page >