Search Results

  • Postfix Problem (helo/hostname mismatch)!

    - by CuSS
    Hi all, I have a server, and it is returning an error for one email address only (all other mail in that domain is working). How can I fix it? The error is below:

    May 17 11:43:56 webserver postfix/policyd-weight[5596]: weighted check: IN_DYN_PBL_SPAMHAUS=3.25 NOT_IN_SBL_XBL_SPAMHAUS=-1.5 NOT_IN_SPAMCOP=-1.5 NOT_IN_BL_NJABL=-1.5 DSBL_ORG=ERR(0) CL_IP_NE_HELO=4.75 RESOLVED_IP_IS_NOT_HELO=1.5 HELO_NUMERIC=10.625 (check from: .eticagest. - helo: .[10.0.0.17]. - helo-domain: .17].) FROM_NOT_FAILED_HELO(DOMAIN)=6.25; <client=188.80.139.211> <helo=[10.0.0.17]> <[email protected]> <[email protected]>; rate: 21.875
    May 17 11:43:56 webserver postfix/policyd-weight[5596]: decided action=550 Mail appeared to be SPAM or forged. Ask your Mail/DNS-Administrator to correct HELO and DNS MX settings or to get removed from DNSBLs; MTA helo: [10.0.0.17], MTA hostname: bl15-139-211.dsl.telepac.pt[188.80.139.211] (helo/hostname mismatch); <client=188.80.139.211> <helo=[10.0.0.17]> <[email protected]> <[email protected]>; delay: 6s
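
    The decisive scores in that log are CL_IP_NE_HELO, HELO_NUMERIC and the final helo/hostname mismatch: the sending machine announces itself with the private address literal [10.0.0.17] instead of a resolvable public hostname, so policyd-weight treats the mail as forged. If the sender's MTA is also Postfix, a minimal sketch of the fix on that machine would be the following (mail.eticagest.pt is a hypothetical name; use an FQDN that actually resolves to the sending IP):

    # /etc/postfix/main.cf on the *sending* server (hypothetical hostname)
    myhostname = mail.eticagest.pt
    smtp_helo_name = $myhostname

    followed by "postfix reload". Alternatively, the receiving side can whitelist this client in policyd-weight, but fixing the HELO also helps with other receivers' spam filters.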

  • FreeBSD slow transfers - RFC 1323 scaling issue?

    - by Trey
    I think I may be having an issue with window scaling (RFC 1323) and am hoping that someone can enlighten me on what's going on.

    Server: FreeBSD 9, apache22, serving a static 100MB zip file. 192.168.18.30
    Client: Mac OS X 10.6, Firefox. 192.168.17.47
    Network: only a switch between them; the subnet is 192.168.16/22. (In this test I also have dummynet filtering simulating an 80ms ping time on all IP traffic. I've seen nearly identical traces with a "real" setup, with real internet traffic/latency as well.)

    Questions: Does this look normal? Is packet #2 specifying a window size of 65535 and a scale of 512? Is packet #5 then shrinking the window size so it can use the 512 scale and still keep the overall calculated window size near 64K? Why is the window scale so high?

    Here are the first 6 packets from Wireshark. For packets 5 and 6 I've included the details showing the window size and scaling factor being used for the data transfer.

    Code:
    No. Time     Source        Destination   Protocol Length Info
    108 6.699922 192.168.17.47 192.168.18.30 TCP   78 49190 > http [SYN] Seq=0 Win=65535 Len=0 MSS=1460 WS=8 TSval=945617489 TSecr=0 SACK_PERM=1
    115 6.781971 192.168.18.30 192.168.17.47 TCP   74 http > 49190 [SYN, ACK] Seq=0 Ack=1 Win=65535 Len=0 MSS=1460 WS=512 SACK_PERM=1 TSval=2617517338 TSecr=945617489
    116 6.782218 192.168.17.47 192.168.18.30 TCP   66 49190 > http [ACK] Seq=1 Ack=1 Win=524280 Len=0 TSval=945617490 TSecr=2617517338
    117 6.782220 192.168.17.47 192.168.18.30 HTTP 490 GET /utils/speedtest/large.file.zip HTTP/1.1
    118 6.867070 192.168.18.30 192.168.17.47 TCP  375 [TCP segment of a reassembled PDU]

    Details:
    Transmission Control Protocol, Src Port: http (80), Dst Port: 49190 (49190), Seq: 1, Ack: 425, Len: 309
    Source port: http (80)
    Destination port: 49190 (49190)
    [Stream index: 4]
    Sequence number: 1 (relative sequence number)
    [Next sequence number: 310 (relative sequence number)]
    Acknowledgement number: 425 (relative ack number)
    Header length: 32 bytes
    Flags: 0x018 (PSH, ACK)
    Window size value: 130
    [Calculated window size: 66560]
    [Window size scaling factor: 512]
    Checksum: 0xd182 [validation disabled]
    Options: (12 bytes)
        No-Operation (NOP)
        No-Operation (NOP)
        Timestamps: TSval 2617517423, TSecr 945617490
    [SEQ/ACK analysis]
    TCP segment data (309 bytes)

    Note: originally posted at http://forums.freebsd.org/showthread.php?t=32552
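
    For what it's worth, the numbers in the trace are internally consistent: the server advertises WS=512 in its SYN-ACK, so the window size value of 130 in packet 5 works out to 130 × 512 = 66,560 bytes, which is exactly the "calculated window size" Wireshark shows and is just over 64K. Likewise the client's Win=524280 with WS=8 is 65535 × 8. The scale factor is typically chosen from the maximum receive buffer the stack is prepared to grow to, not from the initial 64K window, which is why it can look surprisingly large.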

  • procmail doesn't execute PHP script

    - by Phliplip
    Hi, I have set up a Kannel SMS gateway on my FreeBSD 7.2 box, and the service works great. I'm now trying to set up an email2sms feature. For this I have created a system user called kannel, and all mail is forwarded to this user. In kannel's home dir I have the following files:

    -rw-r--r-- 1 kannel kannel  81B 17 jan 09:50 .procmailrc
    lrwxr-x--- 1 root   kannel  58B 14 jan 13:24 email2sms.php @ -> some-what-some-where
    -rw-rw-rw- 1 root   kannel 5,8K 17 jan 09:52 log.email2sms
    -rw------- 1 kannel kannel 1,3K 17 jan 09:50 procmail.log
    -rw-r----- 1 root   kannel 606B 14 jan 13:28 rawmail.txt

    The file email2sms.php is a symlink to a PHP script (a ZendFramework application) that takes the email from STDIN and uses ZendFramework to parse that mail into an object. It then makes an HTTP request to the SMS gateway. The PHP script works.

    Content of .procmailrc:

    LOGFILE=$HOME/procmail.log
    VERBOSE=yes
    :0
    | php email2sms.php >> log.email2sms

    From the last sent email I have this in procmail.log:

    procmail: [97744] Mon Jan 17 09:50:40 2011
    procmail: [97744] Mon Jan 17 09:50:40 2011
    procmail: Assigning "LASTFOLDER= php email2sms.php >> log.email2sms"
    procmail: Executing " php email2sms.php >> log.email2sms"
    procmail: Notified comsat: "kannel@:/home/user/kannel/ php email2sms.php >> log.email2sms"
    From [email protected] Mon Jan 17 09:50:40 2011
    Subject: asdf as
    Folder: php email2sms.php >> log.email2sms 2600

    But there is no new output to log.email2sms, and the script should output the subject of the email. If I sudo as the kannel user and pipe a file with a raw email to the script, it executes just fine:

    [root@webserver /home/user/kannel]# sudo -u kannel cat rawmail.txt | php email2sms.php >> log.email2sms

    And the command outputs to log.email2sms as desired. Any ideas, guys?
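
    One thing worth checking: procmail runs recipes with a minimal environment, so a bare "php" may not be on its PATH, and relative paths in the recipe don't necessarily resolve where you expect. A minimal sketch of a more defensive .procmailrc (the /usr/local/bin/php path is an assumption; confirm it with "which php" as the kannel user):

    # .procmailrc sketch: explicit PATH, absolute paths, capture stderr too
    LOGFILE=$HOME/procmail.log
    VERBOSE=yes
    PATH=/bin:/usr/bin:/usr/local/bin
    :0
    | /usr/local/bin/php $HOME/email2sms.php >> $HOME/log.email2sms 2>&1

    Redirecting stderr into the log should at least surface why the pipe produces no output when run from procmail but works from a shell.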

  • Host couldn't be reached by domain name, only by IP: Apache's fault?

    - by MaxArt
    I have this Windows Server 2003 R2 32-bit machine running Apache 2.4.2 with OpenSSL 1.0.1c and PHP 5.4.5 via mod_fcgid 2.3.7. This config worked just fine for some hours, but then the site couldn't be reached with its domain name, say www.example.com, while it could still be reached by its IP address. In particular, https://www.example.com/ yielded a connection error, while http://123.1.2.3/ worked just fine. Yes, first https then http. Error and access logs were clean, i.e. they showed no signs of problems, just the usual messages, which stopped while the site couldn't be reached. After some investigation, a simple restart of Apache solved the problem. Unfortunately, I didn't have the chance to test whether https://123.1.2.3/ worked as well, or whether http://www.example.com/ was still redirected to https as usual. So, does anyone have any idea of what happened, before I get tired of Apache and ditch it in favor of Nginx?

    Edit: some log information. The last line of sslerror.log is from 90 minutes before the problem occurred, so I guess it's not important. ssl_request.log shows nothing interesting either; these are the last two lines before the problem:

    [28/Aug/2012:17:47:54 +0200] x.x.x.x TLSv1.1 ECDHE-RSA-AES256-SHA "GET /login HTTP/1.1" 1183
    [28/Aug/2012:17:47:45 +0200] y.y.y.y TLSv1 ECDHE-RSA-AES256-SHA "POST /upf HTTP/1.1" 73

    The previous lines are all the same and don't seem interesting, except for 4 lines like these, 30-40 seconds before the problem:

    [28/Aug/2012:17:47:14 +0200] z.z.z.z TLSv1 ECDHE-RSA-AES256-SHA "-" -

    These are the corresponding lines from sslaccess.log:

    z.z.z.z - - [28/Aug/2012:17:47:14 +0200] "-" 408 -
    ...
    x.x.x.x - - [28/Aug/2012:17:47:54 +0200] "GET /login HTTP/1.1" 200 1183
    y.y.y.y - - [28/Aug/2012:17:47:45 +0200] "POST /upf HTTP/1.1" 200 73
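
    A hypothetical way to narrow it down next time it happens: the 408 entries are clients that opened a connection and never sent a complete request, which can tie up workers, but the first thing to pin down is whether the failure is name resolution or the web server itself. A sketch, assuming curl 7.21.3+ on a client machine (the domain and IP below are the placeholders from the question):

    nslookup www.example.com
    curl -vk --resolve www.example.com:443:123.1.2.3 https://www.example.com/

    If the second command succeeds while a plain https://www.example.com/ fails, the problem is DNS rather than Apache; if both fail, it's the server.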

  • Passing data between Android ListActivities in Java

    - by Will Janes
    I am new to Android! I am having a problem getting this code to work. Basically I go from one list activity to another, passing the text of the clicked list item through the intent to the new list activity, retrieve that text in the new list activity, and then perform an HTTP request based on the value of that list item.

    LogCat

    04-05 17:47:32.370: E/AndroidRuntime(30135): FATAL EXCEPTION: main
    04-05 17:47:32.370: E/AndroidRuntime(30135): java.lang.ClassCastException: android.widget.LinearLayout
    04-05 17:47:32.370: E/AndroidRuntime(30135): at com.thickcrustdesigns.ufood.CatogPage$1.onItemClick(CatogPage.java:66)
    04-05 17:47:32.370: E/AndroidRuntime(30135): at android.widget.AdapterView.performItemClick(AdapterView.java:284)
    04-05 17:47:32.370: E/AndroidRuntime(30135): at android.widget.ListView.performItemClick(ListView.java:3731)
    04-05 17:47:32.370: E/AndroidRuntime(30135): at android.widget.AbsListView$PerformClick.run(AbsListView.java:1959)
    04-05 17:47:32.370: E/AndroidRuntime(30135): at android.os.Handler.handleCallback(Handler.java:587)
    04-05 17:47:32.370: E/AndroidRuntime(30135): at android.os.Handler.dispatchMessage(Handler.java:92)
    04-05 17:47:32.370: E/AndroidRuntime(30135): at android.os.Looper.loop(Looper.java:130)
    04-05 17:47:32.370: E/AndroidRuntime(30135): at android.app.ActivityThread.main(ActivityThread.java:3691)
    04-05 17:47:32.370: E/AndroidRuntime(30135): at java.lang.reflect.Method.invokeNative(Native Method)
    04-05 17:47:32.370: E/AndroidRuntime(30135): at java.lang.reflect.Method.invoke(Method.java:507)
    04-05 17:47:32.370: E/AndroidRuntime(30135): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:907)
    04-05 17:47:32.370: E/AndroidRuntime(30135): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:665)
    04-05 17:47:32.370: E/AndroidRuntime(30135): at dalvik.system.NativeStart.main(Native Method)

    ListActivity 1

    package com.thickcrustdesigns.ufood;

    import java.util.ArrayList;
    import org.apache.http.NameValuePair;
    import org.apache.http.message.BasicNameValuePair;
    import org.json.JSONException;
    import org.json.JSONObject;
    import android.app.ListActivity;
    import android.content.Intent;
    import android.os.Bundle;
    import android.view.View;
    import android.widget.AdapterView;
    import android.widget.AdapterView.OnItemClickListener;
    import android.widget.Button;
    import android.widget.ListView;
    import android.widget.TextView;

    public class CatogPage extends ListActivity {
        ListView listView1;
        Button btn_bk;

        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.definition_main);
            btn_bk = (Button) findViewById(R.id.btn_bk);
            listView1 = (ListView) findViewById(android.R.id.list);
            ArrayList<NameValuePair> nvp = new ArrayList<NameValuePair>();
            nvp.add(new BasicNameValuePair("request", "categories"));
            ArrayList<JSONObject> jsondefs = Request.fetchData(this, nvp);
            String[] defs = new String[jsondefs.size()];
            for (int i = 0; i < jsondefs.size(); i++) {
                try {
                    defs[i] = jsondefs.get(i).getString("Name");
                } catch (JSONException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
            }
            uFoodAdapter adapter = new uFoodAdapter(this, R.layout.definition_list, defs);
            listView1.setAdapter(adapter);
            ListView lv = getListView();
            lv.setOnItemClickListener(new OnItemClickListener() {
                @Override
                public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
                    TextView tv = (TextView) view;
                    String p = tv.getText().toString();
                    Intent i = new Intent(getApplicationContext(), Results.class);
                    i.putExtra("category", p);
                    startActivity(i);
                }
            });
            btn_bk.setOnClickListener(new View.OnClickListener() {
                public void onClick(View arg0) {
                    Intent i = new Intent(getApplicationContext(), UFoodAppActivity.class);
                    startActivity(i);
                }
            });
        }
    }

    ListActivity 2

    package com.thickcrustdesigns.ufood;

    import java.util.ArrayList;
    import org.apache.http.NameValuePair;
    import org.apache.http.message.BasicNameValuePair;
    import org.json.JSONException;
    import org.json.JSONObject;
    import android.app.ListActivity;
    import android.os.Bundle;
    import android.widget.ListView;

    public class Results extends ListActivity {
        ListView listView1;

        enum Category {
            Chicken, Beef, Chinese, Cocktails, Curry, Deserts, Fish,
            ForOne { public String toString() { return "For One"; } },
            Lamb,
            LightBites { public String toString() { return "Light Bites"; } },
            Pasta, Pork, Vegetarian
        }

        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            this.setContentView(R.layout.definition_main);
            listView1 = (ListView) findViewById(android.R.id.list);
            Bundle data = getIntent().getExtras();
            String category = data.getString("category");
            Category cat = Category.valueOf(category);
            String value = null;
            switch (cat) {
            case Chicken: value = "Chicken"; break;
            case Beef: value = "Beef"; break;
            case Chinese: value = "Chinese"; break;
            case Cocktails: value = "Cocktails"; break;
            case Curry: value = "Curry"; break;
            case Deserts: value = "Deserts"; break;
            case Fish: value = "Fish"; break;
            case ForOne: value = "ForOne"; break;
            case Lamb: value = "Lamb"; break;
            case LightBites: value = "LightBites"; break;
            case Pasta: value = "Pasta"; break;
            case Pork: value = "Pork"; break;
            case Vegetarian: value = "Vegetarian";
            }
            ArrayList<NameValuePair> nvp = new ArrayList<NameValuePair>();
            nvp.add(new BasicNameValuePair("request", "category"));
            nvp.add(new BasicNameValuePair("cat", value));
            ArrayList<JSONObject> jsondefs = Request.fetchData(this, nvp);
            String[] defs = new String[jsondefs.size()];
            for (int i = 0; i < jsondefs.size(); i++) {
                try {
                    defs[i] = jsondefs.get(i).getString("Name");
                } catch (JSONException e) {
                    // TODO Auto-generated catch block
                    e.printStackTrace();
                }
            }
            uFoodAdapter adapter = new uFoodAdapter(this, R.layout.definition_list, defs);
            listView1.setAdapter(adapter);
        }
    }

    Request

    package com.thickcrustdesigns.ufood;

    import java.io.BufferedReader;
    import java.io.InputStream;
    import java.io.InputStreamReader;
    import java.util.ArrayList;
    import org.apache.http.HttpEntity;
    import org.apache.http.HttpResponse;
    import org.apache.http.NameValuePair;
    import org.apache.http.client.HttpClient;
    import org.apache.http.client.entity.UrlEncodedFormEntity;
    import org.apache.http.client.methods.HttpPost;
    import org.apache.http.impl.client.DefaultHttpClient;
    import org.json.JSONArray;
    import org.json.JSONObject;
    import android.content.Context;
    import android.util.Log;
    import android.widget.Toast;

    public class Request {
        @SuppressWarnings("null")
        public static ArrayList<JSONObject> fetchData(Context context, ArrayList<NameValuePair> nvp) {
            ArrayList<JSONObject> listItems = new ArrayList<JSONObject>();
            InputStream is = null;
            try {
                HttpClient httpclient = new DefaultHttpClient();
                HttpPost httppost = new HttpPost("http://co350-11d.projects02.glos.ac.uk/php/database.php");
                httppost.setEntity(new UrlEncodedFormEntity(nvp));
                HttpResponse response = httpclient.execute(httppost);
                HttpEntity entity = response.getEntity();
                is = entity.getContent();
            } catch (Exception e) {
                Log.e("log_tag", "Error in http connection" + e.toString());
            }
            // convert response to string
            String result = "";
            try {
                BufferedReader reader = new BufferedReader(new InputStreamReader(is, "iso-8859-1"), 8);
                InputStream stream = null;
                StringBuilder sb = null;
                while ((result = reader.readLine()) != null) {
                    sb.append(result + "\n");
                }
                stream.close();
                result = sb.toString();
            } catch (Exception e) {
                Log.e("log_tag", "Error converting result " + e.toString());
            }
            try {
                JSONArray jArray = new JSONArray(result);
                for (int i = 0; i < jArray.length(); i++) {
                    JSONObject jo = jArray.getJSONObject(i);
                    listItems.add(jo);
                }
            } catch (Exception e) {
                Toast.makeText(context.getApplicationContext(), "None Found!", Toast.LENGTH_LONG).show();
            }
            return listItems;
        }
    }

    Any help would be appreciated! Many thanks!
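
    The stack trace points at the cast in onItemClick: with a custom row layout, the view passed to onItemClick is the inflated row (here a LinearLayout), not the TextView inside it, so (TextView) view throws ClassCastException. A minimal sketch of a fix (R.id.text1 stands in for whatever ID the TextView actually has in definition_list.xml):

    @Override
    public void onItemClick(AdapterView<?> parent, View view, int position, long id) {
        // 'view' is the whole row layout; either look up the TextView inside it...
        TextView tv = (TextView) view.findViewById(R.id.text1); // hypothetical ID
        String p = tv.getText().toString();
        // ...or skip the view entirely and ask the adapter for the item:
        // String p = parent.getItemAtPosition(position).toString();
        Intent i = new Intent(getApplicationContext(), Results.class);
        i.putExtra("category", p);
        startActivity(i);
    }

    One more thing to watch once the cast is fixed: Category.valueOf matches the enum constant's name, not its toString(), so a displayed label like "For One" would throw IllegalArgumentException for the ForOne constant.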

  • dbus dependency with yum

    - by Hengjie
    Whenever I try to run yum update I get the following error:

    [root@server ~]# yum update Loaded plugins: dellsysid, fastestmirror Loading mirror speeds from cached hostfile * base: mirror01.idc.hinet.net * extras: mirror01.idc.hinet.net * rpmforge: fr2.rpmfind.net * updates: mirror01.idc.hinet.net Excluding Packages in global exclude list Finished Setting up Update Process Resolving Dependencies --> Running transaction check ---> Package NetworkManager.x86_64 1:0.7.0-13.el5 set to be updated ---> Package NetworkManager-glib.x86_64 1:0.7.0-13.el5 set to be updated ---> Package SysVinit.x86_64 0:2.86-17.el5 set to be updated ---> Package acl.x86_64 0:2.2.39-8.el5 set to be updated ---> Package acpid.x86_64 0:1.0.4-12.el5 set to be updated ---> Package apr.x86_64 0:1.2.7-11.el5_6.5 set to be updated ---> Package aspell.x86_64 12:0.60.3-12 set to be updated ---> Package audit.x86_64 0:1.8-2.el5 set to be updated ---> Package audit-libs.x86_64 0:1.8-2.el5 set to be updated ---> Package audit-libs-python.x86_64 0:1.8-2.el5 set to be updated ---> Package authconfig.x86_64 0:5.3.21-7.el5 set to be updated ---> Package autofs.x86_64 1:5.0.1-0.rc2.163.el5 set to be updated ---> Package bash.x86_64 0:3.2-32.el5 set to be updated ---> Package bind.x86_64 30:9.3.6-20.P1.el5 set to be updated ---> Package bind-libs.x86_64 30:9.3.6-20.P1.el5 set to be updated ---> Package bind-utils.x86_64 30:9.3.6-20.P1.el5 set to be updated ---> Package binutils.x86_64 0:2.17.50.0.6-20.el5 set to be updated ---> Package centos-release.x86_64 10:5-8.el5.centos set to be updated ---> Package centos-release-notes.x86_64 0:5.8-0 set to be updated ---> Package coreutils.x86_64 0:5.97-34.el5_8.1 set to be updated ---> Package cpp.x86_64 0:4.1.2-52.el5 set to be updated ---> Package cpuspeed.x86_64 1:1.2.1-10.el5 set to be updated ---> Package crash.x86_64 0:5.1.8-1.el5.centos set to be updated ---> Package cryptsetup-luks.x86_64 0:1.0.3-8.el5 set to be updated ---> Package cups.x86_64 1:1.3.7-30.el5 set to be updated ---> Package cups-libs.x86_64 1:1.3.7-30.el5 set to be updated ---> Package curl.x86_64 0:7.15.5-15.el5 set to be updated --> Processing Dependency: dbus = 1.1.2-15.el5_6 for package: dbus-libs ---> Package dbus.x86_64 0:1.1.2-16.el5_7 set to be updated ---> Package dbus-libs.x86_64 0:1.1.2-16.el5_7 set to be updated ---> Package device-mapper.x86_64 0:1.02.67-2.el5 set to be updated ---> Package device-mapper-event.x86_64 0:1.02.67-2.el5 set to be updated ---> Package device-mapper-multipath.x86_64 0:0.4.7-48.el5_8.1 set to be updated ---> Package dhclient.x86_64 12:3.0.5-31.el5 set to be updated ---> Package dmidecode.x86_64 1:2.11-1.el5 set to be updated ---> Package dmraid.x86_64 0:1.0.0.rc13-65.el5 set to be updated ---> Package dmraid-events.x86_64 0:1.0.0.rc13-65.el5 set to be updated ---> Package dump.x86_64 0:0.4b41-6.el5 set to be updated ---> Package e2fsprogs.x86_64 0:1.39-33.el5 set to be updated ---> Package e2fsprogs-devel.x86_64 0:1.39-33.el5 set to be updated ---> Package e2fsprogs-libs.x86_64 0:1.39-33.el5 set to be updated ---> Package ecryptfs-utils.x86_64 0:75-8.el5 set to be updated ---> Package file.x86_64 0:4.17-21 set to be updated ---> Package finger.x86_64 0:0.17-33 set to be updated ---> Package firstboot-tui.x86_64 0:1.4.27.9-1.el5.centos set to be updated ---> Package freetype.x86_64 0:2.2.1-28.el5_7.2 set to be updated ---> Package freetype-devel.x86_64 0:2.2.1-28.el5_7.2 set to be updated ---> Package ftp.x86_64 0:0.17-37.el5 set to be updated ---> Package gamin.x86_64
0:0.1.7-10.el5 set to be updated ---> Package gamin-python.x86_64 0:0.1.7-10.el5 set to be updated ---> Package gawk.x86_64 0:3.1.5-15.el5 set to be updated ---> Package gcc.x86_64 0:4.1.2-52.el5 set to be updated ---> Package gcc-c++.x86_64 0:4.1.2-52.el5 set to be updated ---> Package glibc.i686 0:2.5-81.el5_8.1 set to be updated ---> Package glibc.x86_64 0:2.5-81.el5_8.1 set to be updated ---> Package glibc-common.x86_64 0:2.5-81.el5_8.1 set to be updated ---> Package glibc-devel.x86_64 0:2.5-81.el5_8.1 set to be updated ---> Package glibc-headers.x86_64 0:2.5-81.el5_8.1 set to be updated ---> Package gnutls.x86_64 0:1.4.1-7.el5_8.2 set to be updated ---> Package groff.x86_64 0:1.18.1.1-13.el5 set to be updated ---> Package gtk2.x86_64 0:2.10.4-21.el5_7.7 set to be updated ---> Package gzip.x86_64 0:1.3.5-13.el5.centos set to be updated ---> Package hmaccalc.x86_64 0:0.9.6-4.el5 set to be updated ---> Package htop.x86_64 0:1.0.1-2.el5.rf set to be updated ---> Package hwdata.noarch 0:0.213.26-1.el5 set to be updated ---> Package ifd-egate.x86_64 0:0.05-17.el5 set to be updated ---> Package initscripts.x86_64 0:8.45.42-1.el5.centos set to be updated ---> Package iproute.x86_64 0:2.6.18-13.el5 set to be updated ---> Package iptables.x86_64 0:1.3.5-9.1.el5 set to be updated ---> Package iptables-ipv6.x86_64 0:1.3.5-9.1.el5 set to be updated ---> Package iscsi-initiator-utils.x86_64 0:6.2.0.872-13.el5 set to be updated ---> Package kernel.x86_64 0:2.6.18-308.1.1.el5 set to be installed ---> Package kernel-headers.x86_64 0:2.6.18-308.1.1.el5 set to be updated ---> Package kpartx.x86_64 0:0.4.7-48.el5_8.1 set to be updated ---> Package krb5-devel.x86_64 0:1.6.1-70.el5 set to be updated ---> Package krb5-libs.x86_64 0:1.6.1-70.el5 set to be updated ---> Package krb5-workstation.x86_64 0:1.6.1-70.el5 set to be updated ---> Package ksh.x86_64 0:20100621-5.el5_8.1 set to be updated ---> Package kudzu.x86_64 0:1.2.57.1.26-3.el5.centos set to be updated ---> Package less.x86_64 0:436-9.el5 set to be updated ---> Package lftp.x86_64 0:3.7.11-7.el5 set to be updated ---> Package libX11.x86_64 0:1.0.3-11.el5_7.1 set to be updated ---> Package libX11-devel.x86_64 0:1.0.3-11.el5_7.1 set to be updated ---> Package libXcursor.x86_64 0:1.1.7-1.2 set to be updated ---> Package libacl.x86_64 0:2.2.39-8.el5 set to be updated ---> Package libgcc.x86_64 0:4.1.2-52.el5 set to be updated ---> Package libgomp.x86_64 0:4.4.6-3.el5.1 set to be updated ---> Package libpng.x86_64 2:1.2.10-16.el5_8 set to be updated ---> Package libpng-devel.x86_64 2:1.2.10-16.el5_8 set to be updated ---> Package libsmbios.x86_64 0:2.2.27-3.2.el5 set to be updated ---> Package libstdc++.x86_64 0:4.1.2-52.el5 set to be updated ---> Package libstdc++-devel.x86_64 0:4.1.2-52.el5 set to be updated ---> Package libsysfs.x86_64 0:2.1.0-1.el5 set to be updated ---> Package libusb.x86_64 0:0.1.12-6.el5 set to be updated ---> Package libvolume_id.x86_64 0:095-14.27.el5_7.1 set to be updated ---> Package libxml2.x86_64 0:2.6.26-2.1.15.el5_8.2 set to be updated ---> Package libxml2-python.x86_64 0:2.6.26-2.1.15.el5_8.2 set to be updated ---> Package logrotate.x86_64 0:3.7.4-12 set to be updated ---> Package lsof.x86_64 0:4.78-6 set to be updated ---> Package lvm2.x86_64 0:2.02.88-7.el5 set to be updated ---> Package m2crypto.x86_64 0:0.16-8.el5 set to be updated ---> Package man.x86_64 0:1.6d-2.el5 set to be updated ---> Package man-pages.noarch 0:2.39-20.el5 set to be updated ---> Package mcelog.x86_64 1:0.9pre-1.32.el5 set to be updated ---> 
Package mesa-libGL.x86_64 0:6.5.1-7.10.el5 set to be updated ---> Package mesa-libGL-devel.x86_64 0:6.5.1-7.10.el5 set to be updated ---> Package microcode_ctl.x86_64 2:1.17-1.56.el5 set to be updated ---> Package mkinitrd.x86_64 0:5.1.19.6-75.el5 set to be updated ---> Package mktemp.x86_64 3:1.5-24.el5 set to be updated --> Processing Dependency: nash = 5.1.19.6-68.el5_6.1 for package: mkinitrd ---> Package nash.x86_64 0:5.1.19.6-75.el5 set to be updated ---> Package net-snmp.x86_64 1:5.3.2.2-17.el5 set to be updated ---> Package net-snmp-devel.x86_64 1:5.3.2.2-17.el5 set to be updated ---> Package net-snmp-libs.x86_64 1:5.3.2.2-17.el5 set to be updated ---> Package net-snmp-utils.x86_64 1:5.3.2.2-17.el5 set to be updated ---> Package net-tools.x86_64 0:1.60-82.el5 set to be updated ---> Package nfs-utils.x86_64 1:1.0.9-60.el5 set to be updated ---> Package nfs-utils-lib.x86_64 0:1.0.8-7.9.el5 set to be updated ---> Package nscd.x86_64 0:2.5-81.el5_8.1 set to be updated ---> Package nspr.x86_64 0:4.8.9-1.el5_8 set to be updated ---> Package nspr-devel.x86_64 0:4.8.9-1.el5_8 set to be updated ---> Package nss.x86_64 0:3.13.1-5.el5_8 set to be updated ---> Package nss-devel.x86_64 0:3.13.1-5.el5_8 set to be updated ---> Package nss-tools.x86_64 0:3.13.1-5.el5_8 set to be updated ---> Package nss_ldap.x86_64 0:253-49.el5 set to be updated ---> Package ntp.x86_64 0:4.2.2p1-15.el5.centos.1 set to be updated ---> Package numactl.x86_64 0:0.9.8-12.el5_6 set to be updated ---> Package oddjob.x86_64 0:0.27-12.el5 set to be updated ---> Package oddjob-libs.x86_64 0:0.27-12.el5 set to be updated ---> Package openldap.x86_64 0:2.3.43-25.el5 set to be updated ---> Package openssh.x86_64 0:4.3p2-82.el5 set to be updated ---> Package openssh-clients.x86_64 0:4.3p2-82.el5 set to be updated ---> Package openssh-server.x86_64 0:4.3p2-82.el5 set to be updated ---> Package openssl.i686 0:0.9.8e-22.el5_8.1 set to be updated ---> Package openssl.x86_64 0:0.9.8e-22.el5_8.1 set to be updated ---> Package openssl-devel.x86_64 0:0.9.8e-22.el5_8.1 set to be updated ---> Package pam_krb5.x86_64 0:2.2.14-22.el5 set to be updated ---> Package pam_pkcs11.x86_64 0:0.5.3-26.el5 set to be updated ---> Package pango.x86_64 0:1.14.9-8.el5.centos.3 set to be updated ---> Package parted.x86_64 0:1.8.1-29.el5 set to be updated ---> Package pciutils.x86_64 0:3.1.7-5.el5 set to be updated ---> Package perl.x86_64 4:5.8.8-38.el5 set to be updated ---> Package perl-Compress-Raw-Bzip2.x86_64 0:2.037-1.el5.rf set to be updated ---> Package perl-Compress-Raw-Zlib.x86_64 0:2.037-1.el5.rf set to be updated ---> Package perl-rrdtool.x86_64 0:1.4.7-1.el5.rf set to be updated ---> Package poppler.x86_64 0:0.5.4-19.el5 set to be updated ---> Package poppler-utils.x86_64 0:0.5.4-19.el5 set to be updated ---> Package popt.x86_64 0:1.10.2.3-28.el5_8 set to be updated ---> Package postgresql-libs.x86_64 0:8.1.23-1.el5_7.3 set to be updated ---> Package procps.x86_64 0:3.2.7-18.el5 set to be updated ---> Package proftpd.x86_64 0:1.3.4a-1.el5.rf set to be updated --> Processing Dependency: perl(Mail::Sendmail) for package: proftpd ---> Package python.x86_64 0:2.4.3-46.el5 set to be updated ---> Package python-ctypes.x86_64 0:1.0.2-3.el5 set to be updated ---> Package python-libs.x86_64 0:2.4.3-46.el5 set to be updated ---> Package python-smbios.x86_64 0:2.2.27-3.2.el5 set to be updated ---> Package rhpl.x86_64 0:0.194.1-2 set to be updated ---> Package rmt.x86_64 0:0.4b41-6.el5 set to be updated ---> Package rng-utils.x86_64 1:2.0-5.el5 set to 
be updated ---> Package rpm.x86_64 0:4.4.2.3-28.el5_8 set to be updated ---> Package rpm-build.x86_64 0:4.4.2.3-28.el5_8 set to be updated ---> Package rpm-devel.x86_64 0:4.4.2.3-28.el5_8 set to be updated ---> Package rpm-libs.x86_64 0:4.4.2.3-28.el5_8 set to be updated ---> Package rpm-python.x86_64 0:4.4.2.3-28.el5_8 set to be updated ---> Package rrdtool.x86_64 0:1.4.7-1.el5.rf set to be updated ---> Package rsh.x86_64 0:0.17-40.el5_7.1 set to be updated ---> Package rsync.x86_64 0:3.0.6-4.el5_7.1 set to be updated ---> Package ruby.x86_64 0:1.8.5-24.el5 set to be updated ---> Package ruby-libs.x86_64 0:1.8.5-24.el5 set to be updated ---> Package sblim-sfcb.x86_64 0:1.3.11-49.el5 set to be updated ---> Package sblim-sfcc.x86_64 0:2.2.2-49.el5 set to be updated ---> Package selinux-policy.noarch 0:2.4.6-327.el5 set to be updated ---> Package selinux-policy-targeted.noarch 0:2.4.6-327.el5 set to be updated ---> Package setup.noarch 0:2.5.58-9.el5 set to be updated ---> Package shadow-utils.x86_64 2:4.0.17-20.el5 set to be updated ---> Package smartmontools.x86_64 1:5.38-3.el5 set to be updated ---> Package smbios-utils-bin.x86_64 0:2.2.27-3.2.el5 set to be updated ---> Package smbios-utils-python.x86_64 0:2.2.27-3.2.el5 set to be updated ---> Package sos.noarch 0:1.7-9.62.el5 set to be updated ---> Package srvadmin-omilcore.x86_64 0:6.5.0-1.452.1.el5 set to be updated ---> Package strace.x86_64 0:4.5.18-11.el5_8 set to be updated ---> Package subversion.x86_64 0:1.6.11-7.el5_6.4 set to be updated ---> Package sudo.x86_64 0:1.7.2p1-13.el5 set to be updated ---> Package sysfsutils.x86_64 0:2.1.0-1.el5 set to be updated ---> Package syslinux.x86_64 0:3.11-7 set to be updated ---> Package system-config-network-tui.noarch 0:1.3.99.21-1.el5 set to be updated ---> Package talk.x86_64 0:0.17-31.el5 set to be updated ---> Package tar.x86_64 2:1.15.1-31.el5 set to be updated ---> Package traceroute.x86_64 3:2.0.1-6.el5 set to be updated ---> Package tzdata.x86_64 0:2012b-3.el5 set to be updated ---> Package udev.x86_64 0:095-14.27.el5_7.1 set to be updated ---> Package util-linux.x86_64 0:2.13-0.59.el5 set to be updated ---> Package vixie-cron.x86_64 4:4.1-81.el5 set to be updated ---> Package wget.x86_64 0:1.11.4-3.el5_8.1 set to be updated ---> Package xinetd.x86_64 2:2.3.14-16.el5 set to be updated ---> Package yp-tools.x86_64 0:2.9-2.el5 set to be updated ---> Package ypbind.x86_64 3:1.19-12.el5_6.1 set to be updated ---> Package yum.noarch 0:3.2.22-39.el5.centos set to be updated ---> Package yum-dellsysid.x86_64 0:2.2.27-3.2.el5 set to be updated ---> Package yum-fastestmirror.noarch 0:1.1.16-21.el5.centos set to be updated ---> Package zlib.x86_64 0:1.2.3-4.el5 set to be updated ---> Package zlib-devel.x86_64 0:1.2.3-4.el5 set to be updated --> Running transaction check --> Processing Dependency: dbus = 1.1.2-15.el5_6 for package: dbus-libs --> Processing Dependency: nash = 5.1.19.6-68.el5_6.1 for package: mkinitrd ---> Package perl-Mail-Sendmail.noarch 0:0.79-1.2.el5.rf set to be updated base/filelists | 3.5 MB 00:00 dell-omsa-indep/filelists | 195 kB 00:01 dell-omsa-specific/filelists | 1.0 kB 00:00 extras/filelists_db | 224 kB 00:00 rpmforge/filelists | 4.8 MB 00:06 updates/filelists_db | 715 kB 00:00 --> Finished Dependency Resolution dbus-libs-1.1.2-15.el5_6.i386 from installed has depsolving problems --> Missing Dependency: dbus = 1.1.2-15.el5_6 is needed by package dbus-libs-1.1.2-15.el5_6.i386 (installed) mkinitrd-5.1.19.6-68.el5_6.1.i386 from installed has depsolving problems --> 
    Missing Dependency: nash = 5.1.19.6-68.el5_6.1 is needed by package mkinitrd-5.1.19.6-68.el5_6.1.i386 (installed)
    Error: Missing Dependency: nash = 5.1.19.6-68.el5_6.1 is needed by package mkinitrd-5.1.19.6-68.el5_6.1.i386 (installed)
    Error: Missing Dependency: dbus = 1.1.2-15.el5_6 is needed by package dbus-libs-1.1.2-15.el5_6.i386 (installed)
    You could try using --skip-broken to work around the problem
    You could try running: package-cleanup --problems
                           package-cleanup --dupes
                           rpm -Va --nofiles --nodigest
    The program package-cleanup is found in the yum-utils package.

    I have tried running package-cleanup --dupes and package-cleanup --problems, but to no avail.
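
    For what it's worth, both failures look like the classic multilib leftover on CentOS 5: the x86_64 dbus and mkinitrd packages are being updated while stale i386 counterparts stay pinned to the old versions. A sketch of one common way out (the package names are taken from the error text above; double-check with package-cleanup before removing anything):

    # list duplicate/orphaned packages first
    package-cleanup --dupes
    # remove the stale 32-bit packages, then retry the update
    rpm -e dbus-libs-1.1.2-15.el5_6.i386 mkinitrd-5.1.19.6-68.el5_6.1.i386
    yum update

    Removing i386 libraries can break 32-bit applications that depend on them, so this is only safe if nothing on the box needs the 32-bit dbus stack.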

  • Should I bother to block these rather lame attempts at hacking my server?

    - by The Journeyman geek
    I'm running a LAMP stack, with no phpMyAdmin installed (yes). While poking through my Apache server logs I noticed things like:

    74.208.75.29 - - [16/Mar/2010:02:53:45 +0800] "POST http://74.208.75.29:6667/ HTTP/1.0" 404 481 "-" "-"
    74.208.75.29 - - [16/Mar/2010:02:53:45 +0800] "CONNECT 74.208.75.29:6667 HTTP/1.0" 405 547 "-" "-"
    66.184.178.58 - - [16/Mar/2010:13:27:59 +0800] "GET / HTTP/1.1" 200 1170 "-" "Mozilla/4.0 (compatible; MSIE 5.5; Windows 98)"
    200.78.247.148 - - [16/Mar/2010:15:26:05 +0800] "GET /w00tw00t.at.ISC.SANS.DFind:) HTTP/1.1" 400 506 "-" "-"
    206.47.160.224 - - [16/Mar/2010:17:27:57 +0800] "GET / HTTP/1.1" 200 1170 "-" "Mozilla/4.0 (compatible; MSIE 5.5; Windows 98)"
    190.220.14.195 - - [17/Mar/2010:01:28:02 +0800] "GET //phpmyadmin/config/config.inc.php?p=phpinfo(); HTTP/1.1" 404 480 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98)"
    190.220.14.195 - - [17/Mar/2010:01:28:03 +0800] "GET //pma/config/config.inc.php?p=phpinfo(); HTTP/1.1" 404 476 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98)"
    190.220.14.195 - - [17/Mar/2010:01:28:04 +0800] "GET //admin/config/config.inc.php?p=phpinfo(); HTTP/1.1" 404 478 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98)"
    190.220.14.195 - - [17/Mar/2010:01:28:05 +0800] "GET //dbadmin/config/config.inc.php?p=phpinfo(); HTTP/1.1" 404 479 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98)"
    190.220.14.195 - - [17/Mar/2010:01:28:05 +0800] "GET //mysql/config/config.inc.php?p=phpinfo(); HTTP/1.1" 404 479 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98)"
    190.220.14.195 - - [17/Mar/2010:01:28:06 +0800] "GET //php-my-admin/config/config.inc.php?p=phpinfo(); HTTP/1.1" 404 482 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows 98)"

    What exactly is happening? Is it a really lame attempt at hacking in? Should I bother blocking the IP addresses these come from, or just leave it?
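
    These look like automated scanners probing for open proxies (the CONNECT to port 6667, a common IRC port) and for unprotected phpMyAdmin installs; since phpMyAdmin isn't installed, every probe 404s and is harmless noise. If you do decide to block, a one-off firewall rule per offending address is the simplest sketch, and a tool like fail2ban automates the same idea from the logs:

    iptables -A INPUT -s 190.220.14.195 -j DROP

    Manually chasing individual scanner IPs rarely pays off, though; they change constantly.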

  • Sorting and Re-arranging List of HashMaps

    - by HonorGod
    I have a List which is a straightforward representation of a database table. I am trying to sort and apply some magic after the data is loaded into a List of HashMaps. In my case this is the only hard and fast way of doing it, because I have a rules engine that actually updates the values in the HashMap after several computations. Here is a sample data representation of the HashMaps (a List of HashMap):

    {fromDate=Wed Mar 17 10:54:12 EDT 2010, eventId=21, toDate=Tue Mar 23 10:54:12 EDT 2010, actionId=1234}
    {fromDate=Wed Mar 17 10:54:12 EDT 2010, eventId=11, toDate=Wed Mar 17 10:54:12 EDT 2010, actionId=456}
    {fromDate=Sat Mar 20 10:54:12 EDT 2010, eventId=20, toDate=Thu Apr 01 10:54:12 EDT 2010, actionId=1234}
    {fromDate=Wed Mar 24 10:54:12 EDT 2010, eventId=22, toDate=Sat Mar 27 10:54:12 EDT 2010, actionId=1234}
    {fromDate=Wed Mar 17 10:54:12 EDT 2010, eventId=11, toDate=Fri Mar 26 10:54:12 EDT 2010, actionId=1234}
    {fromDate=Sat Mar 20 10:54:12 EDT 2010, eventId=11, toDate=Wed Mar 31 10:54:12 EDT 2010, actionId=1234}
    {fromDate=Mon Mar 15 10:54:12 EDT 2010, eventId=12, toDate=Wed Mar 17 10:54:12 EDT 2010, actionId=567}

    I am trying to achieve a couple of things:

    1) Sort the list by actionId and eventId, after which the data would look like:

    {fromDate=Wed Mar 17 10:54:12 EDT 2010, eventId=11, toDate=Wed Mar 17 10:54:12 EDT 2010, actionId=456}
    {fromDate=Mon Mar 15 10:54:12 EDT 2010, eventId=12, toDate=Wed Mar 17 10:54:12 EDT 2010, actionId=567}
    {fromDate=Wed Mar 24 10:54:12 EDT 2010, eventId=22, toDate=Sat Mar 27 10:54:12 EDT 2010, actionId=1234}
    {fromDate=Wed Mar 17 10:54:12 EDT 2010, eventId=21, toDate=Tue Mar 23 10:54:12 EDT 2010, actionId=1234}
    {fromDate=Sat Mar 20 10:54:12 EDT 2010, eventId=20, toDate=Thu Apr 01 10:54:12 EDT 2010, actionId=1234}
    {fromDate=Wed Mar 17 10:54:12 EDT 2010, eventId=11, toDate=Fri Mar 26 10:54:12 EDT 2010, actionId=1234}
    {fromDate=Sat Mar 20 10:54:12 EDT 2010, eventId=11, toDate=Wed Mar 31 10:54:12 EDT 2010, actionId=1234}

    2) If we group the above list by actionId, they resolve into 3 groups: actionId=1234, actionId=567 and actionId=456. Now here is my question: within each group, for records having the same eventId, I need to update the records so that they all span the widest fromDate-to-toDate range. Meaning, if you consider the last two rows, they have the same actionId = 1234 and the same eventId = 11. We pick the earliest fromDate of those 2 records, which is Wed Mar 17 10:54:12, and the latest toDate, which is Wed Mar 31 10:54:12, and update both records' fromDate and toDate to Wed Mar 17 10:54:12 and Wed Mar 31 10:54:12 respectively. Any ideas?

    PS: I already have some pseudo code to start with.
    import java.util.ArrayList;
    import java.util.Calendar;
    import java.util.Collections;
    import java.util.Comparator;
    import java.util.Date;
    import java.util.HashMap;
    import java.util.List;
    import org.apache.commons.lang.builder.CompareToBuilder;

    public class Tester {
        boolean ascending = true;
        boolean sortInstrumentIdAsc = true;
        boolean sortEventTypeIdAsc = true;

        public static void main(String args[]) {
            Tester tester = new Tester();
            tester.printValues();
        }

        public void printValues() {
            List<HashMap<String, Object>> list = new ArrayList<HashMap<String, Object>>();
            HashMap<String, Object> map = new HashMap<String, Object>();
            map.put("actionId", new Integer(1234));
            map.put("eventId", new Integer(21));
            map.put("fromDate", getDate(1));
            map.put("toDate", getDate(7));
            list.add(map);
            map = new HashMap<String, Object>();
            map.put("actionId", new Integer(456));
            map.put("eventId", new Integer(11));
            map.put("fromDate", getDate(1));
            map.put("toDate", getDate(1));
            list.add(map);
            map = new HashMap<String, Object>();
            map.put("actionId", new Integer(1234));
            map.put("eventId", new Integer(20));
            map.put("fromDate", getDate(4));
            map.put("toDate", getDate(16));
            list.add(map);
            map = new HashMap<String, Object>();
            map.put("actionId", new Integer(1234));
            map.put("eventId", new Integer(22));
            map.put("fromDate", getDate(8));
            map.put("toDate", getDate(11));
            list.add(map);
            map = new HashMap<String, Object>();
            map.put("actionId", new Integer(1234));
            map.put("eventId", new Integer(11));
            map.put("fromDate", getDate(1));
            map.put("toDate", getDate(10));
            list.add(map);
            map = new HashMap<String, Object>();
            map.put("actionId", new Integer(1234));
            map.put("eventId", new Integer(11));
            map.put("fromDate", getDate(4));
            map.put("toDate", getDate(15));
            list.add(map);
            map = new HashMap<String, Object>();
            map.put("actionId", new Integer(567));
            map.put("eventId", new Integer(12));
            map.put("fromDate", getDate(-1));
            map.put("toDate", getDate(1));
            list.add(map);

            System.out.println("\n Before Sorting \n ");
            for (int j = 0; j < list.size(); j++)
                System.out.println(list.get(j));

            Collections.sort(list, new HashMapComparator2());

            System.out.println("\n After Sorting \n ");
            for (int j = 0; j < list.size(); j++)
                System.out.println(list.get(j));
        }

        public static Date getDate(int days) {
            Calendar cal = Calendar.getInstance();
            cal.setTime(new Date());
            cal.add(Calendar.DATE, days);
            return cal.getTime();
        }

        public class HashMapComparator2 implements Comparator {
            public int compare(Object object1, Object object2) {
                if (ascending == true) {
                    return new CompareToBuilder()
                            .append(((HashMap) object1).get("actionId"), ((HashMap) object2).get("actionId"))
                            .append(((HashMap) object2).get("eventId"), ((HashMap) object1).get("eventId"))
                            .toComparison();
                } else {
                    return new CompareToBuilder()
                            .append(((HashMap) object2).get("actionId"), ((HashMap) object1).get("actionId"))
                            .append(((HashMap) object2).get("eventId"), ((HashMap) object1).get("eventId"))
                            .toComparison();
                }
            }
        }
    }
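
    For step 2, a minimal sketch of the widening pass, using only java.util and applied to the same list after sorting: key each record by actionId plus eventId, find the earliest fromDate and latest toDate per key, then write those bounds back into every record of the group:

    // widen fromDate/toDate so that records sharing (actionId, eventId)
    // all span the earliest fromDate to the latest toDate
    HashMap<String, Date[]> bounds = new HashMap<String, Date[]>();
    for (HashMap<String, Object> row : list) {
        String key = row.get("actionId") + "|" + row.get("eventId");
        Date from = (Date) row.get("fromDate");
        Date to = (Date) row.get("toDate");
        Date[] b = bounds.get(key);
        if (b == null) {
            bounds.put(key, new Date[] { from, to });
        } else {
            if (from.before(b[0])) b[0] = from;
            if (to.after(b[1]))    b[1] = to;
        }
    }
    for (HashMap<String, Object> row : list) {
        Date[] b = bounds.get(row.get("actionId") + "|" + row.get("eventId"));
        row.put("fromDate", b[0]);
        row.put("toDate", b[1]);
    }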

  • Override Linq-to-Sql Datetime.ToString() Default Convert Values

    - by snmcdonald
    Is it possible to override the default CONVERT style? I would like the default CONVERT function to always return ISO 8601 style 126.

    Steps to reproduce:

    DROP TABLE DATES;
    CREATE TABLE DATES (
        ID INT IDENTITY(1,1) PRIMARY KEY,
        MYDATE DATETIME DEFAULT(GETUTCDATE())
    );
    INSERT INTO DATES DEFAULT VALUES;
    INSERT INTO DATES DEFAULT VALUES;
    INSERT INTO DATES DEFAULT VALUES;
    INSERT INTO DATES DEFAULT VALUES;
    SELECT CONVERT(NVARCHAR,MYDATE) AS CONVERTED,
           CONVERT(NVARCHAR(4000),MYDATE,126) AS ISO,
           MYDATE
    FROM DATES
    WHERE MYDATE LIKE 'Feb%'

    Output:

    CONVERTED                   ISO                          MYDATE
    --------------------------- ---------------------------- -----------------------
    Feb  8 2011 12:17AM         2011-02-08T00:17:03.040      2011-02-08 00:17:03.040
    Feb  8 2011 12:17AM         2011-02-08T00:17:03.040      2011-02-08 00:17:03.040
    Feb  8 2011 12:17AM         2011-02-08T00:17:03.040      2011-02-08 00:17:03.040
    Feb  8 2011 12:17AM         2011-02-08T00:17:03.040      2011-02-08 00:17:03.040

    LINQ to SQL calls CONVERT(NVARCHAR,@p) when I call ToString(). However, I am displaying all my data in the ISO 8601 format, so I would like to override the database default, if possible, to CONVERT(NVARCHAR,@p,126). I am using dynamic LINQ to SQL, as demoed by ScottGu, to process my data.

    PropertyInfo piField = typeof(T).GetProperty(rule.field);
    if (piField != null)
    {
        Type typeField = piField.PropertyType;
        if (typeField.IsGenericType && typeField.GetGenericTypeDefinition().Equals(typeof(Nullable<>)))
        {
            filter = filter
                .Select(x => x)
                .Where(string.Format("{0} != null", rule.field))
                .Where(string.Format("{0}.Value.ToString().Contains(\"{1}\")", rule.field, rule.data));
        }
        else
        {
            filter = filter
                .Select(x => x)
                .Where(string.Format("{0} != null", rule.field))
                .Where(string.Format("{0}.ToString().Contains(\"{1}\")", rule.field, rule.data));
        }
    }

    I was hoping my property would convert the expression from CONVERT(NVARCHAR,@p) to CONVERT(NVARCHAR,@p,126); however, I get a NotSupportedException: "... has no supported translation to SQL."

    public string IsoDate
    {
        get
        {
            if (SUBMIT_DATE.HasValue)
            {
                return SUBMIT_DATE.Value.ToString("o");
            }
            else
            {
                return string.Empty;
            }
        }
    }
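
    As far as I know, LINQ to SQL hard-codes the style-less CONVERT for DateTime.ToString() and offers no hook to change the style, and a computed property like IsoDate cannot be translated at all. One workaround sketch, shown with a typed lambda for clarity (SUBMIT_DATE is the nullable column from the question): turn the user's search text into a date range and let the provider compare DATETIMEs natively instead of comparing strings:

    // assumes rule.data holds a parseable date; fall back to the old
    // string Contains() path when it does not
    DateTime day;
    if (DateTime.TryParse(rule.data, out day))
    {
        DateTime next = day.AddDays(1);
        filter = filter.Where(x => x.SUBMIT_DATE != null
                                && x.SUBMIT_DATE.Value >= day
                                && x.SUBMIT_DATE.Value < next);
    }

    The same range predicate can be expressed in the dynamic LINQ string syntax with parameter placeholders if the field name is only known at runtime.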

  • Set primary key with two integers

    - by user299196
    I have a table with primary key (ColumnA, ColumnB). I want to make a function or procedure that, when passed two integers, will insert a row into the table, but make sure the larger integer always goes into ColumnA and the smaller one into ColumnB. So SetKeysWithTheseNumbers(17, 19) would produce:

    |------------------|
    | ColumnA | ColumnB|
    |------------------|
    | 19      | 17     |
    |------------------|

    and SetKeysWithTheseNumbers(19, 17) would produce the same thing:

    |------------------|
    | ColumnA | ColumnB|
    |------------------|
    | 19      | 17     |
    |------------------|
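
    A minimal T-SQL sketch of the idea (MyTable is a placeholder, since the question doesn't name the table): order the pair with CASE expressions before inserting:

    CREATE PROCEDURE SetKeysWithTheseNumbers @x INT, @y INT
    AS
    BEGIN
        INSERT INTO MyTable (ColumnA, ColumnB)
        VALUES (CASE WHEN @x >= @y THEN @x ELSE @y END,  -- the larger value
                CASE WHEN @x >= @y THEN @y ELSE @x END); -- the smaller value
    END

    The same normalization can also be enforced in the table itself with a CHECK (ColumnA >= ColumnB) constraint, so out-of-order inserts fail loudly instead of silently.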

  • C# DateTime manipulation

    - by kb
    Hi, I have a list of dates for an event in the following format:

    13/04/2010 10:30:00
    13/04/2010 13:30:00
    14/04/2010 10:30:00
    14/04/2010 13:30:00
    15/04/2010 10:30:00
    15/04/2010 13:30:00
    16/04/2010 10:30:00
    17/04/2010 11:00:00
    17/04/2010 13:30:00
    17/04/2010 15:30:00

    How can I output the list so that each date is only displayed once, followed by the times for that date? The above list would then look something like this:

    13/04/2010 10:30:00 13:30:00
    14/04/2010 10:30:00 13:30:00
    15/04/2010 10:30:00 13:30:00
    16/04/2010 10:30:00
    17/04/2010 11:00:00 13:30:00 15:30:00
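
    A minimal LINQ sketch, assuming the input is already a List<DateTime> named dates: group by the calendar date and print the date once followed by its times:

    // requires: using System; using System.Linq; using System.Collections.Generic;
    foreach (var g in dates.GroupBy(d => d.Date).OrderBy(g => g.Key))
    {
        // the date once...
        Console.Write(g.Key.ToString("dd/MM/yyyy"));
        // ...then each time belonging to that date
        foreach (var t in g.OrderBy(d => d))
            Console.Write(" " + t.ToString("HH:mm:ss"));
        Console.WriteLine();
    }

    If the list arrives as strings, parse them first with DateTime.ParseExact(s, "dd/MM/yyyy HH:mm:ss", CultureInfo.InvariantCulture) from System.Globalization.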

  • sort date fields to obtain earliest date

    - by manu
    In my database, dates are stored in DD-MM-yyyy format. How can I sort them to obtain the earliest date?

    Cursor c = myDb.query(TABLE, new String[]{"dob"}, null, null, null, null, "dob");

    I have asked it to order by the dob field, but it's not ordered. This is the output for the above query:

    01-03 17:14:51.595: VERBOSE/ORDER DOB(1431): 01-11-1977
    01-03 17:14:51.595: VERBOSE/ORDER DOB(1431): 01-12-1988
    01-03 17:14:51.614: VERBOSE/ORDER DOB(1431): 15-01-1977
    01-03 17:14:51.656: VERBOSE/ORDER DOB(1431): 31-01-1988
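
    SQLite compares TEXT columns lexicographically, so DD-MM-yyyy strings sort by day first. A common sketch is to rebuild the value as yyyyMMdd inside ORDER BY (long term, storing dates as yyyy-MM-dd avoids the problem entirely):

    -- order 'DD-MM-yyyy' text chronologically by rearranging it to 'yyyyMMdd'
    SELECT dob FROM mytable
    ORDER BY substr(dob, 7, 4) || substr(dob, 4, 2) || substr(dob, 1, 2);

    In the Android query() call above, that substr expression goes into the orderBy argument in place of "dob" (mytable stands in for whatever TABLE names).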

  • Upload File to Windows Azure Blob in Chunks through ASP.NET MVC, JavaScript and HTML5

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/07/01/upload-file-to-windows-azure-blob-in-chunks-through-asp.net.aspxMany people are using Windows Azure Blob Storage to store their data in the cloud. Blob storage provides 99.9% availability with easy-to-use API through .NET SDK and HTTP REST. For example, we can store JavaScript files, images, documents in blob storage when we are building an ASP.NET web application on a Web Role in Windows Azure. Or we can store our VHD files in blob and mount it as a hard drive in our cloud service. If you are familiar with Windows Azure, you should know that there are two kinds of blob: page blob and block blob. The page blob is optimized for random read and write, which is very useful when you need to store VHD files. The block blob is optimized for sequential/chunk read and write, which has more common usage. Since we can upload block blob in blocks through BlockBlob.PutBlock, and them commit them as a whole blob with invoking the BlockBlob.PutBlockList, it is very powerful to upload large files, as we can upload blocks in parallel, and provide pause-resume feature. There are many documents, articles and blog posts described on how to upload a block blob. Most of them are focus on the server side, which means when you had received a big file, stream or binaries, how to upload them into blob storage in blocks through .NET SDK.  But the problem is, how can we upload these large files from client side, for example, a browser. This questioned to me when I was working with a Chinese customer to help them build a network disk production on top of azure. The end users upload their files from the web portal, and then the files will be stored in blob storage from the Web Role. My goal is to find the best way to transform the file from client (end user’s machine) to the server (Web Role) through browser. In this post I will demonstrate and describe what I had done, to upload large file in chunks with high speed, and save them as blocks into Windows Azure Blob Storage.   Traditional Upload, Works with Limitation The simplest way to implement this requirement is to create a web page with a form that contains a file input element and a submit button. 1: @using (Html.BeginForm("About", "Index", FormMethod.Post, new { enctype = "multipart/form-data" })) 2: { 3: <input type="file" name="file" /> 4: <input type="submit" value="upload" /> 5: } And then in the backend controller, we retrieve the whole content of this file and upload it in to the blob storage through .NET SDK. We can split the file in blocks and upload them in parallel and commit. The code had been well blogged in the community. 
1: [HttpPost] 2: public ActionResult About(HttpPostedFileBase file) 3: { 4: var container = _client.GetContainerReference("test"); 5: container.CreateIfNotExists(); 6: var blob = container.GetBlockBlobReference(file.FileName); 7: var blockDataList = new Dictionary<string, byte[]>(); 8: using (var stream = file.InputStream) 9: { 10: var blockSizeInKB = 1024; 11: var offset = 0; 12: var index = 0; 13: while (offset < stream.Length) 14: { 15: var readLength = Math.Min(1024 * blockSizeInKB, (int)stream.Length - offset); 16: var blockData = new byte[readLength]; 17: offset += stream.Read(blockData, 0, readLength); 18: blockDataList.Add(Convert.ToBase64String(BitConverter.GetBytes(index)), blockData); 19:  20: index++; 21: } 22: } 23:  24: Parallel.ForEach(blockDataList, (bi) => 25: { 26: blob.PutBlock(bi.Key, new MemoryStream(bi.Value), null); 27: }); 28: blob.PutBlockList(blockDataList.Select(b => b.Key).ToArray()); 29:  30: return RedirectToAction("About"); 31: } This works perfect if we selected an image, a music or a small video to upload. But if I selected a large file, let’s say a 6GB HD-movie, after upload for about few minutes the page will be shown as below and the upload will be terminated. In ASP.NET there is a limitation of request length and the maximized request length is defined in the web.config file. It’s a number which less than about 4GB. So if we want to upload a really big file, we cannot simply implement in this way. Also, in Windows Azure, a cloud service network load balancer will terminate the connection if exceed the timeout period. From my test the timeout looks like 2 - 3 minutes. Hence, when we need to upload a large file we cannot just use the basic HTML elements. Besides the limitation mentioned above, the simple HTML file upload cannot provide rich upload experience such as chunk upload, pause and pause-resume. So we need to find a better way to upload large file from the client to the server.   Upload in Chunks through HTML5 and JavaScript In order to break those limitation mentioned above we will try to upload the large file in chunks. This takes some benefit to us such as - No request size limitation: Since we upload in chunks, we can define the request size for each chunks regardless how big the entire file is. - No timeout problem: The size of chunks are controlled by us, which means we should be able to make sure request for each chunk upload will not exceed the timeout period of both ASP.NET and Windows Azure load balancer. It was a big challenge to upload big file in chunks until we have HTML5. There are some new features and improvements introduced in HTML5 and we will use them to implement our solution.   In HTML5, the File interface had been improved with a new method called “slice”. It can be used to read part of the file by specifying the start byte index and the end byte index. For example if the entire file was 1024 bytes, file.slice(512, 768) will read the part of this file from the 512nd byte to 768th byte, and return a new object of interface called "Blob”, which you can treat as an array of bytes. In fact,  a Blob object represents a file-like object of immutable, raw data. The File interface is based on Blob, inheriting blob functionality and expanding it to support files on the user's system. For more information about the Blob please refer here. File and Blob is very useful to implement the chunk upload. 
We will use File interface to represent the file the user selected from the browser and then use File.slice to read the file in chunks in the size we wanted. For example, if we wanted to upload a 10MB file with 512KB chunks, then we can read it in 512KB blobs by using File.slice in a loop.   Assuming we have a web page as below. User can select a file, an input box to specify the block size in KB and a button to start upload. 1: <div> 2: <input type="file" id="upload_files" name="files[]" /><br /> 3: Block Size: <input type="number" id="block_size" value="512" name="block_size" />KB<br /> 4: <input type="button" id="upload_button_blob" name="upload" value="upload (blob)" /> 5: </div> Then we can have the JavaScript function to upload the file in chunks when user clicked the button. 1: <script type="text/javascript"> 1: 2: $(function () { 3: $("#upload_button_blob").click(function () { 4: }); 5: });</script> Firstly we need to ensure the client browser supports the interfaces we are going to use. Just try to invoke the File, Blob and FormData from the “window” object. If any of them is “undefined” the condition result will be “false” which means your browser doesn’t support these premium feature and it’s time for you to get your browser updated. FormData is another new feature we are going to use in the future. It could generate a temporary form for us. We will use this interface to create a form with chunk and associated metadata when invoked the service through ajax. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: if (window.File && window.Blob && window.FormData) { 4: alert("Your brwoser is awesome, let's rock!"); 5: } 6: else { 7: alert("Oh man plz update to a modern browser before try is cool stuff out."); 8: return; 9: } 10: }); Each browser supports these interfaces by their own implementation and currently the Blob, File and File.slice are supported by Chrome 21, FireFox 13, IE 10, Opera 12 and Safari 5.1 or higher. After that we worked on the files the user selected one by one since in HTML5, user can select multiple files in one file input box. 1: var files = $("#upload_files")[0].files; 2: for (var i = 0; i < files.length; i++) { 3: var file = files[i]; 4: var fileSize = file.size; 5: var fileName = file.name; 6: } Next, we calculated the start index and end index for each chunks based on the size the user specified from the browser. We put them into an array with the file name and the index, which will be used when we upload chunks into Windows Azure Blob Storage as blocks since we need to specify the target blob name and the block index. At the same time we will store the list of all indexes into another variant which will be used to commit blocks into blob in Azure Storage once all chunks had been uploaded successfully. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 
4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10:  11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: var blockSizeInKB = $("#block_size").val(); 14: var blockSize = blockSizeInKB * 1024; 15: var blocks = []; 16: var offset = 0; 17: var index = 0; 18: var list = ""; 19: while (offset < fileSize) { 20: var start = offset; 21: var end = Math.min(offset + blockSize, fileSize); 22:  23: blocks.push({ 24: name: fileName, 25: index: index, 26: start: start, 27: end: end 28: }); 29: list += index + ","; 30:  31: offset = end; 32: index++; 33: } 34: } 35: }); Now we have all chunks’ information ready. The next step should be upload them one by one to the server side, and at the server side when received a chunk it will upload as a block into Blob Storage, and finally commit them with the index list through BlockBlobClient.PutBlockList. But since all these invokes are ajax calling, which means not synchronized call. So we need to introduce a new JavaScript library to help us coordinate the asynchronize operation, which named “async.js”. You can download this JavaScript library here, and you can find the document here. I will not explain this library too much in this post. We will put all procedures we want to execute as a function array, and pass into the proper function defined in async.js to let it help us to control the execution sequence, in series or in parallel. Hence we will define an array and put the function for chunk upload into this array. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4:  5: // start to upload each files in chunks 6: var files = $("#upload_files")[0].files; 7: for (var i = 0; i < files.length; i++) { 8: var file = files[i]; 9: var fileSize = file.size; 10: var fileName = file.name; 11: // calculate the start and end byte index for each blocks(chunks) 12: // with the index, file name and index list for future using 13: ... ... 14:  15: // define the function array and push all chunk upload operation into this array 16: blocks.forEach(function (block) { 17: putBlocks.push(function (callback) { 18: }); 19: }); 20: } 21: }); 22: }); As you can see, I used File.slice method to read each chunks based on the start and end byte index we calculated previously, and constructed a temporary HTML form with the file name, chunk index and chunk data through another new feature in HTML5 named FormData. Then post this form to the backend server through jQuery.ajax. This is the key part of our solution. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 
13: // define the function array and push all chunk upload operation into this array 14: blocks.forEach(function (block) { 15: putBlocks.push(function (callback) { 16: // load blob based on the start and end index for each chunks 17: var blob = file.slice(block.start, block.end); 18: // put the file name, index and blob into a temporary from 19: var fd = new FormData(); 20: fd.append("name", block.name); 21: fd.append("index", block.index); 22: fd.append("file", blob); 23: // post the form to backend service (asp.net mvc controller action) 24: $.ajax({ 25: url: "/Home/UploadInFormData", 26: data: fd, 27: processData: false, 28: contentType: "multipart/form-data", 29: type: "POST", 30: success: function (result) { 31: if (!result.success) { 32: alert(result.error); 33: } 34: callback(null, block.index); 35: } 36: }); 37: }); 38: }); 39: } 40: }); Then we will invoke these functions one by one by using the async.js. And once all functions had been executed successfully I invoked another ajax call to the backend service to commit all these chunks (blocks) as the blob in Windows Azure Storage. 1: $("#upload_button_blob").click(function () { 2: // assert the browser support html5 3: ... ... 4: // start to upload each files in chunks 5: var files = $("#upload_files")[0].files; 6: for (var i = 0; i < files.length; i++) { 7: var file = files[i]; 8: var fileSize = file.size; 9: var fileName = file.name; 10: // calculate the start and end byte index for each blocks(chunks) 11: // with the index, file name and index list for future using 12: ... ... 13: // define the function array and push all chunk upload operation into this array 14: ... ... 15: // invoke the functions one by one 16: // then invoke the commit ajax call to put blocks into blob in azure storage 17: async.series(putBlocks, function (error, result) { 18: var data = { 19: name: fileName, 20: list: list 21: }; 22: $.post("/Home/Commit", data, function (result) { 23: if (!result.success) { 24: alert(result.error); 25: } 26: else { 27: alert("done!"); 28: } 29: }); 30: }); 31: } 32: }); That’s all in the client side. The outline of our logic would be - Calculate the start and end byte index for each chunks based on the block size. - Defined the functions of reading the chunk form file and upload the content to the backend service through ajax. - Execute the functions defined in previous step with “async.js”. - Commit the chunks by invoking the backend service in Windows Azure Storage finally.   Save Chunks as Blocks into Blob Storage In above we finished the client size JavaScript code. It uploaded the file in chunks to the backend service which we are going to implement in this step. We will use ASP.NET MVC as our backend service, and it will receive the chunks, upload into Windows Azure Bob Storage in blocks, then finally commit as one blob. As in the client side we uploaded chunks by invoking the ajax call to the URL "/Home/UploadInFormData", I created a new action under the Index controller and it only accepts HTTP POST request. 1: [HttpPost] 2: public JsonResult UploadInFormData() 3: { 4: var error = string.Empty; 5: try 6: { 7: } 8: catch (Exception e) 9: { 10: error = e.ToString(); 11: } 12:  13: return new JsonResult() 14: { 15: Data = new 16: { 17: success = string.IsNullOrWhiteSpace(error), 18: error = error 19: } 20: }; 21: } Then I retrieved the file name, index and the chunk content from the Request.Form object, which was passed from our client side. 
And then I used the Windows Azure SDK to create a blob container (in this case named "test") and a blob reference with the blob name (the same as the file name). Then I uploaded the chunk as a block of this blob with its index, since in Blob Storage each block must have an ID associated with it, so that finally we can put all blocks together as one blob by specifying the block ID list.

[HttpPost]
public JsonResult UploadInFormData()
{
    var error = string.Empty;
    try
    {
        var name = Request.Form["name"];
        var index = int.Parse(Request.Form["index"]);
        var file = Request.Files[0];
        var id = Convert.ToBase64String(BitConverter.GetBytes(index));

        var container = _client.GetContainerReference("test");
        container.CreateIfNotExists();
        var blob = container.GetBlockBlobReference(name);
        blob.PutBlock(id, file.InputStream, null);
    }
    catch (Exception e)
    {
        error = e.ToString();
    }

    return new JsonResult()
    {
        Data = new
        {
            success = string.IsNullOrWhiteSpace(error),
            error = error
        }
    };
}

Next, I created another action to commit the blocks into the blob once all chunks have been uploaded. Similarly, I retrieved the blob name from the Request.Form. I also retrieved the chunk ID list, which is the block ID list, from the Request.Form as a string, split it into a list, and then invoked the BlockBlob.PutBlockList method. After that our blob will be shown in the container and be ready to be downloaded.

[HttpPost]
public JsonResult Commit()
{
    var error = string.Empty;
    try
    {
        var name = Request.Form["name"];
        var list = Request.Form["list"];
        var ids = list
            .Split(',')
            .Where(id => !string.IsNullOrWhiteSpace(id))
            .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
            .ToArray();

        var container = _client.GetContainerReference("test");
        container.CreateIfNotExists();
        var blob = container.GetBlockBlobReference(name);
        blob.PutBlockList(ids);
    }
    catch (Exception e)
    {
        error = e.ToString();
    }

    return new JsonResult()
    {
        Data = new
        {
            success = string.IsNullOrWhiteSpace(error),
            error = error
        }
    };
}

Now we have finished all the code we need. (The original post shows a diagram of the whole upload process here.) Below is the full client-side JavaScript code.
<script type="text/javascript" src="~/Scripts/async.js"></script>
<script type="text/javascript">
    $(function () {
        $("#upload_button_blob").click(function () {
            // assert the browser supports html5
            if (window.File && window.Blob && window.FormData) {
                alert("Your browser is awesome, let's rock!");
            }
            else {
                alert("Oh man, plz update to a modern browser before trying this cool stuff out.");
                return;
            }

            // start to upload each file in chunks
            var files = $("#upload_files")[0].files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                var fileSize = file.size;
                var fileName = file.name;

                // calculate the start and end byte index for each block (chunk),
                // along with the index, file name and index list for later use
                var blockSizeInKB = $("#block_size").val();
                var blockSize = blockSizeInKB * 1024;
                var blocks = [];
                var offset = 0;
                var index = 0;
                var list = "";
                while (offset < fileSize) {
                    var start = offset;
                    var end = Math.min(offset + blockSize, fileSize);

                    blocks.push({
                        name: fileName,
                        index: index,
                        start: start,
                        end: end
                    });
                    list += index + ",";

                    offset = end;
                    index++;
                }

                // define the function array and push all chunk upload operations into this array
                var putBlocks = [];
                blocks.forEach(function (block) {
                    putBlocks.push(function (callback) {
                        // load a blob based on the start and end index of each chunk
                        var blob = file.slice(block.start, block.end);
                        // put the file name, index and blob into a temporary form
                        var fd = new FormData();
                        fd.append("name", block.name);
                        fd.append("index", block.index);
                        fd.append("file", blob);
                        // post the form to the backend service (asp.net mvc controller action)
                        $.ajax({
                            url: "/Home/UploadInFormData",
                            data: fd,
                            processData: false,
                            contentType: "multipart/form-data",
                            type: "POST",
                            success: function (result) {
                                if (!result.success) {
                                    alert(result.error);
                                }
                                callback(null, block.index);
                            }
                        });
                    });
                });

                // invoke the functions one by one,
                // then invoke the commit ajax call to put the blocks into a blob in azure storage
                async.series(putBlocks, function (error, result) {
                    var data = {
                        name: fileName,
                        list: list
                    };
                    $.post("/Home/Commit", data, function (result) {
                        if (!result.success) {
                            alert(result.error);
                        }
                        else {
                            alert("done!");
                        }
                    });
                });
            }
        });
    });
</script>

And below is the full ASP.NET MVC controller code.
public class HomeController : Controller
{
    private CloudStorageAccount _account;
    private CloudBlobClient _client;

    public HomeController()
        : base()
    {
        _account = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("DataConnectionString"));
        _client = _account.CreateCloudBlobClient();
    }

    public ActionResult Index()
    {
        ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application.";

        return View();
    }

    [HttpPost]
    public JsonResult UploadInFormData()
    {
        var error = string.Empty;
        try
        {
            var name = Request.Form["name"];
            var index = int.Parse(Request.Form["index"]);
            var file = Request.Files[0];
            var id = Convert.ToBase64String(BitConverter.GetBytes(index));

            var container = _client.GetContainerReference("test");
            container.CreateIfNotExists();
            var blob = container.GetBlockBlobReference(name);
            blob.PutBlock(id, file.InputStream, null);
        }
        catch (Exception e)
        {
            error = e.ToString();
        }

        return new JsonResult()
        {
            Data = new
            {
                success = string.IsNullOrWhiteSpace(error),
                error = error
            }
        };
    }

    [HttpPost]
    public JsonResult Commit()
    {
        var error = string.Empty;
        try
        {
            var name = Request.Form["name"];
            var list = Request.Form["list"];
            var ids = list
                .Split(',')
                .Where(id => !string.IsNullOrWhiteSpace(id))
                .Select(id => Convert.ToBase64String(BitConverter.GetBytes(int.Parse(id))))
                .ToArray();

            var container = _client.GetContainerReference("test");
            container.CreateIfNotExists();
            var blob = container.GetBlockBlobReference(name);
            blob.PutBlockList(ids);
        }
        catch (Exception e)
        {
            error = e.ToString();
        }

        return new JsonResult()
        {
            Data = new
            {
                success = string.IsNullOrWhiteSpace(error),
                error = error
            }
        };
    }
}

If we select a file in the browser, we will see our application upload chunks of the size we specified to the server through background ajax calls, and then commit all chunks as one blob. Afterwards we can find the blob in our Windows Azure Blob Storage.

Optimized by Parallel Upload

In the previous example we uploaded our file in chunks. This solved both the ASP.NET MVC request content size limitation and the Windows Azure load balancer timeout. But it might introduce a performance problem, since we upload the chunks in sequence. In order to improve upload performance, we can modify our client-side code a bit to invoke the upload operations in parallel. The good news is that the "async.js" library provides a parallel execution function. If you remember, the code that invokes the service to upload chunks used "async.series", which means all functions are executed in sequence. Now we will change this code to "async.parallel", which invokes all functions in parallel.

$("#upload_button_blob").click(function () {
    // assert the browser supports html5
    ... ...
    // start to upload each file in chunks
    var files = $("#upload_files")[0].files;
    for (var i = 0; i < files.length; i++) {
        var file = files[i];
        var fileSize = file.size;
        var fileName = file.name;
        // calculate the start and end byte index for each block (chunk),
        // along with the index, file name and index list for later use
        ... ...
        // define the function array and push all chunk upload operations into this array
        ... ...
        // invoke the functions in parallel,
        // then invoke the commit ajax call to put the blocks into a blob in azure storage
        async.parallel(putBlocks, function (error, result) {
            var data = {
                name: fileName,
                list: list
            };
            $.post("/Home/Commit", data, function (result) {
                if (!result.success) {
                    alert(result.error);
                }
                else {
                    alert("done!");
                }
            });
        });
    }
});

In this way all chunks will be uploaded to the server side at the same time to maximize the bandwidth usage. This should work if the file is not very large and the chunk size is not very small. But for a large file this might introduce another problem: too many ajax calls are sent to the server at the same time. So the best solution is to upload the chunks in parallel with a maximum concurrency limit. The code below specifies a concurrency limit of 4, which means at most 4 ajax calls can be in flight at the same time.

$("#upload_button_blob").click(function () {
    // assert the browser supports html5
    ... ...
    // start to upload each file in chunks
    var files = $("#upload_files")[0].files;
    for (var i = 0; i < files.length; i++) {
        var file = files[i];
        var fileSize = file.size;
        var fileName = file.name;
        // calculate the start and end byte index for each block (chunk),
        // along with the index, file name and index list for later use
        ... ...
        // define the function array and push all chunk upload operations into this array
        ... ...
        // invoke the functions in parallel with a concurrency limit of 4,
        // then invoke the commit ajax call to put the blocks into a blob in azure storage
        async.parallelLimit(putBlocks, 4, function (error, result) {
            var data = {
                name: fileName,
                list: list
            };
            $.post("/Home/Commit", data, function (result) {
                if (!result.success) {
                    alert(result.error);
                }
                else {
                    alert("done!");
                }
            });
        });
    }
});

Summary

In this post we discussed how to upload files in chunks to a backend service which then uploads them into Windows Azure Blob Storage as blocks. We focused on the frontend side and leveraged three new features introduced in HTML5:
- File.slice: read part of a file by specifying the start and end byte index.
- Blob: a file-like interface which contains part of the file content.
- FormData: a temporary form element with which we can pass a chunk, along with some metadata, to the backend service.

Then we discussed the performance considerations of chunk uploading. Sequential upload cannot provide maximum upload speed, but unlimited parallel upload might overwhelm the browser and the server if there are too many chunks. So we finally came up with the solution of uploading chunks in parallel with a concurrency limit. We also demonstrated how to utilize the "async.js" JavaScript library to help us control the asynchronous calls and the parallel limit.

Regarding the chunk size and the parallel limit value, there is no single "best" value. You need to test various combinations and find the best one for your particular scenario. It depends on the local bandwidth, the client machine cores, and the server side (Windows Azure Cloud Service Virtual Machine) cores, memory and bandwidth. Below is one of my performance test results. The client machine ran Windows 8 with IE 10 on 4 cores. I was using the Microsoft corporate network. The web site was hosted in the Windows Azure China North data center (in Beijing) with one small web role (1 core CPU at roughly 1.7GHz, 1.75GB memory, 100Mbps bandwidth).
The test cases were:
- Chunk size: 512KB, 1MB, 2MB, 4MB.
- Upload mode: sequential, parallel (unlimited), parallel with limit (4 threads, 8 threads).
- Chunk format: base64 string, binaries.
- Target file: 100MB.
- Each case was tested 3 times.

(The original post shows the resulting chart here.) Some thoughts, though not guidance or best practice:
- Parallel upload gets better performance than sequential upload.
- There is no significant performance improvement between parallel with 4 threads and with 8 threads.
- Transferring binaries gives better performance than base64 strings.
- In all cases, a chunk size of 1MB - 2MB gets the best performance (for the 100MB test file, a 1MB chunk size means about 100 requests per upload).

Hope this helps,
Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.


  • Node.js Adventure - When Node Flying in Wind

    - by Shaun
In the first post of this series I mentioned some popular modules in the community, such as underscore, async, etc. I also listed a module named "Wind (zh-CN)", which was created by a friend of mine, Jeff Zhao (zh-CN). Now I would like to use a separate post to introduce this module, since I feel it brings a new async programming style to not only Node.js but the whole JavaScript world. If you know, or have heard about, the new feature in C# 5.0 called "async and await", or if you have learnt F#, you will find that "Wind" brings a similar async programming experience to JavaScript. By using "Wind" we can write async code that looks like sync code. The callbacks, async states and exceptions will be handled by "Wind" automatically and transparently.

What's the Problem: Dense "Callback" Phobia

Let's first go back to my second post in this series. As I mentioned there, when we want to read some records from SQL Server we need to open the database connection and then execute the query. In Node.js all IO operations are designed around the async callback pattern, which means that when an operation is done, it invokes the function passed as its last parameter. For example, the database connection opening code would be like this:

sql.open(connectionString, function(error, conn) {
    if(error) {
        // some error handling code
    }
    else {
        // connection opened successfully
    }
});

And then if we need to query the database, the code would be like this, nested inside the previous function:

sql.open(connectionString, function(error, conn) {
    if(error) {
        // some error handling code
    }
    else {
        // connection opened successfully
        conn.queryRaw(command, function(error, results) {
            if(error) {
                // failed to execute this command
            }
            else {
                // records retrieved successfully
            }
        });
    }
});

Assuming we then need to copy some data from this database to another, we need to open another connection and execute the command within the function under the query function.

sql.open(connectionString, function(error, conn) {
    if(error) {
        // some error handling code
    }
    else {
        // connection opened successfully
        conn.queryRaw(command, function(error, results) {
            if(error) {
                // failed to execute this command
            }
            else {
                // records retrieved successfully
                target.open(targetConnectionString, function(error, t_conn) {
                    if(error) {
                        // connect failed
                    }
                    else {
                        t_conn.queryRaw(copy_command, function(error, results) {
                            if(error) {
                                // copy failed
                            }
                            else {
                                // and then, what do you want to do now...
                            }
                        });
                    }
                });
            }
        });
    }
});

This is just an example; in a real project the logic would be more complicated. This means our application can become messed up, with the business process fragmented across many callback functions. I would like to call this "Dense Callback Phobia". It is a real challenge to make such code straightforward and easy to read, something like below.
try
{
    // open source connection
    var s_conn = sqlConnect(s_connectionString);
    // retrieve data
    var results = sqlExecuteCommand(s_conn, s_command);

    // open target connection
    var t_conn = sqlConnect(t_connectionString);
    // prepare the copy command
    var t_command = getCopyCommand(results);
    // execute the copy command
    sqlExecuteCommand(s_conn, t_command);
}
catch (ex)
{
    // error handling
}

What's the Problem: Sync-styled Async Programming

Similar to the previous problem, the callback-styled async programming model makes the upcoming operation a part of the current operation, mixed in with the error handling code. So it's very hard to understand what on earth the code will do. And since Node.js uses non-blocking IO, we cannot simply invoke such operations one by one, as they will be executed concurrently. For example, in this post, when I tried to copy the records from Windows Azure SQL Database (a.k.a. WASD) to Windows Azure Table Storage, if I just inserted the data into table storage one by one and then printed the "Finished" message, I would see the message shown before the data had been copied. This is because all operations were executed at the same time. In order to make the copy operation and the print operation execute synchronously, I introduced a module named "async" and the code was changed as below.

async.forEach(results.rows,
    function (row, callback) {
        var resource = {
            "PartitionKey": row[1],
            "RowKey": row[0],
            "Value": row[2]
        };
        client.insertEntity(tableName, resource, function (error) {
            if (error) {
                callback(error);
            }
            else {
                console.log("entity inserted.");
                callback(null);
            }
        });
    },
    function (error) {
        if (error) {
            error["target"] = "insertEntity";
            res.send(500, error);
        }
        else {
            console.log("all done.");
            res.send(200, "Done!");
        }
    });

This ensured that the finish message would be printed only after all table entities had been inserted. But it cannot promise that the records are inserted in sequence. And it remains a challenge to make the code look sync-styled, like this:

try
{
    forEach(row in rows) {
        var entity = { /* ... */ };
        tableClient.insert(tableName, entity);
    }

    console.log("Finished");
}
catch (ex) {
    console.log(ex);
}

How "Wind" Helps

"Wind" is a JavaScript library which provides control flow in plain JavaScript for asynchronous programming (and more) without any additional pre-compiling steps. It's available on NPM, so we can install it through "npm install wind". Now let's create a very simple Node.js application as the example. This application will take some website URLs from the command arguments, try to retrieve the body length of each, and print them to the console, then at the end print "Finished". I'm going to use the "request" module to make the HTTP calls simple, so I also need to install it with the command "npm install request". The code would be like this.
var request = require("request");

// get the urls from the arguments; the first two arguments are `node.exe` and `fetch.js`
var args = process.argv.splice(2);

// main function
var main = function() {
    for(var i = 0; i < args.length; i++) {
        // get the url
        var url = args[i];
        // send the http request and try to get the response and body
        request(url, function(error, response, body) {
            if(!error && response.statusCode == 200) {
                // log the url and the body length
                console.log(
                    "%s: %d.",
                    response.request.uri.href,
                    body.length);
            }
            else {
                // log error
                console.log(error);
            }
        });
    }

    // finished
    console.log("Finished");
};

// execute the main function
main();

Let's execute this application. (I split the command across multiple lines for better reading.)

node fetch.js
    "http://www.igt.com/us-en.aspx"
    "http://www.igt.com/us-en/games.aspx"
    "http://www.igt.com/us-en/cabinets.aspx"
    "http://www.igt.com/us-en/systems.aspx"
    "http://www.igt.com/us-en/interactive.aspx"
    "http://www.igt.com/us-en/social-gaming.aspx"
    "http://www.igt.com/support.aspx"

Below is the output (the original post shows a console screenshot here). As you can see, the finish message was printed at the beginning, and the pages' lengths were retrieved in a different order than we specified. This is because the request and console-logging commands are executed asynchronously and concurrently. Now let's introduce "Wind" to make them execute in order, which means it will request the websites one by one and print the message at the end.

First of all we need to import the "Wind" package and make sure there's only one global variable named "Wind" — and ensure it's "Wind", not "wind".

var Wind = require("wind");

Next, we need to tell "Wind" which code will be executed asynchronously so that "Wind" can control the execution process. In this case, the "request" operation executes asynchronously, so we will create a "Task" by using a built-in helper function in "Wind" named Wind.Async.Task.create.

var requestBodyLengthAsync = function(url) {
    return Wind.Async.Task.create(function(t) {
        request(url, function(error, response, body) {
            if(error || response.statusCode != 200) {
                t.complete("failure", error);
            }
            else {
                var data =
                {
                    uri: response.request.uri.href,
                    length: body.length
                };
                t.complete("success", data);
            }
        });
    });
};

The code above creates a "Task" from the original request-calling code. In "Wind", a "Task" means an operation that will finish at some time in the future. A "Task" can be started by invoking its start() method, but no one knows when it will actually finish. The Wind.Async.Task.create helper creates a task for us. Its only parameter is a function where we put the actual operation, then notify the task object whether it finished successfully or failed by using the complete() method. In the code above I invoked the request method: if it retrieved the response successfully, I set the status of the task to "success" with the URL and body length; if it failed, I set the task to "failure" and passed the error out.

Next, we will change the main() function. In "Wind", if we want a function to be controllable by Wind we need to mark it as "async". This is done with the code below.
var main = eval(Wind.compile("async", function() {
}));

When the application is running, Wind will detect "eval(Wind.compile("async", function" and generate anonymous code from the body of the original function. The application will then run the anonymous code instead of the original one. In our example, the main function will look like this:

var main = eval(Wind.compile("async", function() {
    for(var i = 0; i < args.length; i++) {
        try
        {
            var result = $await(requestBodyLengthAsync(args[i]));
            console.log(
                "%s: %d.",
                result.uri,
                result.length);
        }
        catch (ex) {
            console.log(ex);
        }
    }

    console.log("Finished");
}));

As you can see, when I try to request a URL I use a new command named "$await". It tells Wind that the operation next to $await will be executed asynchronously, and the main flow should be paused until it finishes (or fails). So in this case, my application pauses until the first response is received, then prints its body length, then tries the next one. At the end, it prints the finish message.

Finally, execute the main function. The full code would be like this:

var request = require("request");
var Wind = require("wind");

var args = process.argv.splice(2);

var requestBodyLengthAsync = function(url) {
    return Wind.Async.Task.create(function(t) {
        request(url, function(error, response, body) {
            if(error || response.statusCode != 200) {
                t.complete("failure", error);
            }
            else {
                var data =
                {
                    uri: response.request.uri.href,
                    length: body.length
                };
                t.complete("success", data);
            }
        });
    });
};

var main = eval(Wind.compile("async", function() {
    for(var i = 0; i < args.length; i++) {
        try
        {
            var result = $await(requestBodyLengthAsync(args[i]));
            console.log(
                "%s: %d.",
                result.uri,
                result.length);
        }
        catch (ex) {
            console.log(ex);
        }
    }

    console.log("Finished");
}));

main().start();

Run our new application. At the beginning we will see the code compiled and generated by Wind; then we can see the pages were requested one by one, and at the end the finish message was printed. (The original post shows the console output here.) Below is the code Wind generated for us, with the original code alongside the compiled output.
// Original:
function () {
    for(var i = 0; i < args.length; i++) {
        try
        {
            var result = $await(requestBodyLengthAsync(args[i]));
            console.log(
                "%s: %d.",
                result.uri,
                result.length);
        }
        catch (ex) {
            console.log(ex);
        }
    }

    console.log("Finished");
}

// Compiled:
/* async << function () { */ (function () {
    var _builder_$0 = Wind.builders["async"];
    return _builder_$0.Start(this,
        _builder_$0.Combine(
            _builder_$0.Delay(function () {
                /* var i = 0; */ var i = 0;
                /* for ( */ return _builder_$0.For(function () {
                    /* ; i < args.length */ return i < args.length;
                }, function () {
                    /* ; i ++) { */ i ++;
                },
                /* try { */ _builder_$0.Try(
                    _builder_$0.Delay(function () {
                        /* var result = $await(requestBodyLengthAsync(args[i])); */ return _builder_$0.Bind(requestBodyLengthAsync(args[i]), function (result) {
                            /* console.log("%s: %d.", result.uri, result.length); */ console.log("%s: %d.", result.uri, result.length);
                            return _builder_$0.Normal();
                        });
                    }),
                    /* } catch (ex) { */ function (ex) {
                        /* console.log(ex); */ console.log(ex);
                        return _builder_$0.Normal();
                    /* } */ },
                    null
                )
                /* } */ );
            }),
            _builder_$0.Delay(function () {
                /* console.log("Finished"); */ console.log("Finished");
                return _builder_$0.Normal();
            })
        )
    );
/* } */ })

How Wind Works

Some people may raise a big concern when they find I used "eval" in my code, assuming that Wind uses "eval" to execute code dynamically — and "eval" performs very poorly. But I would say Wind does NOT use "eval" to run the code. It only uses "eval" as a flag to know which code should be compiled at runtime. When the code is first executed, Wind checks for "eval(Wind.compile("async", function", so it knows this function should be compiled. It then utilizes parse-js to analyze the inner JavaScript and generates the anonymous code in memory, rewriting the original code so that when the application runs, it uses the anonymous version instead of the original. Since the code generation is done once, when the application starts, it will keep using the generated code no matter how long our application runs and how many times the async function is invoked — there is no need to generate it again. So there is no significant performance hit when using Wind.

Wind in My Previous Demo

Let's adopt Wind into one of my previous demonstrations to see how it helps make our code simple, straightforward and easy to read and understand. In this post, when I implemented the functionality that copied records from my WASD to table storage, the logic was:
1. Open the database connection.
2. Execute a query to select all records from the table.
3. Recreate the table in Windows Azure Table Storage.
4. Create entities from each of the records retrieved previously, and insert them into table storage.
5. Finally, show a message as the HTTP response.

But as the screenshot in the original post shows, with so many callbacks and async operations it is very hard to follow the logic from the code. Now let's use Wind to rewrite our code. First of all, of course, we need the Wind package. Then we need to include the package files in the project and mark them as "Copy always". Add the Wind package to the source code, and pay attention to the variable name: you must use "Wind" instead of "wind".
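As a quick sanity check — my addition, not from the original post — you can confirm the install worked and that the module exposes the two entry points this section relies on:

// Hypothetical smoke test after `npm install wind`.
var Wind = require("wind");
console.log(typeof Wind.compile);           // expected: "function"
console.log(typeof Wind.Async.Task.create); // expected: "function"

The demo's actual imports follow: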
var express = require("express");
var async = require("async");
var sql = require("node-sqlserver");
var azure = require("azure");
var Wind = require("wind");

Now we need to create some async functions using Wind. All async operations should be wrapped so that they can be controlled by Wind: open database, retrieve records, recreate table (delete and create), and insert entity into table. Below are these new functions. All of them are created using Wind.Async.Task.create.

sql.openAsync = function (connectionString) {
    return Wind.Async.Task.create(function (t) {
        sql.open(connectionString, function (error, conn) {
            if (error) {
                t.complete("failure", error);
            }
            else {
                t.complete("success", conn);
            }
        });
    });
};

sql.queryAsync = function (conn, query) {
    return Wind.Async.Task.create(function (t) {
        conn.queryRaw(query, function (error, results) {
            if (error) {
                t.complete("failure", error);
            }
            else {
                t.complete("success", results);
            }
        });
    });
};

azure.recreateTableAsync = function (tableName) {
    return Wind.Async.Task.create(function (t) {
        client.deleteTable(tableName, function (error, successful, response) {
            console.log("delete table finished");
            client.createTableIfNotExists(tableName, function (error, successful, response) {
                console.log("create table finished");
                if (error) {
                    t.complete("failure", error);
                }
                else {
                    t.complete("success", null);
                }
            });
        });
    });
};

azure.insertEntityAsync = function (tableName, entity) {
    return Wind.Async.Task.create(function (t) {
        client.insertEntity(tableName, entity, function (error, entity, response) {
            if (error) {
                t.complete("failure", error);
            }
            else {
                t.complete("success", null);
            }
        });
    });
};

Then, in order to use these functions, we create a new function which contains all the steps for the data copying.

var copyRecords = eval(Wind.compile("async", function (req, res) {
    try {
    }
    catch (ex) {
        console.log(ex);
        res.send(500, "Internal error.");
    }
}));

Let's execute the steps one by one with the "$await" keyword introduced by Wind so that they are invoked in sequence. First, open the database connection.

var copyRecords = eval(Wind.compile("async", function (req, res) {
    try {
        // connect to the windows azure sql database
        var conn = $await(sql.openAsync(connectionString));
        console.log("connection opened");
    }
    catch (ex) {
        console.log(ex);
        res.send(500, "Internal error.");
    }
}));

Then retrieve all records over the database connection.

var copyRecords = eval(Wind.compile("async", function (req, res) {
    try {
        // connect to the windows azure sql database
        var conn = $await(sql.openAsync(connectionString));
        console.log("connection opened");
        // retrieve all records from the database
        var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]"));
        console.log("records selected. count = %d", results.rows.length);
    }
    catch (ex) {
        console.log(ex);
        res.send(500, "Internal error.");
    }
}));

Next, recreate the table, then create the entities and insert them into table storage.
var copyRecords = eval(Wind.compile("async", function (req, res) {
    try {
        // connect to the windows azure sql database
        var conn = $await(sql.openAsync(connectionString));
        console.log("connection opened");
        // retrieve all records from the database
        var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]"));
        console.log("records selected. count = %d", results.rows.length);
        if (results.rows.length > 0) {
            // recreate the table
            $await(azure.recreateTableAsync(tableName));
            console.log("table created");
            // insert records into table storage one by one
            for (var i = 0; i < results.rows.length; i++) {
                var entity = {
                    "PartitionKey": results.rows[i][1],
                    "RowKey": results.rows[i][0],
                    "Value": results.rows[i][2]
                };
                $await(azure.insertEntityAsync(tableName, entity));
                console.log("entity inserted");
            }
        }
    }
    catch (ex) {
        console.log(ex);
        res.send(500, "Internal error.");
    }
}));

Finally, send the response back to the browser.

var copyRecords = eval(Wind.compile("async", function (req, res) {
    try {
        // connect to the windows azure sql database
        var conn = $await(sql.openAsync(connectionString));
        console.log("connection opened");
        // retrieve all records from the database
        var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]"));
        console.log("records selected. count = %d", results.rows.length);
        if (results.rows.length > 0) {
            // recreate the table
            $await(azure.recreateTableAsync(tableName));
            console.log("table created");
            // insert records into table storage one by one
            for (var i = 0; i < results.rows.length; i++) {
                var entity = {
                    "PartitionKey": results.rows[i][1],
                    "RowKey": results.rows[i][0],
                    "Value": results.rows[i][2]
                };
                $await(azure.insertEntityAsync(tableName, entity));
                console.log("entity inserted");
            }
            // send response
            console.log("all done");
            res.send(200, "All done!");
        }
    }
    catch (ex) {
        console.log(ex);
        res.send(500, "Internal error.");
    }
}));

If we compare this with the previous code, we will find it has become more readable and much easier to understand. It is now easy to see what this function does, even without any comments. When the user goes to the URL "/was/copyRecords" we execute the function above. The code would be like this:

app.get("/was/copyRecords", function (req, res) {
    copyRecords(req, res).start();
});

And below are the logs printed in the local compute emulator console (shown as a screenshot in the original post). As we can see, the functions executed one by one, and finally the response went back to the browser.

Scaffold Functions in Wind

Wind provides not only the async flow control and compile functions, but many scaffold methods as well, which let us build our async code more easily. I'm going to introduce some basic scaffold functions here. In the code above I created some functions wrapped around the original async functions, such as open database, create table, etc. All of them are very similar: create a task using Wind.Async.Task.create and return the error or result object through the Task.complete function. In fact, Wind provides functions that create task objects from the original async functions directly. If the original async function has only a callback parameter, we can use the Wind.Async.Binding.fromCallback method to get the task object. For example, the code below returns a task object wrapping the file existence check function.
var Wind = require("wind");
var fs = require("fs");

fs.existsAsync = Wind.Async.Binding.fromCallback(fs.exists);

In Node.js a very popular async function pattern is that the first parameter of the callback function represents the error object, and the other parameters are the return values. In this case we can use another built-in function in Wind named Wind.Async.Binding.fromStandard. For example, the open database function can be created with the code below.

sql.openAsync = Wind.Async.Binding.fromStandard(sql.open);

/*
sql.openAsync = function (connectionString) {
    return Wind.Async.Task.create(function (t) {
        sql.open(connectionString, function (error, conn) {
            if (error) {
                t.complete("failure", error);
            }
            else {
                t.complete("success", conn);
            }
        });
    });
};
*/

When I was testing the scaffold functions under Wind.Async.Binding I found that some functions, such as the Azure SDK insert entity function, could not be processed correctly. So I personally suggest writing the wrapper methods manually.

Another scaffold feature in Wind is parallel task coordination. In this example, the steps of opening the database, retrieving records and recreating the table should be invoked one by one, but the copy from database to table storage can be executed in parallel. In Wind there's a scaffold function named Task.whenAll which can be used here. Task.whenAll accepts a list of tasks and creates a new task which completes only when all tasks have completed, or when any error occurs. For example, in the code below I used Task.whenAll to make all copy operations execute at the same time.

var copyRecordsInParallel = eval(Wind.compile("async", function (req, res) {
    try {
        // connect to the windows azure sql database
        var conn = $await(sql.openAsync(connectionString));
        console.log("connection opened");
        // retrieve all records from the database
        var results = $await(sql.queryAsync(conn, "SELECT * FROM [Resource]"));
        console.log("records selected. count = %d", results.rows.length);
        if (results.rows.length > 0) {
            // recreate the table
            $await(azure.recreateTableAsync(tableName));
            console.log("table created");
            // insert records into table storage in parallel
            var tasks = new Array(results.rows.length);
            for (var i = 0; i < results.rows.length; i++) {
                var entity = {
                    "PartitionKey": results.rows[i][1],
                    "RowKey": results.rows[i][0],
                    "Value": results.rows[i][2]
                };
                tasks[i] = azure.insertEntityAsync(tableName, entity);
            }
            $await(Wind.Async.Task.whenAll(tasks));
            // send response
            console.log("all done");
            res.send(200, "All done!");
        }
    }
    catch (ex) {
        console.log(ex);
        res.send(500, "Internal error.");
    }
}));

app.get("/was/copyRecordsInParallel", function (req, res) {
    copyRecordsInParallel(req, res).start();
});

Besides task creation and coordination, Wind supports cancellation, so that we can send a cancellation signal to tasks. It also handles exceptions, which means any exception thrown inside a task is reported to the caller function — as the try/catch around the $await calls in copyRecords above shows.

Summary

In this post I introduced a Node.js module named Wind, created by my friend Jeff Zhao. As you can see, unlike other async libraries and frameworks, and adopting ideas from F# and C#, Wind utilizes runtime code generation to make it easier to write async, callback-based functions in a sync-styled way.
By using Wind there will be almost no callbacks, and the code will be very easy to understand. Wind is still being developed and improved; there might be some problems, but the author, Jeff, will be very happy and enthusiastic to hear about your problems, feedback, suggestions and comments. You can contact Jeff by:
- Email: [email protected]
- Group: https://groups.google.com/d/forum/windjs
- GitHub: https://github.com/JeffreyZhao/wind/issues

Source code can be downloaded here.

Hope this helps,
Shaun

All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.


  • How to tell endianness from this output?

    - by Nick Rosencrantz
I'm running this example program and I'm supposed to be able to tell from the output what machine type it is. I'm certain it comes from inspecting one or two values, but how should I perform this inspection?

/* pointers.c - Test pointers
 * Written 2012 by F Lundevall
 * Copyright abandoned. This file is in the public domain.
 *
 * To make this program work on as many systems as possible,
 * addresses are converted to unsigned long when printed.
 * The 'l' in formatting-codes %ld and %lx means a long operand.
 */

#include <stdio.h>
#include <stdlib.h>

int * ip;  /* Declare a pointer to int, a.k.a. int pointer. */
char * cp; /* Pointer to char, a.k.a. char pointer. */

/* Declare fp as a pointer to function, where that function
 * has one parameter of type int and returns an int.
 * Use cdecl to get the syntax right, http://cdecl.org/
 */
int ( *fp )( int );

int val1 = 111111;
int val2 = 222222;

int ia[ 17 ];  /* Declare an array of 17 ints, numbered 0 through 16. */
char ca[ 17 ]; /* Declare an array of 17 chars. */

int fun( int parm )
{
    printf( "Function fun called with parameter %d\n", parm );
    return( parm + 1 );
}

/* Main function. */
int main()
{
    printf( "Message PT.01 from pointers.c: Hello, pointy World!\n" );

    /* Do some assignments. */
    ip = &val1;
    cp = &val2; /* The compiler should warn you about this. */
    fp = fun;
    ia[ 0 ] = 11; /* First element. */
    ia[ 1 ] = 17;
    ia[ 2 ] = 3;
    ia[ 16 ] = 58; /* Last element. */
    ca[ 0 ] = 11; /* First element. */
    ca[ 1 ] = 17;
    ca[ 2 ] = 3;
    ca[ 16 ] = 58; /* Last element. */

    printf( "PT.02: val1: stored at %lx (hex); value is %d (dec), %x (hex)\n", (long) &val1, val1, val1 );
    printf( "PT.03: val2: stored at %lx (hex); value is %d (dec), %x (hex)\n", (long) &val2, val2, val2 );
    printf( "PT.04: ip: stored at %lx (hex); value is %ld (dec), %lx (hex)\n", (long) &ip, (long) ip, (long) ip );
    printf( "PT.05: Dereference pointer ip and we find: %d \n", *ip );
    printf( "PT.06: cp: stored at %lx (hex); value is %ld (dec), %lx (hex)\n", (long) &cp, (long) cp, (long) cp );
    printf( "PT.07: Dereference pointer cp and we find: %d \n", *cp );

    *ip = 1234;
    printf( "\nPT.08: Executed *ip = 1234; \n" );
    printf( "PT.09: val1: stored at %lx (hex); value is %d (dec), %x (hex)\n", (long) &val1, val1, val1 );
    printf( "PT.10: ip: stored at %lx (hex); value is %ld (dec), %lx (hex)\n", (long) &ip, (long) ip, (long) ip );
    printf( "PT.11: Dereference pointer ip and we find: %d \n", *ip );
    printf( "PT.12: val1: stored at %lx (hex); value is %d (dec), %x (hex)\n", (long) &val1, val1, val1 );

    *cp = 1234; /* The compiler should warn you about this. */
    printf( "\nPT.13: Executed *cp = 1234; \n" );
    printf( "PT.14: val2: stored at %lx (hex); value is %d (dec), %x (hex)\n", (long) &val2, val2, val2 );
    printf( "PT.15: cp: stored at %lx (hex); value is %ld (dec), %lx (hex)\n", (long) &cp, (long) cp, (long) cp );
    printf( "PT.16: Dereference pointer cp and we find: %d \n", *cp );
    printf( "PT.17: val2: stored at %lx (hex); value is %d (dec), %x (hex)\n", (long) &val2, val2, val2 );

    ip = ia;
    printf( "\nPT.18: Executed ip = ia; \n" );
    printf( "PT.19: ia[0]: stored at %lx (hex); value is %d (dec), %x (hex)\n", (long) &ia[0], ia[0], ia[0] );
    printf( "PT.20: ia[1]: stored at %lx (hex); value is %d (dec), %x (hex)\n", (long) &ia[1], ia[1], ia[1] );
    printf( "PT.21: ip: stored at %lx (hex); value is %ld (dec), %lx (hex)\n", (long) &ip, (long) ip, (long) ip );
    printf( "PT.22: Dereference pointer ip and we find: %d \n", *ip );

    ip = ip + 1; /* add 1 to pointer */
    printf( "\nPT.23: Executed ip = ip + 1; \n" );
    printf( "PT.24: ip: stored at %lx (hex); value is %ld (dec), %lx (hex)\n", (long) &ip, (long) ip, (long) ip );
    printf( "PT.25: Dereference pointer ip and we find: %d \n", *ip );

    cp = ca;
    printf( "\nPT.26: Executed cp = ca; \n" );
    printf( "PT.27: ca[0]: stored at %lx (hex); value is %d (dec), %x (hex)\n", (long) &ca[0], ca[0], ca[0] );
    printf( "PT.28: ca[1]: stored at %lx (hex); value is %d (dec), %x (hex)\n", (long) &ca[1], ca[1], ca[1] );
    printf( "PT.29: cp: stored at %lx (hex); value is %ld (dec), %lx (hex)\n", (long) &cp, (long) cp, (long) cp );
    printf( "PT.30: Dereference pointer cp and we find: %d \n", *cp );

    cp = cp + 1; /* add 1 to pointer */
    printf( "\nPT.31: Executed cp = cp + 1; \n" );
    printf( "PT.32: cp: stored at %lx (hex); value is %ld (dec), %lx (hex)\n", (long) &cp, (long) cp, (long) cp );
    printf( "PT.33: Dereference pointer cp and we find: %d \n", *cp );

    ip = ca; /* The compiler should warn you about this. */
    printf( "\nPT.34: Executed ip = ca; \n" );
    printf( "PT.35: ca[0]: stored at %lx (hex); value is %d (dec), %x (hex)\n", (long) &ca[0], ca[0], ca[0] );
    printf( "PT.36: ca[1]: stored at %lx (hex); value is %d (dec), %x (hex)\n", (long) &ca[1], ca[1], ca[1] );
    printf( "PT.37: ip: stored at %lx (hex); value is %ld (dec), %lx (hex)\n", (long) &ip, (long) ip, (long) ip );
    printf( "PT.38: Dereference pointer ip and we find: %d \n", *ip );

    cp = ia; /* The compiler should warn you about this. */
    printf( "\nPT.39: Executed cp = ia; \n" );
    printf( "PT.40: cp: stored at %lx (hex); value is %ld (dec), %lx (hex)\n", (long) &cp, (long) cp, (long) cp );
    printf( "PT.41: Dereference pointer cp and we find: %d \n", *cp );

    printf( "\nPT.42: fp: stored at %lx (hex); value is %ld (dec), %lx (hex)\n", (long) &fp, (long) fp, (long) fp );
    printf( "PT.43: Dereference fp and see what happens.\n" );
    val1 = (*fp)(42);
    printf( "PT.44: Executed val1 = (*fp)(42); \n" );
    printf( "PT.45: val1: stored at %lx (hex); value is %d (dec), %x (hex)\n", (long) &val1, val1, val1 );

    return( 0 );
}

Output:

Message PT.01 from pointers.c: Hello, pointy World!
PT.02: val1: stored at 21e50 (hex); value is 111111 (dec), 1b207 (hex)
PT.03: val2: stored at 21e54 (hex); value is 222222 (dec), 3640e (hex)
PT.04: ip: stored at 21eb8 (hex); value is 138832 (dec), 21e50 (hex)
PT.05: Dereference pointer ip and we find: 111111
PT.06: cp: stored at 21e6c (hex); value is 138836 (dec), 21e54 (hex)
PT.07: Dereference pointer cp and we find: 0

PT.08: Executed *ip = 1234;
PT.09: val1: stored at 21e50 (hex); value is 1234 (dec), 4d2 (hex)
PT.10: ip: stored at 21eb8 (hex); value is 138832 (dec), 21e50 (hex)
PT.11: Dereference pointer ip and we find: 1234
PT.12: val1: stored at 21e50 (hex); value is 1234 (dec), 4d2 (hex)

PT.13: Executed *cp = 1234;
PT.14: val2: stored at 21e54 (hex); value is -771529714 (dec), d203640e (hex)
PT.15: cp: stored at 21e6c (hex); value is 138836 (dec), 21e54 (hex)
PT.16: Dereference pointer cp and we find: -46
PT.17: val2: stored at 21e54 (hex); value is -771529714 (dec), d203640e (hex)

PT.18: Executed ip = ia;
PT.19: ia[0]: stored at 21e74 (hex); value is 11 (dec), b (hex)
PT.20: ia[1]: stored at 21e78 (hex); value is 17 (dec), 11 (hex)
PT.21: ip: stored at 21eb8 (hex); value is 138868 (dec), 21e74 (hex)
PT.22: Dereference pointer ip and we find: 11

PT.23: Executed ip = ip + 1;
PT.24: ip: stored at 21eb8 (hex); value is 138872 (dec), 21e78 (hex)
PT.25: Dereference pointer ip and we find: 17

PT.26: Executed cp = ca;
PT.27: ca[0]: stored at 21e58 (hex); value is 11 (dec), b (hex)
PT.28: ca[1]: stored at 21e59 (hex); value is 17 (dec), 11 (hex)
PT.29: cp: stored at 21e6c (hex); value is 138840 (dec), 21e58 (hex)
PT.30: Dereference pointer cp and we find: 11

PT.31: Executed cp = cp + 1;
PT.32: cp: stored at 21e6c (hex); value is 138841 (dec), 21e59 (hex)
PT.33: Dereference pointer cp and we find: 17

PT.34: Executed ip = ca;
PT.35: ca[0]: stored at 21e58 (hex); value is 11 (dec), b (hex)
PT.36: ca[1]: stored at 21e59 (hex); value is 17 (dec), 11 (hex)
PT.37: ip: stored at 21eb8 (hex); value is 138840 (dec), 21e58 (hex)
PT.38: Dereference pointer ip and we find: 185664256

PT.39: Executed cp = ia;
PT.40: cp: stored at 21e6c (hex); value is 138868 (dec), 21e74 (hex)
PT.41: Dereference pointer cp and we find: 0

PT.42: fp: stored at 21e70 (hex); value is 69288 (dec), 10ea8 (hex)
PT.43: Dereference fp and see what happens.
Function fun called with parameter 42
PT.44: Executed val1 = (*fp)(42);
PT.45: val1: stored at 21e50 (hex); value is 43 (dec), 2b (hex)
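An aside that is not part of the original question, but mirrors how the output answers it: PT.13/PT.14 show that storing 1234 (0x4d2, which truncates to the single byte 0xd2) through the char pointer cp changed val2 from 0x3640e to 0xd203640e, so the byte at the int's lowest address is its most significant byte; PT.38 agrees (bytes 0x0b, 0x11, 0x03 read back as the int 0x0b110300 = 185664256). That is big-endian behavior. Here is a minimal JavaScript sketch of the same one-value inspection, using two typed-array views over one buffer in place of the int/char aliasing:

// Put a known 32-bit value in memory, then look at which byte
// ends up at the lowest address (bytes[0]).
var words = new Uint32Array([0x11223344]);
var bytes = new Uint8Array(words.buffer);

// 0x44 first => least significant byte at lowest address => little-endian
// 0x11 first => most significant byte at lowest address => big-endian
console.log(bytes[0] === 0x44 ? "little-endian" : "big-endian");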


  • How can I reset addAttributeToFilter in Magento searches

    - by Bobby
I'm having problems getting the addAttributeToFilter function to behave within a loop in Magento. I have test data in my store to support searches for all of the following data:

$attributeSelections = array(
    array('size' => 44, 'color' => 67, 'manufacturer' => 17),
    array('size' => 43, 'color' => 69, 'manufacturer' => 17),
    array('size' => 42, 'color' => 70, 'manufacturer' => 17));

And my code to search through these combinations:

foreach ($attributeSelections as $selection) {
    $searcher = Mage::getSingleton('catalogsearch/advanced')->getProductCollection();
    foreach ($selection as $k => $v) {
        $searcher->addAttributeToFilter("$k", array('eq' => "$v"));
        echo "$k: $v<br />";
    }
    $result = $searcher->getData();
    print_r($result);
}

This loop gives the following results (slightly sanitised for viewing pleasure):

size: 44
color: 67
manufacturer: 17
Array ( [0] => Array ( [entity_id] => 2965 [entity_type_id] => 4 [attribute_set_id] => 28 [type_id] => simple [sku] => 1006-0001 [size] => 44 [color] => 67 [manufacturer] => 17 ) )

size: 43
color: 69
manufacturer: 17
Array ( [0] => Array ( [entity_id] => 2965 [entity_type_id] => 4 [attribute_set_id] => 28 [type_id] => simple [sku] => 1006-0001 [size] => 44 [color] => 67 [manufacturer] => 17 ) )

size: 42
color: 70
manufacturer: 17
Array ( [0] => Array ( [entity_id] => 2965 [entity_type_id] => 4 [attribute_set_id] => 28 [type_id] => simple [sku] => 1006-0001 [size] => 44 [color] => 67 [manufacturer] => 17 ) )

So my loop is functioning and generating the search. However, the values fed into addAttributeToFilter on the first iteration of the loop seem to remain stored for every subsequent search. I've tried clearing my search object, for example unset($searcher) and unset($result). I've also tried Magento functions such as getNewEmptyItem(), resetData(), distinct() and clear(), but none have the desired effect. Basically, what I am trying to do is check for duplicate products before my script attempts to programmatically create a product with these attribute combinations. The array of attribute selections may be of varying sizes, hence the need for a loop. I would be very appreciative if anyone might be able to shed some light on my problem.


  • Tomcat v7.0 failed to start on Eclipse

    - by Questions
Whenever I run a program in Eclipse, Tomcat fails to start. I get a dialog box with this message: Server Tomcat v7.0 Server at localhost failed to start. In the console, I am getting these messages:

Nov 17, 2011 11:56:04 AM org.apache.catalina.core.AprLifecycleListener init
INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: C:\Program Files\Java\jre6\bin;C:\WINDOWS\Sun\Java\bin;C:\WINDOWS\system32;C:\WINDOWS;C:/Program Files/Java/jre6/bin/client;C:/Program Files/Java/jre6/bin;C:/Program Files/Java/jre6/lib/i386;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\Program Files\Intel\WiFi\bin\;C:\Program Files\Java\jdk1.6.0_01\bin;C:\Program Files\Java\jre1.6.0_01\bin;C:\ILOG\CPLEX_Studio_Preview122\opl\bin\x86_win32;C:\ILOG\CPLEX_Studio_Preview122\opl\oplide\;C:\ILOG\CPLEX_Studio_Preview122\cplex\bin\x86_win32;C:\ILOG\CPLEX_Studio_Preview122\cpoptimizer\bin\x86_win32;;D:\JSP_Setup\eclipse-jee-helios-SR2-win32\eclipse;;.
Nov 17, 2011 11:56:04 AM org.apache.tomcat.util.digester.SetPropertiesRule begin
WARNING: [SetPropertiesRule]{Server/Service/Engine/Host/Context} Setting property 'source' to 'org.eclipse.jst.jee.server:servletConfigTest' did not find a matching property.
Nov 17, 2011 11:56:04 AM org.apache.tomcat.util.digester.SetPropertiesRule begin
WARNING: [SetPropertiesRule]{Server/Service/Engine/Host/Context} Setting property 'source' to 'org.eclipse.jst.jee.server:BeerAdvice3' did not find a matching property.
Nov 17, 2011 11:56:04 AM org.apache.tomcat.util.digester.SetPropertiesRule begin
WARNING: [SetPropertiesRule]{Server/Service/Engine/Host/Context} Setting property 'source' to 'org.eclipse.jst.jee.server:BeerAdvice1' did not find a matching property.
Nov 17, 2011 11:56:05 AM org.apache.coyote.http11.Http11Protocol init
INFO: Initializing Coyote HTTP/1.1 on http-8080
Nov 17, 2011 11:56:05 AM org.apache.coyote.ajp.AjpProtocol init
INFO: Initializing Coyote AJP/1.3 on ajp-8009
Nov 17, 2011 11:56:05 AM org.apache.catalina.startup.Catalina load
INFO: Initialization processed in 682 ms
Nov 17, 2011 11:56:05 AM org.apache.catalina.core.StandardService startInternal
INFO: Starting service Catalina
Nov 17, 2011 11:56:05 AM org.apache.catalina.core.StandardEngine startInternal
INFO: Starting Servlet Engine: Apache Tomcat/7.0.2
java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
    at java.lang.reflect.Method.invoke(Unknown Source)
    at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:288)
    at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:415)
Caused by: java.lang.IllegalArgumentException: Invalid <url-pattern> entry_checking in servlet mapping
    at org.apache.catalina.core.StandardContext.addServletMapping(StandardContext.java:2823)
    at org.apache.catalina.core.StandardContext.addServletMapping(StandardContext.java:2799)
    at org.apache.catalina.deploy.WebXml.configureContext(WebXml.java:1278)
    at org.apache.catalina.startup.ContextConfig.webConfig(ContextConfig.java:1255)
    at org.apache.catalina.startup.ContextConfig.configureStart(ContextConfig.java:878)
    at org.apache.catalina.startup.ContextConfig.lifecycleEvent(ContextConfig.java:313)
    at org.apache.catalina.util.LifecycleSupport.fireLifecycleEvent(LifecycleSupport.java:119)
    at org.apache.catalina.util.LifecycleBase.fireLifecycleEvent(LifecycleBase.java:89)
    at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:4667)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:139)
    at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:988)
    at org.apache.catalina.core.StandardHost.startInternal(StandardHost.java:771)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:139)
    at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:988)
    at org.apache.catalina.core.StandardEngine.startInternal(StandardEngine.java:275)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:139)
    at org.apache.catalina.core.StandardService.startInternal(StandardService.java:427)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:139)
    at org.apache.catalina.core.StandardServer.startInternal(StandardServer.java:649)
    at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:139)
    at org.apache.catalina.startup.Catalina.start(Catalina.java:585)
    ... 6 more

I had run the code successfully before, but now it gives these problems. Also, I went through this post http://forums.opensuse.org/applications/391114-tomcat6-eclipse-not-working.html but it is not helping me. Please help at the earliest.


  • jQuery autocomplete. Doesn't reveal existing matches.

    - by Alexander
Hello fellow engineers. I have come across a problem I just can't solve. I am using the autocomplete plugin for jQuery on an input. The HTML looks something like this:

<tr id="row_house" class="no-display">
    <td class="col_num">4</td>
    <td class="col_label">House Number</td>
    <td class="col_data">
        <input type="text" title="House Number" name="house" id="house"/>
        <button class="pretty_button ui-state-default ui-corner-all button-finish">Get house info</button>
    </td>
</tr>

I am sure that this is the only id="house" field. Other fields before this one work fine with autocomplete, and it's basically the same algorithm (other variables, other data, other calls). So why doesn't it work as it should with the following init code:

$("#house").autocomplete(
    ["1/4","6","6/1","6/4","8","8/1","8/5","10","10/1","10/3","10/4","12","12/1","12/5","12/6","14","14/1","15","15/1","15/2","15/4","15/5","16","16/1","16/2","16/21","16/2B","16/3","16/4","17","17/1","17/2","17/4","17/5","17/6","17/7","17/8","18","18/1","18/2","18/3","18/5","18/95","19","19/1","19/2","19/3","19/4","19/5","19/6","19/7","19/8","20","20/1","20/2","20/3","20/4","21","21/1","21/2","21/3","21/4","22","22/9","23","23/2","23/4","24","24/1","24/2","24/3","24/A","25","25/1","25/10","25/2","25/4","25/5","25/6","25/7","25/8","25/9","26","26/1","26/6","27","27/2","28","28/1","29","29/2","29/3","29/4","30","30/1","30/2","30/3","31","31/1","31/3","32/A","33","34","34/1","34/11","34/2","34/3","35","35/1","35/2","35/4","36","36/1","36/A","37","37/1","37/2","38","38/1","38/2","39/1","39/2","39/3","39/4","40","40/1","41","41/2","42","43","44","45","45/1","45/10","45/11","45/12","45/13","45/14","45/15","45/16","45/17","45/2","45/3","45/6","45/7","45/8","45/9","46","47","47/2","49","49/1","50","51","51/1","51/2","52","53","54","55/7","66","109","122","190/8","412"],
    {minChars: 1, mustMatch: true}
).result(function (event, result, formatted) {
    var found = false;
    // HChouses is the same array used for init, but each entry is paired with a database ID.
    for (var index = 0; index < HChouses.length; index++)
        if (HChouses[index][0] == result) {
            found = true;
            HChouseId = HChouses[index][1];
            $("#row_house .button-finish").click(function () {
                QueryServer("HouseConnect", "FillData", true, HChouseId); // this performs an AJAX request
            });
            break;
        }
    if (!found)
        $("#row_house .button-finish").unbind("click");
});

Each time I start typing (say I press the "1" key), the text appears and gets deleted instantly. Only rarely, after repeated presses, do I get the list (although much shorter than it should be), and if I then press a second digit, the whole thing disappears again. P.S. I use Firefox 3.6.3 for development.
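As a point of comparison — not part of the original question — here is a minimal sketch of the same field wired up with jQuery UI's autocomplete widget, a different plugin from the legacy one used above (which takes an array plus options and exposes .result()), so the option names differ; the shortened source array is illustrative:

// Assumes jQuery and jQuery UI are loaded on the page.
$(function () {
    $("#house").autocomplete({
        source: ["1/4", "6", "6/1", "6/4", "8"], // shortened illustrative list
        minLength: 1,                            // counterpart of minChars
        select: function (event, ui) {
            console.log("picked: " + ui.item.value);
        }
    });
});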

    Read the article

  • External USB attached drive works in Windows XP but not in Windows 7. How to fix?

    - by irrational John
    Earlier this week I purchased this "N52300 EZQuest Pro" external hard drive enclosure from here. I can connect the enclosure using USB 2.0 and access the files in both NTFS partitions on the MBR-partitioned drive when I use either Windows XP (SP3) or Mac OS X 10.6. So it works as expected in XP and Snow Leopard. However, the enclosure does not work in Windows 7 (Home Premium), either 64-bit or 32-bit, or in Ubuntu 10.04 (kernel 2.6.32-23-generic).

I'm thinking this must be a Windows 7 driver problem, because the enclosure works in XP and Snow Leopard. I do know that no special drivers are required to use this enclosure. It is supported by the USB mass storage drivers included with XP and OS X. It should also work fine using the mass storage support in Windows 7, no? FWIW, I have also tried 32-bit Windows 7 on both my desktop, a Gigabyte GA-965P-DS3 with a Pentium Dual-Core E6500 @ 2.93GHz, and on my early 2008 MacBook. I see the same failure in both cases that I see with 64-bit Windows 7, so it doesn't appear to be specific to one hardware platform.

I'm hoping someone out there can help me either get the enclosure to work in Windows 7 or convince me that the enclosure hardware is bad and should be RMAed. At the moment, though, an RMA seems pointless, since this appears to be a (Windows 7) device driver problem. I have tried to track down any updates to the mass storage drivers included with Windows 7 but have so far come up empty. Heck, I can't even figure out how to file a bug report with Microsoft, since apparently the grace period for Windows 7 email support is only a few months. I came across a link to some USB troubleshooting steps in another question. I haven't had a chance to look over the suggestions on that site or try them yet. Maybe tomorrow if I have time ... ;-)

I'll finish up with some more details about the problem. When I connect the enclosure to Windows 7 using USB, at first it appears everything worked. Windows detects the drive and installs a driver for it. Looking in Device Manager, there is an entry under the Hard Drives section with the title "Hitachi HDT721010SLA360 USB Device". When you open Windows Disk Management the first time after the enclosure has been attached, the drive appears as "Not initialized" and I'm prompted to initialize it. This is bogus. After all, the drive worked fine in XP, so I know it has already been initialized, partitioned, and formatted. So of course I never try to initialize it "again". (It's a 1 TB drive and I don't want to lose the data on it.) Except for this first time, the drive never shows up in Disk Management again unless I uninstall the "Hitachi HDT721010SLA360 USB Device" entry under Hard Drives, unplug, and then replug the enclosure. If I do that, then the process in the previous paragraph repeats.

In Ubuntu the enclosure never shows up at the file system level at all. Below are an excerpt from kern.log and an excerpt from the output of lsusb -v after attaching the enclosure. It appears that Ubuntu at first recognizes the enclosure and attempts to attach it, but encounters errors which prevent it from doing so. Unfortunately, I don't know whether any of this info is useful or not.
Excerpt from kern.log:

[ 2684.240015] usb 1-2: new high speed USB device using ehci_hcd and address 22
[ 2684.393618] usb 1-2: configuration #1 chosen from 1 choice
[ 2684.395399] scsi17 : SCSI emulation for USB Mass Storage devices
[ 2684.395570] usb-storage: device found at 22
[ 2684.395572] usb-storage: waiting for device to settle before scanning
[ 2689.390412] usb-storage: device scan complete
[ 2689.390894] scsi 17:0:0:0: Direct-Access Hitachi HDT721010SLA360 ST6O PQ: 0 ANSI: 4
[ 2689.392237] sd 17:0:0:0: Attached scsi generic sg7 type 0
[ 2689.395269] sd 17:0:0:0: [sde] 1953525168 512-byte logical blocks: (1.00 TB/931 GiB)
[ 2689.395632] sd 17:0:0:0: [sde] Write Protect is off
[ 2689.395636] sd 17:0:0:0: [sde] Mode Sense: 11 00 00 00
[ 2689.395639] sd 17:0:0:0: [sde] Assuming drive cache: write through
[ 2689.412003] sd 17:0:0:0: [sde] Assuming drive cache: write through
[ 2689.412009] sde: sde1 sde2
[ 2689.455759] sd 17:0:0:0: [sde] Assuming drive cache: write through
[ 2689.455765] sd 17:0:0:0: [sde] Attached SCSI disk
[ 2692.620017] usb 1-2: reset high speed USB device using ehci_hcd and address 22
[ 2707.740014] usb 1-2: device descriptor read/64, error -110
[ 2722.970103] usb 1-2: device descriptor read/64, error -110
[ 2723.200027] usb 1-2: reset high speed USB device using ehci_hcd and address 22
[ 2738.320019] usb 1-2: device descriptor read/64, error -110
[ 2753.550024] usb 1-2: device descriptor read/64, error -110
[ 2753.780020] usb 1-2: reset high speed USB device using ehci_hcd and address 22
[ 2758.810147] usb 1-2: device descriptor read/8, error -110
[ 2763.940142] usb 1-2: device descriptor read/8, error -110
[ 2764.170014] usb 1-2: reset high speed USB device using ehci_hcd and address 22
[ 2769.200141] usb 1-2: device descriptor read/8, error -110
[ 2774.330137] usb 1-2: device descriptor read/8, error -110
[ 2774.440069] usb 1-2: USB disconnect, address 22
[ 2774.440503] sd 17:0:0:0: Device offlined - not ready after error recovery
[ 2774.590023] usb 1-2: new high speed USB device using ehci_hcd and address 23
[ 2789.710020] usb 1-2: device descriptor read/64, error -110
[ 2804.940020] usb 1-2: device descriptor read/64, error -110
[ 2805.170026] usb 1-2: new high speed USB device using ehci_hcd and address 24
[ 2820.290019] usb 1-2: device descriptor read/64, error -110
[ 2835.520027] usb 1-2: device descriptor read/64, error -110
[ 2835.750018] usb 1-2: new high speed USB device using ehci_hcd and address 25
[ 2840.780085] usb 1-2: device descriptor read/8, error -110
[ 2845.910079] usb 1-2: device descriptor read/8, error -110
[ 2846.140023] usb 1-2: new high speed USB device using ehci_hcd and address 26
[ 2851.170112] usb 1-2: device descriptor read/8, error -110
[ 2856.300077] usb 1-2: device descriptor read/8, error -110
[ 2856.410027] hub 1-0:1.0: unable to enumerate USB device on port 2
[ 2856.730033] usb 3-2: new full speed USB device using uhci_hcd and address 11
[ 2871.850017] usb 3-2: device descriptor read/64, error -110
[ 2887.080014] usb 3-2: device descriptor read/64, error -110
[ 2887.310011] usb 3-2: new full speed USB device using uhci_hcd and address 12
[ 2902.430021] usb 3-2: device descriptor read/64, error -110
[ 2917.660013] usb 3-2: device descriptor read/64, error -110
[ 2917.890016] usb 3-2: new full speed USB device using uhci_hcd and address 13
[ 2922.911623] usb 3-2: device descriptor read/8, error -110
[ 2928.051753] usb 3-2: device descriptor read/8, error -110
[ 2928.280013] usb 3-2: new full speed USB device using uhci_hcd and address 14
[ 2933.301876] usb 3-2: device descriptor read/8, error -110
[ 2938.431993] usb 3-2: device descriptor read/8, error -110
[ 2938.540073] hub 3-0:1.0: unable to enumerate USB device on port 2

Excerpt from lsusb -v:

Bus 001 Device 017: ID 0dc4:0000 Macpower Peripherals, Ltd
Device Descriptor:
  bLength                18
  bDescriptorType         1
  bcdUSB               2.00
  bDeviceClass            0 (Defined at Interface level)
  bDeviceSubClass         0
  bDeviceProtocol         0
  bMaxPacketSize0        64
  idVendor           0x0dc4 Macpower Peripherals, Ltd
  idProduct          0x0000
  bcdDevice            0.01
  iManufacturer           1 EZ QUEST
  iProduct                2 USB Mass Storage
  iSerial                 3 220417
  bNumConfigurations      1
  Configuration Descriptor:
    bLength                 9
    bDescriptorType         2
    wTotalLength           32
    bNumInterfaces          1
    bConfigurationValue     1
    iConfiguration          5 Config0
    bmAttributes         0xc0
      Self Powered
    MaxPower                0mA
    Interface Descriptor:
      bLength                 9
      bDescriptorType         4
      bInterfaceNumber        0
      bAlternateSetting       0
      bNumEndpoints           2
      bInterfaceClass         8 Mass Storage
      bInterfaceSubClass      6 SCSI
      bInterfaceProtocol     80 Bulk (Zip)
      iInterface              4 Interface0
      Endpoint Descriptor:
        bLength                 7
        bDescriptorType         5
        bEndpointAddress     0x01 EP 1 OUT
        bmAttributes            2
          Transfer Type            Bulk
          Synch Type               None
          Usage Type               Data
        wMaxPacketSize     0x0200 1x 512 bytes
        bInterval               0
      Endpoint Descriptor:
        bLength                 7
        bDescriptorType         5
        bEndpointAddress     0x81 EP 1 IN
        bmAttributes            2
          Transfer Type            Bulk
          Synch Type               None
          Usage Type               Data
        wMaxPacketSize     0x0200 1x 512 bytes
        bInterval               0
Device Qualifier (for other device speed):
  bLength                10
  bDescriptorType         6
  bcdUSB               2.00
  bDeviceClass            0 (Defined at Interface level)
  bDeviceSubClass         0
  bDeviceProtocol         0
  bMaxPacketSize0        64
  bNumConfigurations      1
Device Status:     0x0001
  Self Powered

Update: results using Firewire to connect. Today I received a 1394b 9-pin to 1394a 6-pin cable, which allowed me to connect the "EZQuest Pro" via Firewire. Everything works. When I use Firewire I can connect whether I'm using Windows 7 or Ubuntu 10.04. I even tried booting my Gigabyte desktop as an OS X 10.6.3 Hackintosh, and it worked there as well. (Though if I recall correctly, it also worked when using USB 2.0 and booting OS X on the desktop. Certainly it works with USB 2.0 and my MacBook.) I believe the firmware on the device is at the latest level available, v1.07. I base this on the excerpt below from the OS X System Profiler, which shows Firmware Revision: 0x107.

Bottom line: it's nice that the enclosure is actually usable when I connect with Firewire. But I am still searching for an answer as to why it does not work correctly when using USB 2.0 in Windows 7 (and Ubuntu, but really Windows 7 is my biggest concern).

OXFORD IDE Device 1:
  Manufacturer: EZ QUEST
  Model: 0x0
  GUID: 0x1D202E0220417
  Maximum Speed: Up to 800 Mb/sec
  Connection Speed: Up to 400 Mb/sec
  Sub-units:
    OXFORD IDE Device 1 Unit:
      Unit Software Version: 0x10483
      Unit Spec ID: 0x609E
      Firmware Revision: 0x107
      Product Revision Level: ST6O
      Sub-units:
        OXFORD IDE Device 1 SBP-LUN:
          Capacity: 1 TB (1,000,204,886,016 bytes)
          Removable Media: Yes
          BSD Name: disk3
          Partition Map Type: MBR (Master Boot Record)
          S.M.A.R.T. status: Not Supported

    Read the article

  • Unusually high dentry cache usage

    - by Wolfgang Stengel
    Problem

A CentOS machine with kernel 2.6.32 and 128 GB physical RAM ran into trouble a few days ago. The responsible system administrator tells me that the PHP-FPM application was no longer responding to requests in a timely manner due to swapping, and having seen in free that almost no memory was left, he chose to reboot the machine.

I know that free memory can be a confusing concept on Linux, and a reboot perhaps was the wrong thing to do. However, the mentioned administrator blames the PHP application (which I am responsible for) and refuses to investigate further. What I could find out on my own is this:

- Before the restart, the free memory (incl. buffers and cache) was only a couple of hundred MB.
- Before the restart, /proc/meminfo reported a Slab memory usage of around 90 GB (yes, GB).
- After the restart, the free memory was 119 GB, going down to around 100 GB within an hour as the PHP-FPM workers (about 600 of them) were coming back to life, each of them showing between 30 and 40 MB in the RES column in top (which has been this way for months and is perfectly reasonable given the nature of the PHP application). There is nothing else in the process list that consumes an unusual or noteworthy amount of RAM.
- After the restart, Slab memory was around 300 MB.

I have been monitoring the system ever since, and most notably the Slab memory is increasing in a straight line at a rate of about 5 GB per day. Free memory as reported by free and /proc/meminfo decreases at the same rate. Slab is currently at 46 GB. According to slabtop, most of it is used for dentry entries.

Free memory:

free -m
             total       used       free     shared    buffers     cached
Mem:        129048      76435      52612          0        144       7675
-/+ buffers/cache:      68615      60432
Swap:         8191          0       8191

Meminfo:

cat /proc/meminfo
MemTotal:       132145324 kB
MemFree:         53620068 kB
Buffers:           147760 kB
Cached:           8239072 kB
SwapCached:             0 kB
Active:          20300940 kB
Inactive:         6512716 kB
Active(anon):    18408460 kB
Inactive(anon):     24736 kB
Active(file):     1892480 kB
Inactive(file):   6487980 kB
Unevictable:         8608 kB
Mlocked:             8608 kB
SwapTotal:        8388600 kB
SwapFree:         8388600 kB
Dirty:              11416 kB
Writeback:              0 kB
AnonPages:       18436224 kB
Mapped:             94536 kB
Shmem:               6364 kB
Slab:            46240380 kB
SReclaimable:    44561644 kB
SUnreclaim:       1678736 kB
KernelStack:         9336 kB
PageTables:        457516 kB
NFS_Unstable:           0 kB
Bounce:                 0 kB
WritebackTmp:           0 kB
CommitLimit:     72364108 kB
Committed_AS:    22305444 kB
VmallocTotal:   34359738367 kB
VmallocUsed:       480164 kB
VmallocChunk:   34290830848 kB
HardwareCorrupted:      0 kB
AnonHugePages:   12216320 kB
HugePages_Total:     2048
HugePages_Free:      2048
HugePages_Rsvd:         0
HugePages_Surp:         0
Hugepagesize:        2048 kB
DirectMap4k:         5604 kB
DirectMap2M:      2078720 kB
DirectMap1G:    132120576 kB

Slabtop:

slabtop --once
 Active / Total Objects (% used)    : 225920064 / 226193412 (99.9%)
 Active / Total Slabs (% used)      : 11556364 / 11556415 (100.0%)
 Active / Total Caches (% used)     : 110 / 194 (56.7%)
 Active / Total Size (% used)       : 43278793.73K / 43315465.42K (99.9%)
 Minimum / Average / Maximum Object : 0.02K / 0.19K / 4096.00K

      OBJS     ACTIVE  USE OBJ SIZE     SLABS OBJ/SLAB CACHE SIZE NAME
 221416340 221416039   3%    0.19K  11070817       20  44283268K dentry
   1123443   1122739  99%    0.41K    124827        9    499308K fuse_request
   1122320   1122180  99%    0.75K    224464        5    897856K fuse_inode
    761539    754272  99%    0.20K     40081       19    160324K vm_area_struct
    437858    223259  50%    0.10K     11834       37     47336K buffer_head
    353353    347519  98%    0.05K      4589       77     18356K anon_vma_chain
    325090    324190  99%    0.06K      5510       59     22040K size-64
    146272    145422  99%    0.03K      1306      112      5224K size-32
    137625    137614  99%    1.02K     45875        3    183500K nfs_inode_cache
    128800    118407  91%    0.04K      1400       92      5600K anon_vma
     59101     46853  79%    0.55K      8443        7     33772K radix_tree_node
     52620     52009  98%    0.12K      1754       30      7016K size-128
     19359     19253  99%    0.14K       717       27      2868K sysfs_dir_cache
     10240      7746  75%    0.19K       512       20      2048K filp

VFS cache pressure:

cat /proc/sys/vm/vfs_cache_pressure
125

Swappiness:

cat /proc/sys/vm/swappiness
0

I know that unused memory is wasted memory, so this should not necessarily be a bad thing (especially given that 44 GB are shown as SReclaimable). However, apparently the machine experienced problems nonetheless, and I'm afraid the same will happen again in a few days when Slab surpasses 90 GB.

Questions

1. Am I correct in thinking that the Slab memory is always physical RAM, and that the number is already subtracted from the MemFree value?
2. Is such a high number of dentry entries normal? The PHP application has access to around 1.5 M files; however, most of them are archives, not being accessed at all by regular web traffic.
3. What could explain the fact that the number of cached inodes is much lower than the number of cached dentries? Should they not be related somehow?
4. If the system runs into memory trouble, should the kernel not free some of the dentries automatically? What could be a reason that this does not happen?
5. Is there any way to "look into" the dentry cache to see what all this memory is (i.e. what are the paths that are being cached)? Perhaps this points to some kind of memory leak, a symlink loop, or indeed to something the PHP application is doing wrong.
6. The PHP application code as well as all asset files are mounted via the GlusterFS network file system; could that have something to do with it?

Please keep in mind that I cannot investigate as root, only as a regular user, and that the administrator refuses to help. He won't even run the typical echo 2 > /proc/sys/vm/drop_caches test to see if the Slab memory is indeed reclaimable. Any insights into what could be going on and how I can investigate further would be greatly appreciated.
Updates

Some further diagnostic information.

Mounts:

cat /proc/self/mounts
rootfs / rootfs rw 0 0
proc /proc proc rw,relatime 0 0
sysfs /sys sysfs rw,relatime 0 0
devtmpfs /dev devtmpfs rw,relatime,size=66063000k,nr_inodes=16515750,mode=755 0 0
devpts /dev/pts devpts rw,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /dev/shm tmpfs rw,relatime 0 0
/dev/mapper/sysvg-lv_root / ext4 rw,relatime,barrier=1,data=ordered 0 0
/proc/bus/usb /proc/bus/usb usbfs rw,relatime 0 0
/dev/sda1 /boot ext4 rw,relatime,barrier=1,data=ordered 0 0
tmpfs /phptmp tmpfs rw,noatime,size=1048576k,nr_inodes=15728640,mode=777 0 0
tmpfs /wsdltmp tmpfs rw,noatime,size=1048576k,nr_inodes=15728640,mode=777 0 0
none /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0
cgroup /cgroup/cpuset cgroup rw,relatime,cpuset 0 0
cgroup /cgroup/cpu cgroup rw,relatime,cpu 0 0
cgroup /cgroup/cpuacct cgroup rw,relatime,cpuacct 0 0
cgroup /cgroup/memory cgroup rw,relatime,memory 0 0
cgroup /cgroup/devices cgroup rw,relatime,devices 0 0
cgroup /cgroup/freezer cgroup rw,relatime,freezer 0 0
cgroup /cgroup/net_cls cgroup rw,relatime,net_cls 0 0
cgroup /cgroup/blkio cgroup rw,relatime,blkio 0 0
/etc/glusterfs/glusterfs-www.vol /var/www fuse.glusterfs rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072 0 0
/etc/glusterfs/glusterfs-upload.vol /var/upload fuse.glusterfs rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072 0 0
sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0
172.17.39.78:/www /data/www nfs rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,port=38467,timeo=600,retrans=2,sec=sys,mountaddr=172.17.39.78,mountvers=3,mountport=38465,mountproto=tcp,local_lock=none,addr=172.17.39.78 0 0

Mount info:

cat /proc/self/mountinfo
16 21 0:3 / /proc rw,relatime - proc proc rw
17 21 0:0 / /sys rw,relatime - sysfs sysfs rw
18 21 0:5 / /dev rw,relatime - devtmpfs devtmpfs rw,size=66063000k,nr_inodes=16515750,mode=755
19 18 0:11 / /dev/pts rw,relatime - devpts devpts rw,gid=5,mode=620,ptmxmode=000
20 18 0:16 / /dev/shm rw,relatime - tmpfs tmpfs rw
21 1 253:1 / / rw,relatime - ext4 /dev/mapper/sysvg-lv_root rw,barrier=1,data=ordered
22 16 0:15 / /proc/bus/usb rw,relatime - usbfs /proc/bus/usb rw
23 21 8:1 / /boot rw,relatime - ext4 /dev/sda1 rw,barrier=1,data=ordered
24 21 0:17 / /phptmp rw,noatime - tmpfs tmpfs rw,size=1048576k,nr_inodes=15728640,mode=777
25 21 0:18 / /wsdltmp rw,noatime - tmpfs tmpfs rw,size=1048576k,nr_inodes=15728640,mode=777
26 16 0:19 / /proc/sys/fs/binfmt_misc rw,relatime - binfmt_misc none rw
27 21 0:20 / /cgroup/cpuset rw,relatime - cgroup cgroup rw,cpuset
28 21 0:21 / /cgroup/cpu rw,relatime - cgroup cgroup rw,cpu
29 21 0:22 / /cgroup/cpuacct rw,relatime - cgroup cgroup rw,cpuacct
30 21 0:23 / /cgroup/memory rw,relatime - cgroup cgroup rw,memory
31 21 0:24 / /cgroup/devices rw,relatime - cgroup cgroup rw,devices
32 21 0:25 / /cgroup/freezer rw,relatime - cgroup cgroup rw,freezer
33 21 0:26 / /cgroup/net_cls rw,relatime - cgroup cgroup rw,net_cls
34 21 0:27 / /cgroup/blkio rw,relatime - cgroup cgroup rw,blkio
35 21 0:28 / /var/www rw,relatime - fuse.glusterfs /etc/glusterfs/glusterfs-www.vol rw,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072
36 21 0:29 / /var/upload rw,relatime - fuse.glusterfs /etc/glusterfs/glusterfs-upload.vol rw,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072
37 21 0:30 / /var/lib/nfs/rpc_pipefs rw,relatime - rpc_pipefs sunrpc rw
39 21 0:31 / /data/www rw,relatime - nfs 172.17.39.78:/www rw,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,port=38467,timeo=600,retrans=2,sec=sys,mountaddr=172.17.39.78,mountvers=3,mountport=38465,mountproto=tcp,local_lock=none,addr=172.17.39.78

GlusterFS config:

cat /etc/glusterfs/glusterfs-www.vol
volume remote1
  type protocol/client
  option transport-type tcp
  option remote-host 172.17.39.71
  option ping-timeout 10
  option transport.socket.nodelay on # undocumented option for speed
  # http://gluster.org/pipermail/gluster-users/2009-September/003158.html
  option remote-subvolume /data/www
end-volume

volume remote2
  type protocol/client
  option transport-type tcp
  option remote-host 172.17.39.72
  option ping-timeout 10
  option transport.socket.nodelay on # undocumented option for speed
  # http://gluster.org/pipermail/gluster-users/2009-September/003158.html
  option remote-subvolume /data/www
end-volume

volume remote3
  type protocol/client
  option transport-type tcp
  option remote-host 172.17.39.73
  option ping-timeout 10
  option transport.socket.nodelay on # undocumented option for speed
  # http://gluster.org/pipermail/gluster-users/2009-September/003158.html
  option remote-subvolume /data/www
end-volume

volume remote4
  type protocol/client
  option transport-type tcp
  option remote-host 172.17.39.74
  option ping-timeout 10
  option transport.socket.nodelay on # undocumented option for speed
  # http://gluster.org/pipermail/gluster-users/2009-September/003158.html
  option remote-subvolume /data/www
end-volume

volume replicate1
  type cluster/replicate
  option lookup-unhashed off # off will reduce cpu usage, and network
  option local-volume-name 'hostname'
  subvolumes remote1 remote2
end-volume

volume replicate2
  type cluster/replicate
  option lookup-unhashed off # off will reduce cpu usage, and network
  option local-volume-name 'hostname'
  subvolumes remote3 remote4
end-volume

volume distribute
  type cluster/distribute
  subvolumes replicate1 replicate2
end-volume

volume iocache
  type performance/io-cache
  option cache-size 8192MB # default is 32MB
  subvolumes distribute
end-volume

volume writeback
  type performance/write-behind
  option cache-size 1024MB
  option window-size 1MB
  subvolumes iocache
end-volume

### Add io-threads for parallel requisitions
volume iothreads
  type performance/io-threads
  option thread-count 64 # default is 16
  subvolumes writeback
end-volume

volume ra
  type performance/read-ahead
  option page-size 2MB
  option page-count 16
  option force-atime-update no
  subvolumes iothreads
end-volume
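On question 5: without root access there is no direct way on this kernel to dump the cached paths themselves (that needs something like crash or SystemTap), but the growth rate can at least be tracked from world-readable /proc files. A minimal sketch, assuming /proc/slabinfo is readable on this CentOS kernel (it is root-only on some systems, hence the fallback):

#!/bin/sh
# Append a timestamped sample of slab/dentry usage once a minute.
# /proc/meminfo is always world-readable; /proc/slabinfo may not be.
while true; do
    date '+%s'
    grep -E '^(Slab|SReclaimable|SUnreclaim):' /proc/meminfo
    grep '^dentry ' /proc/slabinfo 2>/dev/null || echo 'dentry: /proc/slabinfo not readable'
    sleep 60
done >> slab-growth.log

Correlating the resulting log with application activity (deployments, cron jobs, GlusterFS traffic) may show whether the 5 GB/day dentry growth tracks anything the PHP application does.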

    Read the article

  • Failed to launch simulated application: iPhone Simulator failed to find the process ID of com.iAndApp.BlockPop

    - by Nicsoft
    Hello, I'm having an annoying problem that produces this message in the console: "Failed to launch simulated application: iPhone Simulator failed to find the process ID of com.iAndApp.BlockPop." When trying to Build and Run, the application builds fine. The simulator starts but doesn't start the application. However, it manages to do something, since the icon for the app is installed in the simulator. I have been searching for the answer and tried a couple of things, none of which worked. This happens for both new and old projects; that is, when I create a new project I receive the same error message. I've tried several projects, all of which get the same error, so it's not related to the code (and the successful builds prove it). Among other things, I have updated to Xcode 3.2.2 in order to try to solve the problem. I'm using Mac OS X 10.6.3. Here are the logs:

1. Console:

2010-05-30 17.20.39 SpringBoard[15713] Unable to create CFServerConnection. Telephony state may be incorrect.
2010-05-30 17.20.40 SpringBoard[15713] Unable to create CFServerConnection. Telephony state may be incorrect.
2010-05-30 17.20.40 SpringBoard[15713] Unable to create CFServerConnection. Telephony state may be incorrect.
2010-05-30 17.20.40 SpringBoard[15713] Unable to create CFServerConnection. Telephony state may be incorrect.
2010-05-30 17.20.40 SpringBoard[15713] Can't find the translation dictionary, loadTranslationDictionaries
2010-05-30 17.20.40 SpringBoard[15713] Unable to create CFServerConnection. Telephony state may be incorrect.
2010-05-30 17.20.40 SpringBoard[15713] Unable to create CFServerConnection. Telephony state may be incorrect.
2010-05-30 17.20.41 SpringBoard[15713] Launchd returned an unexpected type or didn't return a value for job label UIKitApplication:com.iAndApp.BlockPop[0x8abd] with job key PID
2010-05-30 17.20.41 SpringBoard[15713] Unable to create CFServerConnection. Telephony state may be incorrect.
2010-05-30 17.21.10 Xcode[15496] Error launching simulated application: Error Domain=DTiPhoneSimulatorErrorDomain Code=1 UserInfo=0x200edcc00 "iPhone Simulator failed to find the process ID of com.iAndApp.BlockPop."

2. From system.log:

May 30 17:20:39 Niklas-Johanssons-Mac-mini mobile_installationd[15712]: a0bc84e0 init_simulator_paths: No simulator root specified. Falling back to environment variable.
May 30 17:20:39: --- last message repeated 5 times ---
May 30 17:20:39 Niklas-Johanssons-Mac-mini mobile_installationd[15712]: b0081000 init_simulator_paths: No simulator root specified. Falling back to environment variable.
May 30 17:20:39: --- last message repeated 1 time ---
May 30 17:20:39 Niklas-Johanssons-Mac-mini mobile_installationd[15712]: b0081000 load_application_info: Could not load signer identity from /Users/Niklas/Library/Application Support/iPhone Simulator/3.0/Applications/1CD7E4BA-14D3-45A5-A05E-E552C04BCD4D/BlockPopLite.app/BlockPopLite
May 30 17:20:39 Niklas-Johanssons-Mac-mini mobile_installationd[15712]: b0081000 load_application_info: Could not load signer identity from /Users/Niklas/Library/Application Support/iPhone Simulator/3.0/Applications/62585F19-5FAD-4548-89DF-C9AE621FCCD7/SysSound.app/SysSound
May 30 17:20:39 Niklas-Johanssons-Mac-mini mobile_installationd[15712]: b0081000 load_application_info: Could not load signer identity from /Users/Niklas/Library/Application Support/iPhone Simulator/3.0/Applications/81AA51A5-7BFC-442F-BFF8-91E9C6EF13CD/BlockPop.app/BlockPop
May 30 17:20:39 Niklas-Johanssons-Mac-mini mobile_installationd[15712]: b0103000 load_application_info: Could not load signer identity from /Users/Niklas/Library/Application Support/iPhone Simulator/3.0/Applications/A2DCBF96-8F15-4527-BDF1-BD90B34D401C/BlockPop.app/BlockPop
May 30 17:20:39 Niklas-Johanssons-Mac-mini mobile_installationd[15712]: b0081000 init_simulator_paths: No simulator root specified. Falling back to environment variable.
May 30 17:20:39: --- last message repeated 1 time ---
May 30 17:20:39 Niklas-Johanssons-Mac-mini SpringBoard[15713]: Unable to create CFServerConnection. Telephony state may be incorrect.
May 30 17:20:40 Niklas-Johanssons-Mac-mini mobile_installationd[15712]: b0081000 init_simulator_paths: No simulator root specified. Falling back to environment variable.
May 30 17:20:40: --- last message repeated 1 time ---
May 30 17:20:40 Niklas-Johanssons-Mac-mini SpringBoard[15713]: Unable to create CFServerConnection. Telephony state may be incorrect.
May 30 17:20:40: --- last message repeated 2 times ---
May 30 17:20:40 Niklas-Johanssons-Mac-mini SpringBoard[15713]: Can't find the translation dictionary, loadTranslationDictionaries
May 30 17:20:40 Niklas-Johanssons-Mac-mini mobile_installationd[15712]: b0081000 init_simulator_paths: No simulator root specified. Falling back to environment variable.
May 30 17:20:40: --- last message repeated 3 times ---
May 30 17:20:40 Niklas-Johanssons-Mac-mini SpringBoard[15713]: Unable to create CFServerConnection. Telephony state may be incorrect.
May 30 17:20:41: --- last message repeated 1 time ---
May 30 17:20:41 Niklas-Johanssons-Mac-mini SpringBoard[15713]: Launchd returned an unexpected type or didn't return a value for job label UIKitApplication:com.iAndApp.BlockPop[0x8abd] with job key PID
May 30 17:20:41 Niklas-Johanssons-Mac-mini SpringBoard[15713]: Unable to create CFServerConnection. Telephony state may be incorrect.
May 30 17:21:10 Niklas-Johanssons-Mac-mini Xcode[15496]: Error launching simulated application: Error Domain=DTiPhoneSimulatorErrorDomain Code=1 UserInfo=0x200edcc00 "iPhone Simulator failed to find the process ID of com.iAndApp.BlockPop."

Where do I start? I am really stuck and would be most grateful for any help!
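One thing that may be worth trying, offered as a guess rather than a known fix: the system.log shows stale app bundles with unreadable signer identities under ~/Library/Application Support/iPhone Simulator/3.0/Applications (old copies of BlockPop, BlockPopLite, and SysSound), and clearing that directory forces a clean install on the next Build and Run:

# Quit Xcode and the iPhone Simulator first.
# Removes the apps installed in the 3.0 simulator; they are redeployed on the next build.
rm -rf ~/Library/Application\ Support/iPhone\ Simulator/3.0/Applications

Any data those simulator apps have saved locally is lost, so this is only appropriate when the installed copies are disposable.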

    Read the article

  • using performSelector

    - by zebra
    Hi all, I have a method that implements a reverse geocoder:

- (void)reversing {
    geoCoder = [[MKReverseGeocoder alloc] initWithCoordinate:locManager.location.coordinate];
    geoCoder.delegate = self;
    [geoCoder start];
}

I call reversing from another method with this:

[self performSelector:@selector(reversing) withObject:nil afterDelay:10];

and I receive:

2010-04-30 17:44:17.616 high[1167:207] Retrive City Milano
2010-04-30 17:44:17.628 high[1167:207] geocoder released
2010-04-30 17:44:18.723 high[1167:207] Error Domain=MKErrorDomain Code=4 "Operation could not be completed. (MKErrorDomain error 4.)"
Program received signal: "EXC_BAD_ACCESS".

Can someone help me? :D
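The console output ("geocoder released" followed a second later by the MKErrorDomain failure and EXC_BAD_ACCESS) is consistent with the MKReverseGeocoder being released before it has finished, so its callback lands on a dead object. A sketch of the delegate side, assuming geoCoder is a retained ivar as in the question; these are the actual MKReverseGeocoderDelegate methods, with the release deferred until the geocoder is done:

- (void)reverseGeocoder:(MKReverseGeocoder *)geocoder didFindPlacemark:(MKPlacemark *)placemark {
    NSLog(@"Retrieved city %@", placemark.locality);
    geocoder.delegate = nil;    // stop further callbacks to this object
    [geocoder autorelease];     // balances the alloc in reversing
    geoCoder = nil;
}

- (void)reverseGeocoder:(MKReverseGeocoder *)geocoder didFailWithError:(NSError *)error {
    NSLog(@"Reverse geocode failed: %@", error);    // the MKErrorDomain error 4 would land here
    geocoder.delegate = nil;
    [geocoder autorelease];
    geoCoder = nil;
}

If reversing can fire again via performSelector while a previous lookup is still in flight, it should also release (or cancel) the old geoCoder before overwriting the ivar, otherwise the first instance leaks and may call back into a stale delegate.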

    Read the article

  • SQl rows to columns conversion

    - by Thihara
    Hi, I have a table ClassAttendance and I'm using MSSQL 2005:

studentID   attendanceDate   status
1004        2010-03-17       0
1005        2010-03-17       1
1006        2010-03-17       0
1007        2010-03-17       0
1004        2010-03-19       0
1005        2010-03-19       1
1006        2010-03-19       0
1007        2010-03-19       0
1004        2010-03-20       1

As you can see, studentID is a foreign key to a table called StudentData, and attendanceDate has an unknown number of rows. Can I get the output below by using a query? I need the dates in one month to be columns, and the values of those date columns will be the values from the status column. The number of date records per studentID is the same; it is the number of distinct dates in the attendanceDate field that is unknown.

studentID   2010-03-17   2010-03-19   2010-03-20
1004        0            0            1
etc.

This is for creating a report, so I need to do it in a query. Please help if you can.
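For a fixed set of dates, SQL Server 2005's PIVOT operator does exactly this. A minimal sketch using the table and column names from the question, assuming status is an integer column (CAST it first if it is a bit):

SELECT studentID, [2010-03-17], [2010-03-19], [2010-03-20]
FROM (
    SELECT studentID, attendanceDate, status
    FROM ClassAttendance
) AS src
PIVOT (
    MAX(status)
    FOR attendanceDate IN ([2010-03-17], [2010-03-19], [2010-03-20])
) AS pvt
ORDER BY studentID;

Because the set of dates is unknown up front, the real query has to be built dynamically: select the distinct attendanceDate values into a delimited, bracketed column list, splice that list into both places it appears in the statement above, and run the result with EXEC sp_executesql.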

    Read the article

  • Removing old kernel entries in Grub

    - by To Do
    I regularly delete old kernels using Synaptic, leaving only the latest two entries. I'm using Precise. However, in my Grub "Previous Linux versions" menu there are quite a few entries labelled 2.6.38. I cannot find these linux-images in Synaptic.

dpkg -l | grep linux-image gives:

rc  linux-image-3.0.0-17-generic      3.0.0-17.30                Linux kernel image for version 3.0.0 on x86/x86_64
ii  linux-image-3.2.0-27-generic      3.2.0-27.43                Linux kernel image for version 3.2.0 on 32 bit x86 SMP
ii  linux-image-3.2.0-29-generic      3.2.0-29.46                Linux kernel image for version 3.2.0 on 32 bit x86 SMP
ii  linux-image-3.4.0-030400-generic  3.4.0-030400.201205210521  Linux kernel image for version 3.4.0 on 32 bit x86 SMP
ii  linux-image-generic               3.2.0.29.31                Generic Linux kernel image

sudo update-grub gives:

Generating grub.cfg ...
Found linux image: /boot/vmlinuz-3.4.0-030400-generic
Found initrd image: /boot/initrd.img-3.4.0-030400-generic
Found linux image: /boot/vmlinuz-3.2.0-29-generic
Found initrd image: /boot/initrd.img-3.2.0-29-generic
Found linux image: /boot/vmlinuz-3.2.0-27-generic
Found initrd image: /boot/initrd.img-3.2.0-27-generic
Found linux image: /boot/vmlinuz-2.6.38-11-generic
Found initrd image: /boot/initrd.img-2.6.38-11-generic
Found linux image: /boot/vmlinuz-2.6.38-10-generic
Found initrd image: /boot/initrd.img-2.6.38-10-generic
Found linux image: /boot/vmlinuz-2.6.38-8-generic
Found initrd image: /boot/initrd.img-2.6.38-8-generic
Found memtest86+ image: /boot/memtest86+.bin
Found Windows Vista (loader) on /dev/sda1

sudo apt-get remove linux-image-2.6.38-8-generic gives:

E: Unable to locate package linux-image-2.6.38-8-generic
E: Couldn't find any package by regex 'linux-image-2.6.38-8-generic'

My boot folder contains the following:

abi-2.6.38-10-generic         initrd.img-3.4.0-030400-generic
abi-2.6.38-11-generic         memtest86+.bin
abi-2.6.38-8-generic          memtest86+_multiboot.bin
abi-3.2.0-27-generic          System.map-2.6.38-10-generic
abi-3.2.0-29-generic          System.map-2.6.38-11-generic
abi-3.4.0-030400-generic      System.map-2.6.38-8-generic
config-2.6.38-10-generic      System.map-3.2.0-27-generic
config-2.6.38-11-generic      System.map-3.2.0-29-generic
config-2.6.38-8-generic       System.map-3.4.0-030400-generic
config-3.2.0-27-generic       vmcoreinfo-2.6.38-10-generic
config-3.2.0-29-generic       vmcoreinfo-2.6.38-11-generic
config-3.4.0-030400-generic   vmcoreinfo-2.6.38-8-generic
extlinux                      vmlinuz-2.6.38-10-generic
grub                          vmlinuz-2.6.38-11-generic
initrd.img-2.6.38-10-generic  vmlinuz-2.6.38-8-generic
initrd.img-2.6.38-11-generic  vmlinuz-3.2.0-27-generic
initrd.img-2.6.38-8-generic   vmlinuz-3.2.0-29-generic
initrd.img-3.2.0-27-generic   vmlinuz-3.4.0-030400-generic
initrd.img-3.2.0-29-generic

and ls -l /etc/grub.d yields:

total 56
-rwxr-xr-x 1 root root 6715 Apr 17 20:16 00_header
-rwxr-xr-x 1 root root 5522 Oct  1  2011 05_debian_theme
-rwxr-xr-x 1 root root 7407 May 17 09:22 10_linux
-rwxr-xr-x 1 root root 6335 Apr 17 20:16 20_linux_xen
-rwxr-xr-x 1 root root 1588 May  3  2011 20_memtest86+
-rwxr-xr-x 1 root root 7603 Apr 17 20:16 30_os-prober
-rwxr-xr-x 1 root root  214 Oct  1  2011 40_custom
-rwxr-xr-x 1 root root   95 Oct  1  2011 41_custom
-rw-r--r-- 1 root root  483 Oct  1  2011 README

gdisk -l /dev/sda yields:

Partition table scan:
  MBR: MBR only
  BSD: not present
  APM: not present
  GPT: not present

***************************************************************
Found invalid GPT and valid MBR; converting MBR to GPT format.
***************************************************************

Disk /dev/sda: 312581808 sectors, 149.1 GiB
Logical sector size: 512 bytes
Disk identifier (GUID): F832A498-05E1-4615-B5B1-757ACB4A757A
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 312581774
Partitions will be aligned on 2048-sector boundaries
Total free space is 4183661 sectors (2.0 GiB)

Number  Start (sector)    End (sector)  Size      Code  Name
   1            2048        61442047    29.3 GiB  0700  Microsoft basic data
   3       163842048       169986047    2.9 GiB   8200  Linux swap
   4       169986048       312578047    68.0 GiB  0700  Microsoft basic data
   5        61444096       159666175    46.8 GiB  8300  Linux filesystem

Please help with removing the old and nonexistent kernels from Grub.
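Since dpkg no longer knows about any 2.6.38 image (the dpkg list above shows only an rc entry for 3.0.0 plus the installed 3.2.0/3.4.0 kernels), apt-get cannot remove them; the files in /boot are orphans, and update-grub builds menu entries straight from whatever kernels it finds there. A cautious sketch of the manual cleanup, using the exact filenames from the /boot listing above; check uname -r first so you never delete the running kernel:

# Confirm the running kernel is not one of the 2.6.38 images:
uname -r

# Remove the orphaned 2.6.38 files that dpkg no longer tracks
# (bash brace expansion covers all six file types for the three versions):
sudo rm /boot/{vmlinuz,initrd.img,abi,config,System.map,vmcoreinfo}-2.6.38-{8,10,11}-generic

# Rebuild the Grub menu without them:
sudo update-grub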

    Read the article

  • The best computer ever

    - by Jeff
    (This is a repost from my personal blog… wow… I need to write more technical stuff!) About three years and three months ago, I bought a 17" MacBook Pro, and it turned out to be the best computer I've ever owned.

You might think that every computer with better specs is automatically better than the last, but that hasn't been my experience. My first one was a Sony, back in the Pentium III days, and it cost an astonishing $2,500. That was even more ridiculous in 1999 dollars. It had a dial-up modem and a CD-ROM, built in! It may have even played DVDs. A few years later I bought an HP, and it ended up being a pile of shit. The power connector inside came loose from the board, and on occasion would even short. In 2005, I bought a Dell, and it wasn't bad. It had a really high resolution screen (complete with dead pixels, a problem in those days), and it was the first laptop I felt I could do real work on. When 2006 rolled around, Apple started making computers with Intel CPUs, and I bought the very first one the week it came out. I used Boot Camp to run Windows. I still have it in its box somewhere, and I used it for three years.

The current 17" was new in 2009. The goodness was largely rooted in having a big screen with lots of dots. This computer has been the source of hundreds of blog posts, tens of thousands of lines of code, video and photo editing, and of course, a whole lot of Web surfing. It connected to corpnet at Microsoft, to WiFi in Hawaii, and has presented many a deck. It has traveled with me tens of thousands of miles.

Last year, I put a solid state drive in it, and it was like getting a new computer. I can boot up a Windows 7 VM in about 19 seconds. Having 8 gigs of RAM has always been fantastic. Everything about it has been fast and fun. When new, the battery (when not using VMs) could get as much as 10 hours. I can still do 7 without much trouble. After 460 charge cycles, the battery health is still between 85 and 90%.

The only real negative has been the size and weight. It's only an inch thick, but naturally it's pretty big with a 17" screen. You don't get battery life like that without a huge battery, either, so it's heavy. It was never a deal breaker, but on a long haul across a large airport, you know you're carrying it.

Today, Apple announced a new, thinner and lighter 15" laptop, with twice the RAM and CPU cores, and four times the screen resolution. It basically handles my size and weight issues while retaining the resolution, and it still costs less than my 17" did. So I ordered one. Three years is an excellent run, but I kind of budgeted for a new workhorse this year anyway.

So if you're interested in a 17" MacBook Pro with a Core 2 Duo 2.66 GHz CPU, 8 gigs of RAM and a 320 gig hard drive (sorry, I'm keeping the SSD), I have one to sell. They've apparently discontinued the 17", which is going to piss off the video community. It's in excellent condition, with a few minor scratches, but I take care of my stuff.

    Read the article

< Previous Page | 9 10 11 12 13 14 15 16 17 18 19 20  | Next Page >