Search Results

Search found 12144 results on 486 pages for 'the old toxophilist'.


  • Exchange 2003-Exchange 2010 post migration GAL/OAB problem

    - by user68726
    I am very new to Exchange so forgive my newbie-ness. I've exhausted Google trying to find a way to solve my problem so I'm hoping some of you gurus can shed some light on my next steps. Please forgive my bungling around through this. The problem I cannot download/update the Global Address List (GAL) and Offline Address Book (OAB) on my Outlook 2010 clients. I get: Task 'emailaddress' reported error (0x8004010F) : 'The operation failed. An object cannot be found.' ---- error. I'm using cached exchange mode, which if I turn off Outlook hangs completely from the moment I start it up. (Note I've replaced my actual email address with 'emailaddress') Background information I migrated mailboxes, public store, etc. from a Small Business Server 2003 with Exchange 2003 box to a Server 2008 R2 with Exchange 2010 based primarily on an experts exchange how to article. The exchange server is up and running as an internet facing exchange server with all of the roles necessary to send and receive mail and in that capacity is working fine. I "thought" I had successfully migrated everything from the SBS03 box, and due to huge amounts of errors in everything from AD to the Exchange install itself I removed the reference to the SBS03 server in adsiedit. I've still got access to the old SBS03 box, but as I said the number of errors in everything is preventing even the uninstall of Exchange (or the starting of the Exchange Information Store service), so I'm quite content to leave that box completely out of the picture while trying to solve my problem. After research I discovered this is most likely because I failed to run the “update-globaladdresslist” (or get / update) command from the Exchange shell before I removed the Exchange 2003 server from adsiedit (and the network). If I run the command now it gives me: WARNING: The recipient "domainname.com/Microsoft Exchange System Objects/Offline Address Book - first administrative group" is invalid and couldn't be updated. WARNING: The recipient "domainname.com/Microsoft Exchange System Objects/Schedule+ Free Busy Information – first administrative group" is invalid and couldn't be updated. WARNING: The recipient "domainname.com/Microsoft Exchange System Objects/ContainernameArchive" is invalid and couldn't be updated. WARNING: The recipient "domainname.com/Microsoft Exchange System Objects/ContainernameContacts" is invalid and couldn't be updated. (Note that I’ve replaced my domain with “domainname.com” and my organization name with “containername”) What I’ve tried I don’t want to use the old OAB, or GAL, I don’t care about either, our GAL and distribution lists needed to be organized anyway, so at this point I really just want to get rid of the old reference to the “first administrative group” and move on. I’ve tried to create a new GAL and tell Exchange 2010 to use that GAL instead of the old GAL, but I'm obviously missing some of the commands or something dumb I need to do to start over with a blank slate/GAL/OAB. I'm very tempted to completely delete the entire "first administrative group" tree from adsiedit and see if that gets rid of the ridiculous reference that no longer exists but I dont want to break something else. 
    Commands run to try to create a new GAL and tell Exchange 2010 to use that GAL:
      New-GlobalAddressList -Name NAMEOFNEWGAL
      Set-GlobalAddressList GUID -Name NAMEOFNEWGAL
    This did nothing for me, except that now when I run Get-GlobalAddressList (with or without the | FL pipe) I see two GALs listed: the "Default Global Address List" and the "NAMEOFNEWGAL" that I created. After a little more research this morning, it looks like you can't change/delete/remove the default address list, and the only way to do what I'm trying to do would be to remove the default address list via adsiedit and recreate it with a command something like:
      New-GlobalAddressList -Name "Default Global Address List" -IncludedRecipients AllRecipients
    This would be acceptable, but I've searched and searched and can't find instructions or a breakdown of where exactly the default GAL lives in AD, and whether I'd have to remove multiple child references/records. Of interest, I'm getting event ID 9337 in the application log on my Exchange 2010 box ("OALGen did not find any recipients in address list \Global Address List. This offline address list will not be generated. -\NAMEOFMYOAB"), which pretty much confirms my suspicion that the empty GAL/OAB is what's causing the Outlook client 0x8004010F error. Help please!

    Read the article

  • Postfix not sending/allowing receiving of messages after server (hardware) changed

    - by 537mfb
    We had an old notebook running Ubuntu 12.04 working as a web/FTP/mail server, and it worked, but since the notebook was pretty old and unreliable, a desktop was bought to replace it before it stopped working altogether. Due to issues with the new desktop's video card we couldn't use Ubuntu 12.04, so we installed Ubuntu 13.10 and went about configuring it. Since we removed the notebook from the network, we kept the same computer name and local IP address to make things as close to the old server as possible configuration-wise. However, something has gone wrong: Postfix throws error 451 4.3.0 lookup failure on every attempt to send mail, and no email can be received either. Our main.cf file is a copy of the one we were using (and working) on the old server (note that we use EHCP):
      # See /usr/share/postfix/main.cf.dist for a commented, more complete version
      # Debian specific: Specifying a file name will cause the first
      # line of that file to be used as the name. The Debian default
      # is /etc/mailname.
      #myorigin = /etc/mailname
      smtpd_banner = $myhostname ESMTP $mail_name powered by Easy Hosting Control Panel (ehcp) on Ubuntu, www.ehcp.net
      biff = no
      # appending .domain is the MUA's job.
      append_dot_mydomain = no
      # Uncomment the next line to generate "delayed mail" warnings
      #delay_warning_time = 4h
      readme_directory = no
      myhostname = m21-traducoes.com.pt
      relayhost =
      mydestination = localhost, 89.152.248.139
      mynetworks = 127.0.0.0/8, 192.168.0.0/16, 172.16.0.0/16, 10.0.0.0/8, 89.152.248.0/24
      virtual_alias_domains =
      virtual_alias_maps = proxy:mysql:/etc/postfix/mysql-virtual_forwardings.cf, proxy:mysql:/etc/postfix/mysql-virtual_email2email.cf
      transport_maps = proxy:mysql:/etc/postfix/mysql-virtual_transports.cf
      virtual_mailbox_domains = proxy:mysql:/etc/postfix/mysql-virtual_domains.cf
      virtual_mailbox_maps = proxy:mysql:/etc/postfix/mysql-virtual_mailboxes.cf
      virtual_mailbox_base = /home/vmail
      virtual_uid_maps = static:5000
      virtual_gid_maps = static:5000
      smtpd_sasl_auth_enable = yes
      smtpd_sasl_security_options = noanonymous
      broken_sasl_auth_clients = yes
      smtpd_recipient_restrictions = permit_mynetworks,permit_sasl_authenticated,check_client_access hash:/var/lib/pop-before-smtp/hosts,reject_unauth_destination
      smtp_use_tls = yes
      smtpd_use_tls = yes
      smtpd_tls_auth_only = no
      smtpd_tls_CAfile = /etc/postfix/cacert.pem
      smtpd_tls_cert_file = /etc/postfix/smtpd.cert
      smtpd_tls_key_file = /etc/postfix/smtpd.key
      smtpd_tls_loglevel = 1
      smtpd_tls_received_header = yes
      smtpd_tls_session_cache_timeout = 3600s
      tls_random_source = dev:/dev/urandom
      virtual_create_maildirsize = yes
      virtual_mailbox_extended = yes
      virtual_mailbox_limit_maps = proxy:mysql:/etc/postfix/mysql-virtual_mailbox_limit_maps.cf
      virtual_mailbox_limit_override = yes
      virtual_maildir_limit_message = "The user you are trying to reach is over quota."
      virtual_overquota_bounce = yes
      debug_peer_list =
      sender_canonical_maps =
      debug_peer_level = 1
      proxy_read_maps = $local_recipient_maps $mydestination $virtual_alias_maps $virtual_alias_domains $virtual_mailbox_maps $virtual_mailbox_domains $relay_recipient_maps $canonical_maps $sender_canonical_maps $recipient_canonical_maps $relocated_maps $mynetworks $virtual_mailbox_limit_maps $transport_maps
      alias_maps = hash:/etc/aliases
      smtpd_relay_restrictions = permit_mynetworks, permit_sasl_authenticated,check_client_access hash:/var/lib/pop-before-smtp/hosts,reject_unauth_destination
      smtpd_destination_concurrency_limit = 2
      smtpd_destination_rate_delay = 1s
      smtpd_extra_recipient_limit = 10
      disable_vrfy_command = yes
      smtpd_delay_reject = yes
      smtpd_helo_required = yes
      smtpd_error_sleep_time = 1s
      smtpd_soft_error_limit = 10
      smtpd_hard_error_limit = 20
    This configuration was working before, but now every time I try to send a mail in SquirrelMail it reports:
      Message not sent. Server replied: Requested action aborted: error in processing 451 4.3.0 <[email protected]>: Temporary lookup failure
    And I can't send mail to it from outside either. Any ideas?
    EDIT: Here are the issues MXToolBox reports for my domain, hopefully answering @Teun Vink:
                BlackList  Mail Server  Web Server  DNS
      Error     4          0            2           0
      Warnings  0          0            0           3
      Passed    0          6            3           12
    So the domain is on some blacklist, but that doesn't explain the error at all. No mail server issues were found (except that it's not working). The two web server errors are because I don't have HTTPS working (no SSL certificate), so that test fails. The three DNS warnings were already there when the old machine was working and relate to things I can't control: SOA Refresh Value is outside of the recommended range; SOA Expire Value out of recommended range; SOA NXDOMAIN Value too high. I've searched, and as far as I can tell only the people who sold it to us can change those values, and they won't.
    Edit2: I half solved the issue. On the new machine Postfix was installed but postfix-mysql wasn't, so it couldn't connect to the database (rookie mistake). After fixing that I can now send mail to the outside without any issues, but I am still not able to receive mail from outside. The sender doesn't get any non-delivery warning, but the message never arrives in the inbox, and the log shows:
      Nov 13 15:11:57 m21-traducoes postfix/smtpd[5872]: NOQUEUE: reject: RCPT from relay4.ptmail.sapo.pt[212.55.154.24]: 451 4.3.5 <relay4.ptmail.sapo.pt[212.55.154.24]>: Client host rejected: Server configuration error; from=<[email protected]> to=<[email protected]> proto=SMTP helo=<sapo.pt>
      Nov 13 15:11:57 m21-traducoes postfix/smtpd[5872]: disconnect from relay4.ptmail.sapo.pt[212.55.154.24]

    Read the article

  • what is best config for nginx worker_rlimit_nofile and worker_connections 28672

    - by Binh Nguyen
    I have an issue with very slow web browser response (especially in IE): sometimes it times out, and sometimes it hangs for up to 20 seconds on a single 301 redirect. Testing with the F12 developer tools in IE reports a very long wait/start time, but once the connection is made the page elements download and display quickly (tested at xaluan.com). It mostly happens when there are more than 2100 active users on the site (according to Google's real-time analytics). The server runs CentOS 5 with nginx and Apache on a 32-core CPU, 96 GB RAM, and RAID 10 SAS HDDs. Following is my config:
      user nobody;
      # no need for more workers in the proxy mode
      worker_processes 28; #old 32 #good at 24
      error_log /var/log/nginx/error.log; #old add in end: info
      worker_rlimit_nofile 22528;
      events {
        worker_connections 22528;
        use epoll; # you should use epoll here for Linux kernels 2.6.x
      }
      http {
        server_name_in_redirect off;
        server_names_hash_max_size 10240;
        server_names_hash_bucket_size 1024;
        include mime.types;
        default_type application/octet-stream;
        server_tokens off;
        disable_symlinks off;
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        server_name_in_redirect off;
        server_names_hash_max_size 10240;
        server_names_hash_bucket_size 1024;
        include mime.types;
        default_type application/octet-stream;
        server_tokens off;
        disable_symlinks off;
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 25; #old 5
        gzip on; #old on
        gzip_vary on;
        gzip_disable "MSIE [1-6]\.";
        gzip_proxied any;
        gzip_http_version 1.1;
        gzip_min_length 1000;
        gzip_comp_level 6;
        gzip_buffers 16 8k;
        ignore_invalid_headers on;
        client_header_timeout 1m; #3m
        client_body_timeout 1m; #3m
        send_timeout 1m; #3m
        reset_timedout_connection on;
        connection_pool_size 256;
        client_header_buffer_size 256k;
        large_client_header_buffers 4 256k;
        client_max_body_size 100M;
        client_body_buffer_size 256k;
        request_pool_size 32k;
        output_buffers 4 32k;
        postpone_output 1460;
        proxy_temp_path /tmp/nginx_proxy/;
        client_body_in_file_only on;
        log_format bytes_log "$msec $bytes_sent .";
        limit_conn_zone $binary_remote_addr zone=limit_per_ip:1m;
        limit_conn limit_per_ip 20;
        limit_req_zone $binary_remote_addr zone=allips:5m rate=200r/s;
        limit_req zone=allips burst=200 nodelay;
        include "/etc/nginx/vhosts/*";
      }
    I have played around with the worker config:
    1. Tried increasing the values as someone suggested: worker_rlimit_nofile = worker_connections = worker_processes * 1024 = 32768.
    2. Tried setting them low: worker_processes = 28 with the other worker values at 22582, and other combinations too, but that didn't work because it sometimes made the server load climb very quickly.
    3. Tried commenting out worker_rlimit_nofile so it would be unlimited. That seemed to improve the response-time issue a bit, but it also made the server load climb quickly at peak time.
    Please help, thanks. PS: here is the Apache config as well, in case you want to look at it to help me out, thanks:
      Listen 0.0.0.0:8081
      User nobody
      Group nobody
      ExtendedStatus On
      ServerAdmin [email protected]
      ServerName server.xaluan.com
      LogLevel warn
      # These can be set in WHM under 'Apache Global Configuration'
      Timeout 100
      TraceEnable Off
      ServerSignature Off
      ServerTokens ProductOnly
      FileETag None
      StartServers 15
      <IfModule prefork.c>
        MinSpareServers 20
        MaxSpareServers 50
        #MaxSpareServers 40
      </IfModule>
      ServerLimit 1572
      MaxClients 1572
      MaxRequestsPerChild 4000
      # MaxRequestsPerChild 3000
      KeepAlive On
      KeepAliveTimeout 3
      MaxKeepAliveRequests 300
      #MaxKeepAliveRequests 130
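    As a rough sanity check on those numbers (assuming stock nginx semantics, nothing specific to this site): worker_connections is a per-worker limit that counts both client-side and upstream connections, so 28 workers at 22528 connections each allow roughly 28 × 22528 ≈ 630,000 connections, or about 315,000 proxied clients once you count two connections (client plus Apache backend) per request. That is far above the ~2100 concurrent users reported. Since each connection also holds one file descriptor, worker_rlimit_nofile should generally be at least worker_connections plus some headroom for log and static files; setting both to exactly the same value, as here, leaves essentially no headroom.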

    Read the article

  • What seems to be plaguing my hard drives?

    - by Craig
    I'm in a little bit of a tech nightmare here. I know oodles about software, but not much more than the above-average user about hardware. I recently had to toss an old desktop of mine. It was gradually getting slower and slower, and after being shut down for long periods of time it would choke up on startup. Sometimes it'd give a disk read error, sometimes say no OS was found, etc. Restarting it about 5-15 times would eventually get it to boot properly. Weird. I also noticed that startup programs were going missing, Dropbox was reindexing my entire folder, and Backblaze was backing up fewer files than it should. This led me to believe it was probably a hard drive issue. I began to wonder why I'd have hard drive issues, and concluded it was probably because of recent power surges and outages. I'm sure those do a number on a drive. I bought a new desktop recently. It's not a beast or anything, but it's enough for what I do. It's an eMachines (I know, I know) Ultra-Slim (http://www.amazon.com/eMachines-Ultra-Slim-ER1401-57-Desktop-PC/dp/B00475OG9U). This is ideal for me because it's small and portable. It comes with an AC adapter and battery, like a laptop. Just to be safe, I bought an uninterruptible power supply on top of that. It's basically protected completely from any outages that might scramble the drive. I set this up a few days ago, and for the past few days I've been perfecting settings, downloading the usual applications, etc. Two days ago, I noticed Dropbox was reindexing my entire Dropbox folder. I installed both Dropbox and Backblaze on this system, but it is much more lightweight than the old one, with only about 15 third-party applications installed. I thought that maybe Dropbox and Backblaze were stressing my system, so I turned off Backblaze. Still, Dropbox runs and hits this endless reindex issue. I also noticed that upon a recent reboot, two applications did not start on startup. And, much like my old desktop, every 3rd or 4th reboot I'll be forced into a chkdsk. This makes me incredibly nervous. What could possibly have been going wrong with my years-old desktop that is immediately causing the same issues on my new one? I've considered all of the basics. I'm in a very air-conditioned room. I run routine virus scans. I'd like to think I take care of my systems very well. What is this issue that is haunting me? There's always the possibility that this new desktop has a junk hard drive, but it just seems way too coincidental.

    Read the article

  • "Service Unavailable" when browsing to static HTML page in non-application IIS website on Windows 2003 (possibly SharePoint WSS 2.0 related?)

    - by Jordan Rieger
    Background: My client has an old Pentium III Windows 2003 server whose 16/36 GB disks are dying. On it he has a database-driven web site and email application that needs further customization by a developer (me). First we need to get it working on the new server. The original developer is no longer available to provide a system setup guide. So my client got a tech who imaged the old drives over to the new server and managed to get it booting. But the IIS-driven site no longer works. In fact it seems that IIS itself does not work. Problem: Service Unavailable when attempting to browse from the server itself to the URL for a local Web Site called test which I setup in IIS to serve a single static index.htm file. This I did to isolate the problem, and eliminate the client's application from the equation. The site is setup on port 80 with the host header "test.myclientsdomain.com", and I used the etc\hosts file to point that host at the local IP. I know the host entry took effect because I can ping it. When doing an iisreset, I get: Attempting start... Restart attempt failed. IIS Admin Service or a service dependent on IIS Admin is not active. It most likely failed to start, which may mean that it's disabled. Despite this message, the services all stay in the Started state. The only relevant System event logs I found are: Event Type: Error Event Source: W3SVC Event Category: None Event ID: 1002 Date: 11/4/2012 Time: 11:04:47 PM User: N/A Computer: ALPHA1 Description: Application pool 'DefaultAppPool' is being automatically disabled due to a series of failures in the process(es) serving that application pool. Event Type: Error Event Source: W3SVC Event Category: None Event ID: 1039 Date: 11/4/2012 Time: 11:13:12 PM User: N/A Computer: ALPHA1 Description: A process serving application pool 'DefaultAppPool' reported a failure. The process id was '5636'. The data field contains the error number. Data: 0000: 7e 00 07 80 ~.. And one Application event log: Event Type: Error Event Source: Windows SharePoint Services 2.0 Event Category: None Event ID: 1000 Date: 11/4/2012 Time: 11:34:04 PM User: N/A Computer: ALPHA1 Description: #50070: Unable to connect to the database STS_Config on ALPHA2\SharePoint. Check the database connection information and make sure that the database server is running. That last log tells me that the tech may have initially tried to have both the old and the new server running, by renaming the new server from ALPHA1 to ALPHA2. And perhaps SharePoint grabbed onto that change, and now can't tell that the machine name has been switched back to the old ALPHA1. But why would SharePoint interfere with a static IIS web site serving a single HTML file? The test site is not even within an Application pool (I clicked the Remove button.) What I have tried/eliminated: No relevant services seem to be disabled: IIS Admin, WWW Publishing, Sharepoint Timer Giving Full Control to All Users/Everyone on the c:\inetpub\test folder serving my test site. I can connect to and query the local SharePoint config database (ALPHA1\SHAREPOINT\STS_CONFIG) from SSMS. But when I try to do stsadm -o setconfigdb -connect -databaseserver ALPHA1\SHAREPOINT it tells me The SharePoint admininstration port does not exist. Please use stsadm.exe to create it. And when I do that, using the port 9487 specified in the IIS SharePoint Admin site config, it tells me the port is already in use. Needless to say, simply browsing to the admin site gives me a similar error about being unable to reach the config database. 
    I didn't want to go further down the SharePoint path, as it may be completely unrelated to my IIS issue, and I don't even know yet whether SharePoint is required for this application to work. The app itself is ASP.NET/C#/Silverlight with a little MS Word integration (maybe that's where the SharePoint stuff comes in).

    Read the article

  • How to Reduce the Size of Your WinSXS Folder on Windows 7 or 8

    - by Chris Hoffman
    The WinSXS folder at C:\Windows\WinSXS is massive and continues to grow the longer you have Windows installed. This folder builds up unnecessary files over time, such as old versions of system components. It also contains files for uninstalled, disabled Windows components. Even if you don’t have a Windows component installed, it will be present in your WinSXS folder, taking up space.
    Why the WinSXS Folder Gets So Big
    The WinSXS folder contains all Windows system components. In fact, component files elsewhere in Windows are just links to files contained in the WinSXS folder; the WinSXS folder contains every operating system file. When Windows installs updates, it drops the new Windows component in the WinSXS folder and keeps the old component there as well. This means that every Windows update you install increases the size of your WinSXS folder. It also allows you to uninstall operating system updates from the Control Panel, which can be useful in the case of a buggy update — but it’s a feature that’s rarely used.
    Windows 7 dealt with this by including a feature that allows Windows to clean up old Windows update files after you install a new Windows service pack. The idea was that the system could be cleaned up regularly along with service packs. However, Windows 7 only saw one service pack — Service Pack 1, released in 2011 — and Microsoft has no intention of launching another. This means that, for more than three years, Windows update uninstallation files have been building up on Windows 7 systems and couldn’t be easily removed.
    Clean Up Update Files
    To fix this problem, Microsoft recently backported a feature from Windows 8 to Windows 7. They did this without much fanfare — it was rolled out in a typical minor operating system update, the kind that doesn’t generally add new features. To clean up such update files, open the Disk Cleanup wizard (tap the Windows key, type “disk cleanup” into the Start menu, and press Enter). Click the Clean up System Files button, enable the Windows Update Cleanup option, and click OK. If you’ve been using your Windows 7 system for a few years, you’ll likely be able to free several gigabytes of space. The next time you reboot after doing this, Windows will take a few minutes to clean up system files before you can log in and use your desktop. If you don’t see this feature in the Disk Cleanup window, you’re likely behind on your updates — install the latest updates from Windows Update.
    Windows 8 and 8.1 include built-in features that do this automatically. In fact, there’s a StartComponentCleanup scheduled task included with Windows that will automatically run in the background, cleaning up components 30 days after you’ve installed them. This 30-day period gives you time to uninstall an update if it causes problems. If you’d like to manually clean up updates, you can also use the Windows Update Cleanup option in the Disk Cleanup window, just as you can on Windows 7. (To open it, tap the Windows key, type “disk cleanup” to perform a search, and click the “Free up disk space by removing unnecessary files” shortcut that appears.)
    Windows 8.1 gives you more options, allowing you to forcibly remove all previous versions of uninstalled components, even ones that haven’t been around for more than 30 days. These commands must be run in an elevated Command Prompt — in other words, start the Command Prompt window as Administrator.
    For example, the following command will uninstall all previous versions of components without the scheduled task’s 30-day grace period:
      DISM.exe /online /Cleanup-Image /StartComponentCleanup
    The following command will remove files needed for uninstallation of service packs. You won’t be able to uninstall any currently installed service packs after running this command:
      DISM.exe /online /Cleanup-Image /SPSuperseded
    The following command will remove all old versions of every component. You won’t be able to uninstall any currently installed service packs or updates after this completes:
      DISM.exe /online /Cleanup-Image /StartComponentCleanup /ResetBase
    Remove Features on Demand
    Modern versions of Windows allow you to enable or disable Windows features on demand. You’ll find a list of these features in the Windows Features window you can access from the Control Panel. Even features you don’t have installed — that is, the features you see unchecked in this window — are stored on your hard drive in your WinSXS folder. If you choose to install them, they’ll be made available from your WinSXS folder. This means you won’t have to download anything or provide Windows installation media to install these features. However, these features take up space. While this shouldn’t matter on typical computers, users with extremely low amounts of storage or Windows server administrators who want to slim their Windows installs down to the smallest possible set of system files may want to get these files off their hard drives.
    For this reason, Windows 8 added a new option that allows you to remove these uninstalled components from the WinSXS folder entirely, freeing up space. If you choose to install the removed components later, Windows will prompt you to download the component files from Microsoft. To do this, open a Command Prompt window as Administrator. Use the following command to see the features available to you:
      DISM.exe /Online /English /Get-Features /Format:Table
    You’ll see a table of feature names and their states. To remove a feature from your system, use the following command, replacing NAME with the name of the feature you want to remove (you can get the feature name from the table above):
      DISM.exe /Online /Disable-Feature /featurename:NAME /Remove
    If you run the /Get-Features command again, you’ll now see that the feature has a status of “Disabled with Payload Removed” instead of just “Disabled.” That’s how you know it’s not taking up space on your computer’s hard drive. If you’re trying to slim down a Windows system as much as possible, be sure to check out our lists of ways to free up disk space on Windows and reduce the space used by system files.

    Read the article

  • Conversion of BizTalk Projects to Use the New WCF-SAP Adaptor

    - by Geordie
    We are in the process of upgrading our BizTalk Environment from BizTalk 2006 R2 to BizTalk 2010. The SAP adaptor in BizTalk 2010 is an all new and more powerful WCF-SAP adaptor. When my colleagues tested out the new adaptor they discovered that the format of the data extracted from SAP was not identical to the old adaptor. This is not a big deal if the structure of the messages from SAP is simple. In this case we were receiving the delivery and invoice iDocs. Both these structures are complex especially the delivery document. Over the past few years I have tweaked the delivery mapping to remove bugs from original mapping. The idea of redoing these maps did not appeal and due to the current work load was not even an option. I opted for a rather crude alternative of pulling in the iDoc in the new typed format and then adding a static map at the start of the orchestration to convert the data to the old schema.  Note WCF-SAP data formats (on the binding tab of the configuration dialog box is the ‘RecieiveIdocFormat’ field): Typed:  Returns a XML document with the hierarchy represented in XML and all fields being represented by XML tags. RFC: Returns an XML document with the hierarchy represented in XML but the iDoc lines in flat file format. String: This returns the iDoc in a format that is closest to the original flat file format but is still wrapped with some top level XML tags. The files also contained some strange characters at the end of each line. I started with the invoice document and it was quite straight forward to add the mapping but this is where my problems started. The orchestrations for these documents are dynamic and so require the identity of the partner to be able to correctly configure the orchestration. The partner identity is in the EDI_DC40 segment of the iDoc. In the old project the RECPRN node of the segment was promoted. The code to set a variable to the partner ID was now failing. After lot of head scratching I discovered the problem was due to the addition of Namespaces to the fields in the EDI_DC40 segment. To overcome this I needed to use an xPath query with a Namespace Manager. This had to be done in custom code. I now tried to repeat the process with the delivery document. Unfortunately when we tried to get sample typed data from SAP an exception was thrown. The adapter "WCF-SAP" raised an error message. Details "Microsoft.ServiceModel.Channels.Common.XmlReaderGenerationException: The segment or group definition E2EDKA1001 was not found in the IDoc metadata. The UniqueId of the IDoc type is: IDOCTYP/3/DESADV01/ZASNEXT1/640. For Receive operations, the SAP adapter does not support unreleased segments.   Our guess is that when the WCF-SAP adaptor tries to down load the data it retrieves a data schema from SAP. For some reason the schema does not match the data. This may be due to the version of SAP we are running or due to a customization. Either way resolving this problem did not look easy. When doing some research on this problem I found an article showing me how to get the data from SAP using the WCF-SAP adaptor without any XML tags. http://blogs.msdn.com/b/adapters/archive/2007/10/05/receiving-idocs-getting-the-raw-idoc-data.aspx Reproduction of Mustansir blog: Since the WCF based SAP Adapter is ... well, WCF based, all data flowing in and out of the adapter is encapsulated within a SOAP message. Which means there are those pesky xml tags all over the place. 
If you want to receive an Idoc from SAP, you can receive it in "Typed" format (in which case each column in each segment of the idoc appears within its own xml tag), or you can receive it in "String" format (in which case there are just 2 xml tags at the top, the raw xml data in string/flat file format, and the 2 closing xml tags). In "String" format, an incoming idoc (for ORDERS05, containing 5 data records) would look like: <ReceiveIdoc ><idocData>EDI_DC40 8000000000001064985620 E2EDK01005 800000000000106498500000100000001 E2EDK14 8000000000001064985000002000000020111000 E2EDK14 8000000000001064985000003000000020081000 E2EDK14 80000000000010649850000040000000200710 E2EDK14 80000000000010649850000050000000200600</idocData></ReceiveIdoc> (I have trimmed part of the control record so that it fits cleanly here on one line). Now, you're only interested in the IDOC data, and don't care much for the XML tags. It isn't that difficult to write your own pipeline component, or even some logic in the orchestration to remove the tags, right? Well, you don't need to write any extra code at all - the WCF Adapter can help you here! During the configuration of your one-way Receive Location using WCF-Custom, navigate to the Messages tab. Under the section "Inbound BizTalk Messge Body", select the "Path" radio button, and: (a) Enter the body path expression as: /*[local-name()='ReceiveIdoc']/*[local-name()='idocData'] (b) Choose "String" for the Node Encoding. What we've done is, used an XPATH to pull out the value of the "idocData" node from the XML. Your Receive Location will now emit text containing only the idoc data. You can at this point, for example, put the Flat File Pipeline component to convert the flat text into a different xml format based on some other schema you already have, and receive your version of the xml formatted message in your orchestration.   This was potentially a much easier solution than adding the static maps to the orchestrations and overcame the issue with ‘Typed’ delivery documents. Not quite so fast… Note: When I followed Mustansir’s blog the characters at the end of each line disappeared. After configuring the adaptor and passing the iDoc data into the original flat file receive pipelines I was receiving exceptions. There was a failure executing the receive pipeline: "PAPINETPipelines.DeliveryFlatFileReceive, CustomerIntegration2.PAPINET.Pipelines, Version=1.0.0.0, Culture=neutral, PublicKeyToken=4ca3635fbf092bbb" Source: "Pipeline " Receive Port: "recSAP_Delivery" URI: "D:\CustomerIntegration2\SAP\Delivery\*.xml" Reason: An error occurred when parsing the incoming document: "Unexpected data found while looking for: 'Z2EDPZ7' The current definition being parsed is E2EDP07GRP. The stream offset where the error occured is 8859. The line number where the error occured is 23. The column where the error occured is 0.". Although the new flat file looked the same as the old one there was a differences. In the original file all lines in the document were exactly 1064 character long. In the new file all lines were truncated to the last alphanumeric character. The final piece of the puzzle was to add a custom pipeline component to pad all the lines to 1064 characters. This component was added to the decode node of the custom delivery and invoice flat file disassembler pipelines. 
    Execute method of the custom pipeline component:
      public IBaseMessage Execute(IPipelineContext pc, IBaseMessage inmsg)
      {
          // Convert the stream to a string.
          Stream s = null;
          IBaseMessagePart bodyPart = inmsg.BodyPart;

          // NOTE: inmsg.BodyPart.Data is implemented only as a setter in the HTTP adapter API and as a
          // getter and setter for the file adapter. Use GetOriginalDataStream to get the data instead.
          if (bodyPart != null)
              s = bodyPart.GetOriginalDataStream();

          string newMsg = string.Empty;
          string strLine;
          try
          {
              StreamReader sr = new StreamReader(s);
              strLine = sr.ReadLine();
              while (strLine != null)
              {
                  // Execute padding code.
                  if (strLine != null)
                      strLine = strLine.PadRight(1064, ' ') + "\r\n";
                  newMsg += strLine;
                  strLine = sr.ReadLine();
              }
              sr.Close();
          }
          catch (IOException ex)
          {
              throw new Exception("Error occurred trying to pad the message to 1064 characters");
          }

          // Convert back to a stream and set it on the Data property.
          inmsg.BodyPart.Data = new MemoryStream(Encoding.UTF8.GetBytes(newMsg));
          // Reset the position of the stream to zero.
          inmsg.BodyPart.Data.Position = 0;
          return inmsg;
      }

    Read the article

  • ArchBeat Link-o-Rama for 2012-08-28

    - by Bob Rhubart
    You may be tempted by IaaS, but you should PaaS on that or your database cloud journey will be a short one "The better option [to IaaS] is to rationalize the deployment stack so that VMs are needed only for exceptional cases," says B. R. Clouse. "By settling on a standard operating system and patch level, you create an infrastructure that potentially all of your databases can share. Now, the building block will be database instances or possibly schemas within databases. These components are the platforms on which you will deploy workloads, hence this is known as Platform as a Service (PaaS)." 'Shadow IT' can be the cloud's best friend | David Linthicum "I do not advocate that IT give up control and allow business units to adopt any old technology they want," says Infoworld cloud computing blogger David Linthicum. "However, IT needs to face reality: For the past three decades or so, corporate IT has been slow on the uptake around the use of productive new technologies." Do you agree? 9 ways cloud will impact IT employment | ZDNet ZDNet blogger Joe McKendrick condenses information from a recent report on how cloud computing will impact IT jobs. Number one on the list: New categories of jobs arising from cloud computing, which include "private cloud developers and administrators, departmental liaisons, integration specialists, cloud architects, and compliance specialists." Yeah, that's right, cloud architects. For more on cloud architects, including what you need to up your game to thrive in the cloud, check out "The Role of the Cloud Architect" on the OTN ArchBeat Podcast. Decisions, Decisions: The art, science, and politics of technology selection "When the time comes for a solution architect to make the final decision about the technologies, standards, and other elements that are to be incorporated into a particular project, what factors weigh most heavily on that decision? It comes as no surprise that among the architects I contacted, business needs top the list." Managing Oracle Exalogic Elastic Cloud with Oracle Enterprise Manager Ops Center Anand Akela's byline is on this post, but "Dr. Jürgen Fleischer, Oracle Enterprise Manager Ops Center Engineering" appears at the end of the post, so it's anybody's guess as to who wrote this thing. But the content includes a complete listing of the Exalogic 2.0.1 Tea Break Snippets series written by a member of the Exalogic team who goes by the name "The Old Toxophilist." So maybe the best thing to do here is ignore the names and focus on the very useful conent. Boost your infrastructure with Coherence into the Cloud | Nino Guarnacci Nino Guarnacci describes a use case that involved managing a variety of data caches that process complex queries and parallel computational operations, in order to maintain the caches in a consistent state on different server instances. Thought for the Day "No one hates software more than software developers." — Jeff Atwood Source: SoftwareQuotes

    Read the article

  • Windows Phone appointment task

    - by Dennis Vroegop
    Originally posted on: http://geekswithblogs.net/dvroegop/archive/2014/08/10/windows-phone-appointment-task.aspx
    I am currently working on a new version of my AgeInDays app for Windows Phone. This app calculates how old you are in days (or weeks, depending on your preferences). The inspiration for this app came from my father, who once told me he proposed to my mother when she was 1000 weeks old. That left me wondering: how old in weeks or days am I? And being the geek I am, I wrote an app for it. If you have a Windows Phone, you can find it at http://www.windowsphone.com/en-in/store/app/age-in-days/7ed03603-0e00-4214-ad04-ce56773e5dab
    A new version of the app was published quite quickly, adding the possibility to mark a date in your agenda when you will have reached a certain age. Of course the logic behind this is extremely simple: just take a DateTime, populate it with the given date from the DatePicker, then call AddDays(numDays) and voila, you have the date. Now all I had to do was implement a way to store this in the user's calendar so he would get a reminder when that date occurred. Luckily, the Windows Phone SDK makes that extremely simple:
      public void PublishTask(DateTime occuranceDate, string message)
      {
          var task = new SaveAppointmentTask()
          {
              StartTime = occuranceDate,
              EndTime = occuranceDate,
              Subject = message,
              Location = string.Empty,
              IsAllDayEvent = true,
              Reminder = Reminder.None,
              AppointmentStatus = AppointmentStatus.Free
          };

          task.Show();
      }
    And that's it. Whenever I call the PublishTask method, an appointment will be made and put in the calendar. Well, not exactly: a template will be made for that appointment and the user will see that template, giving him the option to either discard or save the reminder. The user can also make changes before submitting it to the calendar: it would be useful to be able to change the text in the agenda, and that's exactly what this allows you to do. Now, see the option "Occurs" at the bottom of the screen. This tiny field is what this post is about. You cannot set it from code. I want to be able to have repeating items in my agenda. Say, for instance, you're counting down to a certain date; I want to be able to give you that option as well. However, I cannot. The field "Occurs" is not part of the Task you create in code. Of course, you could create a whole series of events yourself: have the "Occurs" field in your own user interface and make all the appointments. But that's not the same. First, the system doesn't recognize them as part of a series. That means that if you want to change the text later on one of the occurrences, it will not ask you whether you want to open this one or the whole series. More important, however, is that the user has to acknowledge each and every single occurrence and save it into the agenda. Now, I understand why they implemented the system in such a way that the user has to approve an entry. You don't want apps to automatically fill your agenda with messages such as "Remember to pay for my app!". But why not include the "Occurs" option? The user can still opt out if they see this happening. I hope an update will fix this soon. But for now: you just have to count down to your birthday yourself. My app won't support this.
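    The date arithmetic the post describes is ordinary calendar math. As a rough sketch (in C rather than the app's C#, and with a made-up birth date), both the age-in-days calculation and the AddDays-style target date look like this:
      #include <stdio.h>
      #include <time.h>

      int main(void)
      {
          /* Hypothetical birth date: 1980-05-17 (noon avoids DST edge cases). */
          struct tm birth = {0};
          birth.tm_year  = 1980 - 1900;
          birth.tm_mon   = 5 - 1;
          birth.tm_mday  = 17;
          birth.tm_hour  = 12;
          birth.tm_isdst = -1;
          time_t birth_t = mktime(&birth);

          /* Age in days and weeks: the app's basic calculation. */
          double days = difftime(time(NULL), birth_t) / 86400.0;
          printf("Age: %.0f days (%.0f weeks)\n", days, days / 7.0);

          /* Date on which the person turns 1000 weeks old (the AddDays step). */
          struct tm target = birth;
          target.tm_mday += 1000 * 7;   /* mktime normalizes the overflowed field */
          mktime(&target);
          char buf[32];
          strftime(buf, sizeof buf, "%Y-%m-%d", &target);
          printf("1000 weeks old on: %s\n", buf);
          return 0;
      }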

    Read the article

  • Wipe, Delete, and Securely Destroy Your Hard Drive’s Data the Easy Way

    - by The Geek
    Giving a computer to somebody else? Maybe you’re putting it out on Craigslist to sell to a stranger—either way, you’ll want to make sure that your drive is completely wiped, scrubbed, and clean of any personal data. Here’s the easy way to do it. If you only have access to an Ubuntu Live CD or thumb drive, you can actually use that instead if you prefer, and we’ve got you covered with a full guide to securely wiping your PC’s hard drive. Otherwise, keep reading. Wipe the Drive with DBAN Darik’s Boot and Nuke CD is the easiest way to permanently and totally destroy every bit of personal information on that drive—nobody is going to recover a thing once this is done. The first thing you’ll need to do is download a copy of the ISO image, and then burn it to a blank CD with something really useful like Imgburn. Just choose Burn image to Disc at the start screen, select the little file icon, grab the downloaded ISO, and then go. If you need a little more help, we’ve got you covered with a beginner’s guide to burning an ISO image. Once you’re done, stick the disc into the drive, start the PC up, and then once you boot to the DBAN prompt you’ll see a menu. You can pretty much ignore everything on here, and just type… autonuke And there you are, your disk is now being securely wiped. Once it’s all done, you can remove the CD, and then either pack the PC up to sell, or re-install Windows on there if you feel like it. More Advanced Method If you’re really paranoid, want to run a different type of wipe, or just like fiddling with the options, you can choose F3 or hit Enter at the prompt to head to the advanced selection screen. Here you can choose exactly which drive to wipe, or hit the M key to change the method. You’ll be able to choose between a bunch of different wipe options. The Quick Erase is all you really need though. So there you are, easy PC wiping in one package. What about you? Do you make sure to wipe your old PCs before giving them away? Personally I’ve always just yanked out the hard drives before I got rid of an old PC, but that’s just me. Download DBAN from dban.org

    Read the article

  • How John Got 15x Improvement Without Really Trying

    - by rchrd
    The following article was published on a Sun Microsystems website a number of years ago by John Feo. It is still useful and worth preserving. So I'm republishing it here.  How I Got 15x Improvement Without Really Trying John Feo, Sun Microsystems Taking ten "personal" program codes used in scientific and engineering research, the author was able to get from 2 to 15 times performance improvement easily by applying some simple general optimization techniques. Introduction Scientific research based on computer simulation depends on the simulation for advancement. The research can advance only as fast as the computational codes can execute. The codes' efficiency determines both the rate and quality of results. In the same amount of time, a faster program can generate more results and can carry out a more detailed simulation of physical phenomena than a slower program. Highly optimized programs help science advance quickly and insure that monies supporting scientific research are used as effectively as possible. Scientific computer codes divide into three broad categories: ISV, community, and personal. ISV codes are large, mature production codes developed and sold commercially. The codes improve slowly over time both in methods and capabilities, and they are well tuned for most vendor platforms. Since the codes are mature and complex, there are few opportunities to improve their performance solely through code optimization. Improvements of 10% to 15% are typical. Examples of ISV codes are DYNA3D, Gaussian, and Nastran. Community codes are non-commercial production codes used by a particular research field. Generally, they are developed and distributed by a single academic or research institution with assistance from the community. Most users just run the codes, but some develop new methods and extensions that feed back into the general release. The codes are available on most vendor platforms. Since these codes are younger than ISV codes, there are more opportunities to optimize the source code. Improvements of 50% are not unusual. Examples of community codes are AMBER, CHARM, BLAST, and FASTA. Personal codes are those written by single users or small research groups for their own use. These codes are not distributed, but may be passed from professor-to-student or student-to-student over several years. They form the primordial ocean of applications from which community and ISV codes emerge. Government research grants pay for the development of most personal codes. This paper reports on the nature and performance of this class of codes. Over the last year, I have looked at over two dozen personal codes from more than a dozen research institutions. The codes cover a variety of scientific fields, including astronomy, atmospheric sciences, bioinformatics, biology, chemistry, geology, and physics. The sources range from a few hundred lines to more than ten thousand lines, and are written in Fortran, Fortran 90, C, and C++. For the most part, the codes are modular, documented, and written in a clear, straightforward manner. They do not use complex language features, advanced data structures, programming tricks, or libraries. I had little trouble understanding what the codes did or how data structures were used. Most came with a makefile. Surprisingly, only one of the applications is parallel. All developers have access to parallel machines, so availability is not an issue. Several tried to parallelize their applications, but stopped after encountering difficulties. 
Lack of education and a perception that parallelism is difficult prevented most from trying. I parallelized several of the codes using OpenMP, and did not judge any of the codes as difficult to parallelize. Even more surprising than the lack of parallelism is the inefficiency of the codes. I was able to get large improvements in performance in a matter of a few days applying simple optimization techniques. Table 1 lists ten representative codes [names and affiliation are omitted to preserve anonymity]. Improvements on one processor range from 2x to 15.5x with a simple average of 4.75x. I did not use sophisticated performance tools or drill deep into the program's execution character as one would do when tuning ISV or community codes. Using only a profiler and source line timers, I identified inefficient sections of code and improved their performance by inspection. The changes were at a high level. I am sure there is another factor of 2 or 3 in each code, and more if the codes are parallelized. The study’s results show that personal scientific codes are running many times slower than they should and that the problem is pervasive. Computational scientists are not sloppy programmers; however, few are trained in the art of computer programming or code optimization. I found that most have a working knowledge of some programming language and standard software engineering practices; but they do not know, or think about, how to make their programs run faster. They simply do not know the standard techniques used to make codes run faster. In fact, they do not even perceive that such techniques exist. The case studies described in this paper show that applying simple, well known techniques can significantly increase the performance of personal codes. It is important that the scientific community and the Government agencies that support scientific research find ways to better educate academic scientific programmers. The inefficiency of their codes is so bad that it is retarding both the quality and progress of scientific research. # cacheperformance redundantoperations loopstructures performanceimprovement 1 x x 15.5 2 x 2.8 3 x x 2.5 4 x 2.1 5 x x 2.0 6 x 5.0 7 x 5.8 8 x 6.3 9 2.2 10 x x 3.3 Table 1 — Area of improvement and performance gains of 10 codes The remainder of the paper is organized as follows: sections 2, 3, and 4 discuss the three most common sources of inefficiencies in the codes studied. These are cache performance, redundant operations, and loop structures. Each section includes several examples. The last section summaries the work and suggests a possible solution to the issues raised. Optimizing cache performance Commodity microprocessor systems use caches to increase memory bandwidth and reduce memory latencies. Typical latencies from processor to L1, L2, local, and remote memory are 3, 10, 50, and 200 cycles, respectively. Moreover, bandwidth falls off dramatically as memory distances increase. Programs that do not use cache effectively run many times slower than programs that do. When optimizing for cache, the biggest performance gains are achieved by accessing data in cache order and reusing data to amortize the overhead of cache misses. Secondary considerations are prefetching, associativity, and replacement; however, the understanding and analysis required to optimize for the latter are probably beyond the capabilities of the non-expert. Much can be gained simply by accessing data in the correct order and maximizing data reuse. 
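    As a generic illustration (not taken from any of the ten codes): C stores arrays in row-major order, so the loop over the last subscript should be innermost. The two functions below compute the same sum, but only the first touches memory at stride 1.
      /* Generic example of access order for a row-major C array; not from the paper. */
      #include <stdio.h>
      #define N 1024
      static double a[N][N];

      double sum_row_order(void)        /* cache-friendly: stride-1 accesses */
      {
          double s = 0.0;
          for (int i = 0; i < N; i++)
              for (int j = 0; j < N; j++)
                  s += a[i][j];
          return s;
      }

      double sum_column_order(void)     /* same result, but stride-N accesses */
      {
          double s = 0.0;
          for (int j = 0; j < N; j++)
              for (int i = 0; i < N; i++)
                  s += a[i][j];
          return s;
      }

      int main(void)
      {
          printf("%f %f\n", sum_row_order(), sum_column_order());
          return 0;
      }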
6 out of the 10 codes studied here benefited from such high level optimizations. Array Accesses The most important cache optimization is the most basic: accessing Fortran array elements in column order and C array elements in row order. Four of the ten codes—1, 2, 4, and 10—got it wrong. Compilers will restructure nested loops to optimize cache performance, but may not do so if the loop structure is too complex, or the loop body includes conditionals, complex addressing, or function calls. In code 1, the compiler failed to invert a key loop because of complex addressing do I = 0, 1010, delta_x IM = I - delta_x IP = I + delta_x do J = 5, 995, delta_x JM = J - delta_x JP = J + delta_x T1 = CA1(IP, J) + CA1(I, JP) T2 = CA1(IM, J) + CA1(I, JM) S1 = T1 + T2 - 4 * CA1(I, J) CA(I, J) = CA1(I, J) + D * S1 end do end do In code 2, the culprit is conditionals do I = 1, N do J = 1, N If (IFLAG(I,J) .EQ. 0) then T1 = Value(I, J-1) T2 = Value(I-1, J) T3 = Value(I, J) T4 = Value(I+1, J) T5 = Value(I, J+1) Value(I,J) = 0.25 * (T1 + T2 + T5 + T4) Delta = ABS(T3 - Value(I,J)) If (Delta .GT. MaxDelta) MaxDelta = Delta endif enddo enddo I fixed both programs by inverting the loops by hand. Code 10 has three-dimensional arrays and triply nested loops. The structure of the most computationally intensive loops is too complex to invert automatically or by hand. The only practical solution is to transpose the arrays so that the dimension accessed by the innermost loop is in cache order. The arrays can be transposed at construction or prior to entering a computationally intensive section of code. The former requires all array references to be modified, while the latter is cost effective only if the cost of the transpose is amortized over many accesses. I used the second approach to optimize code 10. Code 5 has four-dimensional arrays and loops are nested four deep. For all of the reasons cited above the compiler is not able to restructure three key loops. Assume C arrays and let the four dimensions of the arrays be i, j, k, and l. In the original code, the index structure of the three loops is L1: for i L2: for i L3: for i for l for l for j for k for j for k for j for k for l So only L3 accesses array elements in cache order. L1 is a very complex loop—much too complex to invert. I brought the loop into cache alignment by transposing the second and fourth dimensions of the arrays. Since the code uses a macro to compute all array indexes, I effected the transpose at construction and changed the macro appropriately. The dimensions of the new arrays are now: i, l, k, and j. L3 is a simple loop and easily inverted. L2 has a loop-carried scalar dependence in k. By promoting the scalar name that carries the dependence to an array, I was able to invert the third and fourth subloops aligning the loop with cache. Code 5 is by far the most difficult of the four codes to optimize for array accesses; but the knowledge required to fix the problems is no more than that required for the other codes. I would judge this code at the limits of, but not beyond, the capabilities of appropriately trained computational scientists. Array Strides When a cache miss occurs, a line (64 bytes) rather than just one word is loaded into the cache. If data is accessed stride 1, than the cost of the miss is amortized over 8 words. Any stride other than one reduces the cost savings. Two of the ten codes studied suffered from non-unit strides. The codes represent two important classes of "strided" codes. 
Code 1 employs a multi-grid algorithm to reduce time to convergence. The grids are every tenth, fifth, second, and unit element. Since time to convergence is inversely proportional to the distance between elements, coarse grids converge quickly providing good starting values for finer grids. The better starting values further reduce the time to convergence. The downside is that grids of every nth element, n > 1, introduce non-unit strides into the computation. In the original code, much of the savings of the multi-grid algorithm were lost due to this problem. I eliminated the problem by compressing (copying) coarse grids into continuous memory, and rewriting the computation as a function of the compressed grid. On convergence, I copied the final values of the compressed grid back to the original grid. The savings gained from unit stride access of the compressed grid more than paid for the cost of copying. Using compressed grids, the loop from code 1 included in the previous section becomes do j = 1, GZ do i = 1, GZ T1 = CA(i+0, j-1) + CA(i-1, j+0) T4 = CA1(i+1, j+0) + CA1(i+0, j+1) S1 = T1 + T4 - 4 * CA1(i+0, j+0) CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1 enddo enddo where CA and CA1 are compressed arrays of size GZ. Code 7 traverses a list of objects selecting objects for later processing. The labels of the selected objects are stored in an array. The selection step has unit stride, but the processing steps have irregular stride. A fix is to save the parameters of the selected objects in temporary arrays as they are selected, and pass the temporary arrays to the processing functions. The fix is practical if the same parameters are used in selection as in processing, or if processing comprises a series of distinct steps which use overlapping subsets of the parameters. Both conditions are true for code 7, so I achieved significant improvement by copying parameters to temporary arrays during selection. Data reuse In the previous sections, we optimized for spatial locality. It is also important to optimize for temporal locality. Once read, a datum should be used as much as possible before it is forced from cache. Loop fusion and loop unrolling are two techniques that increase temporal locality. Unfortunately, both techniques increase register pressure—as loop bodies become larger, the number of registers required to hold temporary values grows. Once register spilling occurs, any gains evaporate quickly. For multiprocessors with small register sets or small caches, the sweet spot can be very small. In the ten codes presented here, I found no opportunities for loop fusion and only two opportunities for loop unrolling (codes 1 and 3). In code 1, unrolling the outer and inner loop one iteration increases the number of result values computed by the loop body from 1 to 4, do J = 1, GZ-2, 2 do I = 1, GZ-2, 2 T1 = CA1(i+0, j-1) + CA1(i-1, j+0) T2 = CA1(i+1, j-1) + CA1(i+0, j+0) T3 = CA1(i+0, j+0) + CA1(i-1, j+1) T4 = CA1(i+1, j+0) + CA1(i+0, j+1) T5 = CA1(i+2, j+0) + CA1(i+1, j+1) T6 = CA1(i+1, j+1) + CA1(i+0, j+2) T7 = CA1(i+2, j+1) + CA1(i+1, j+2) S1 = T1 + T4 - 4 * CA1(i+0, j+0) S2 = T2 + T5 - 4 * CA1(i+1, j+0) S3 = T3 + T6 - 4 * CA1(i+0, j+1) S4 = T4 + T7 - 4 * CA1(i+1, j+1) CA(i+0, j+0) = CA1(i+0, j+0) + DD * S1 CA(i+1, j+0) = CA1(i+1, j+0) + DD * S2 CA(i+0, j+1) = CA1(i+0, j+1) + DD * S3 CA(i+1, j+1) = CA1(i+1, j+1) + DD * S4 enddo enddo The loop body executes 12 reads, whereas as the rolled loop shown in the previous section executes 20 reads to compute the same four values. 
In code 3, two loops are unrolled 8 times and one loop is unrolled 4 times. Here is the before

for (k = 0; k < NK[u]; k++) {
    sum = 0.0;
    for (y = 0; y < NY; y++) {
        sum += W[y][u][k] * delta[y];
    }
    backprop[i++] = sum;
}

and after code

for (k = 0; k < KK - 8; k += 8) {
    sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0;
    sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0;
    for (y = 0; y < NY; y++) {
        sum0 += W[y][0][k+0] * delta[y];
        sum1 += W[y][0][k+1] * delta[y];
        sum2 += W[y][0][k+2] * delta[y];
        sum3 += W[y][0][k+3] * delta[y];
        sum4 += W[y][0][k+4] * delta[y];
        sum5 += W[y][0][k+5] * delta[y];
        sum6 += W[y][0][k+6] * delta[y];
        sum7 += W[y][0][k+7] * delta[y];
    }
    backprop[k+0] = sum0;
    backprop[k+1] = sum1;
    backprop[k+2] = sum2;
    backprop[k+3] = sum3;
    backprop[k+4] = sum4;
    backprop[k+5] = sum5;
    backprop[k+6] = sum6;
    backprop[k+7] = sum7;
}

for one of the loops unrolled 8 times.

Optimizing for temporal locality is the most difficult optimization considered in this paper. The concepts are not difficult, but the sweet spot is small. Identifying where the program can benefit from loop unrolling or loop fusion is not trivial. Moreover, it takes some effort to get it right. Still, educating scientific programmers about temporal locality and teaching them how to optimize for it will pay dividends.

Reducing instruction count

Execution time is a function of instruction count. Reduce the count and you usually reduce the time. The best solution is to use a more efficient algorithm; that is, an algorithm whose order of complexity is smaller, that converges quicker, or is more accurate. Optimizing source code without changing the algorithm yields smaller, but still significant, gains. This paper considers only the latter because the intent is to study how much better codes can run if written by programmers schooled in basic code optimization techniques.

The ten codes studied benefited from three types of "instruction reducing" optimizations. The two most prevalent were hoisting invariant memory and data operations out of inner loops. The third was eliminating unnecessary data copying. The nature of these inefficiencies is language dependent.

Memory operations

The semantics of C make it difficult for the compiler to determine all the invariant memory operations in a loop. The problem is particularly acute for loops in functions since the compiler may not know the values of the function's parameters at every call site when compiling the function. Most compilers support pragmas to help resolve ambiguities; however, these pragmas are not comprehensive and there is no standard syntax. To guarantee that invariant memory operations are not executed repetitively, the user has little choice but to hoist the operations by hand. The problem is not as severe in Fortran programs because, in the absence of equivalence statements, it is a violation of the language's semantics for two names to share memory.

Codes 3 and 5 are C programs. In both cases, the compiler did not hoist all invariant memory operations from inner loops. Consider the following loop from code 3:

for (y = 0; y < NY; y++) {
    i = 0;
    for (u = 0; u < NU; u++) {
        for (k = 0; k < NK[u]; k++) {
            dW[y][u][k] += delta[y] * I1[i++];
        }
    }
}

Since dW[y][u] can point to the same memory space as delta for one or more values of y and u, assignment to dW[y][u][k] may change the value of delta[y].
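As an aside, and not a change made in this study: C99 offers another way to resolve exactly this kind of ambiguity. The restrict qualifier lets the programmer promise the compiler that a pointer's data is not reached through any other name, after which the compiler is free to hoist delta[y] itself. A hedged sketch of what such a declaration might look like for this loop is shown below; the wrapper function and its name are hypothetical. The hand-hoisted rewrite actually used in code 3 follows.

/* Hypothetical wrapper, not the paper's code: restrict on delta and I1
   asserts that the data they point to is accessed only through these
   pointers inside the function, so the compiler may keep delta[y] in a
   register across the inner loops. */
void accumulate_dW(int NY, int NU, const int *NK, double ***dW,
                   const double * restrict delta,
                   const double * restrict I1)
{
    for (int y = 0; y < NY; y++) {
        int i = 0;
        for (int u = 0; u < NU; u++)
            for (int k = 0; k < NK[u]; k++)
                dW[y][u][k] += delta[y] * I1[i++];
    }
}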
In reality, dW and delta do not overlap in memory, so I rewrote the loop as

for (y = 0; y < NY; y++) {
    i = 0;
    Dy = delta[y];
    for (u = 0; u < NU; u++) {
        for (k = 0; k < NK[u]; k++) {
            dW[y][u][k] += Dy * I1[i++];
        }
    }
}

Failure to hoist invariant memory operations may be due to complex address calculations. If the compiler cannot determine that the address calculation is invariant, then it can hoist neither the calculation nor the associated memory operations. As noted above, code 5 uses a macro to address four-dimensional arrays:

#define MAT4D(a,q,i,j,k) (double *)((a)->data + (q)*(a)->strides[0] + (i)*(a)->strides[3] + (j)*(a)->strides[2] + (k)*(a)->strides[1])

The macro is too complex for the compiler to understand, and so it does not identify any subexpressions as loop invariant. The simplest way to eliminate the address calculation from the innermost loop (over i) is to define

a0 = MAT4D(a,q,0,j,k)

before the loop and then replace all instances of *MAT4D(a,q,i,j,k) in the loop with a0[i].

A similar problem appears in code 6, a Fortran program. The key loop in this program is

do n1 = 1, nh
  nx1 = (n1 - 1) / nz + 1
  nz1 = n1 - nz * (nx1 - 1)
  do n2 = 1, nh
    nx2 = (n2 - 1) / nz + 1
    nz2 = n2 - nz * (nx2 - 1)
    ndx = nx2 - nx1
    ndy = nz2 - nz1
    gxx = grn(1,ndx,ndy)
    gyy = grn(2,ndx,ndy)
    gxy = grn(3,ndx,ndy)
    balance(n1,1) = balance(n1,1) + (force(n2,1) * gxx + force(n2,2) * gxy) * h1
    balance(n1,2) = balance(n1,2) + (force(n2,1) * gxy + force(n2,2) * gyy) * h1
  end do
end do

The programmer has written this loop well—there are no loop invariant operations with respect to n1 and n2. However, the loop resides within an iterative loop over time, and the index calculations are independent with respect to time. Trading space for time, I precomputed the index values prior to entering the time loop and stored the values in two arrays. I then replaced the index calculations with reads of the arrays.

Data operations

Ways to reduce data operations can appear in many forms. Implementing a more efficient algorithm produces the biggest gains. The closest I came to an algorithm change was in code 4. This code computes the inner product of K-vectors A(i) and B(j), 0 <= i < N, 0 <= j < M, for most values of i and j. Since the program computes most of the N x M possible inner products, it is more efficient to compute all the inner products in one triply-nested loop rather than one at a time when needed. The savings accrue from reading A(i) once for all B(j) vectors and from loop unrolling.

for (i = 0; i < N; i += 8) {
    for (j = 0; j < M; j++) {
        sum0 = 0.0; sum1 = 0.0; sum2 = 0.0; sum3 = 0.0;
        sum4 = 0.0; sum5 = 0.0; sum6 = 0.0; sum7 = 0.0;
        for (k = 0; k < K; k++) {
            sum0 += A[i+0][k] * B[j][k];
            sum1 += A[i+1][k] * B[j][k];
            sum2 += A[i+2][k] * B[j][k];
            sum3 += A[i+3][k] * B[j][k];
            sum4 += A[i+4][k] * B[j][k];
            sum5 += A[i+5][k] * B[j][k];
            sum6 += A[i+6][k] * B[j][k];
            sum7 += A[i+7][k] * B[j][k];
        }
        C[i+0][j] = sum0;
        C[i+1][j] = sum1;
        C[i+2][j] = sum2;
        C[i+3][j] = sum3;
        C[i+4][j] = sum4;
        C[i+5][j] = sum5;
        C[i+6][j] = sum6;
        C[i+7][j] = sum7;
    }
}

This change requires knowledge of a typical run; i.e., that most inner products are computed. The reasons for the change, however, derive from basic optimization concepts. It is the type of change easily made at development time by a knowledgeable programmer. In code 5, we have the data version of the index optimization in code 6.
Here a very expensive computation is a function of the loop indices and so cannot be hoisted out of the loop; however, the computation is invariant with respect to an outer iterative loop over time. We can compute its value for each iteration of the computation loop prior to entering the time loop and save the values in an array. The increase in memory required to store the values is small in comparison to the large savings in time.

The main loop in code 8 is doubly nested. The inner loop includes a series of guarded computations; some are a function of the inner loop index but not the outer loop index, while others are a function of the outer loop index but not the inner loop index:

for (j = 0; j < N; j++) {
    for (i = 0; i < M; i++) {
        r = i * hrmax;
        R = A[j];
        temp = (PRM[3] == 0.0) ? 1.0 : pow(r, PRM[3]);
        high = temp * kcoeff * B[j] * PRM[2] * PRM[4];
        low = high * PRM[6] * PRM[6] / (1.0 + pow(PRM[4] * PRM[6], 2.0));
        kap = (R > PRM[6]) ?
              high * R * R / (1.0 + pow(PRM[4]*r, 2.0)) :
              low * pow(R/PRM[6], PRM[5]);
        < rest of loop omitted >
    }
}

Note that the value of temp is invariant to j. Thus, we can hoist the computation for temp out of the loop and save its values in an array.

for (i = 0; i < M; i++) {
    r = i * hrmax;
    TEMP[i] = pow(r, PRM[3]);
}

[N.B. – the case for PRM[3] = 0 is omitted and will be reintroduced later.]

We now hoist out of the inner loop the computations invariant to i. Since the conditional guarding the value of kap is invariant to i, it behooves us to hoist the computation out of the inner loop, thereby executing the guard once rather than M times. The final version of the code is

for (j = 0; j < N; j++) {
    R = rig[j] / 1000.;
    tmp1 = kcoeff * par[2] * beta[j] * par[4];
    tmp2 = 1.0 + (par[4] * par[4] * par[6] * par[6]);
    tmp3 = 1.0 + (par[4] * par[4] * R * R);
    tmp4 = par[6] * par[6] / tmp2;
    tmp5 = R * R / tmp3;
    tmp6 = pow(R / par[6], par[5]);
    if ((par[3] == 0.0) && (R > par[6])) {
        for (i = 1; i <= imax1; i++)
            KAP[i] = tmp1 * tmp5;
    }
    else if ((par[3] == 0.0) && (R <= par[6])) {
        for (i = 1; i <= imax1; i++)
            KAP[i] = tmp1 * tmp4 * tmp6;
    }
    else if ((par[3] != 0.0) && (R > par[6])) {
        for (i = 1; i <= imax1; i++)
            KAP[i] = tmp1 * TEMP[i] * tmp5;
    }
    else if ((par[3] != 0.0) && (R <= par[6])) {
        for (i = 1; i <= imax1; i++)
            KAP[i] = tmp1 * TEMP[i] * tmp4 * tmp6;
    }
    for (i = 0; i < M; i++) {
        kap = KAP[i];
        r = i * hrmax;
        < rest of loop omitted >
    }
}

Maybe not the prettiest piece of code, but certainly much more efficient than the original loop.

Copy operations

Several programs unnecessarily copy data from one data structure to another. This problem occurs in both Fortran and C programs, although it manifests itself differently in the two languages.

Code 1 declares two arrays—one for old values and one for new values. At the end of each iteration, the array of new values is copied to the array of old values to reset the data structures for the next iteration. This problem occurs in Fortran programs not included in this study and in both Fortran 77 and Fortran 90 code. Introducing pointers to the arrays and swapping pointer values is an obvious way to eliminate the copying; but pointers are not a feature that many Fortran programmers know well or are comfortable using. An easy solution not involving pointers is to extend the dimension of the value array by 1 and use the last dimension to differentiate between arrays at different times. For example, if the data space is N x N, declare the array (N, N, 2).
Then store the problem's initial values in (_, _, 2) and define the scalar names new = 2 and old = 1. At the start of each iteration, swap old and new to reset the arrays.

The old–new copy problem did not appear in any C program. In programs that had new and old values, the code swapped pointers to reset data structures. Where unnecessary copying did occur was in structure assignment and parameter passing. Structures in C are handled much like scalars. Assignment causes the data space of the right-hand name to be copied to the data space of the left-hand name. Similarly, when a structure is passed to a function, the data space of the actual parameter is copied to the data space of the formal parameter. If the structure is large and the assignment or function call is in an inner loop, then copying costs can grow quite large. While none of the ten programs considered here manifested this problem, it did occur in programs not included in the study. A simple fix is always to refer to structures via pointers.

Optimizing loop structures

Since scientific programs spend almost all their time in loops, efficient loops are the key to good performance. Conditionals, function calls, little instruction level parallelism, and large numbers of temporary values make it difficult for the compiler to generate tightly packed, highly efficient code. Conditionals and function calls introduce jumps that disrupt code flow. Users should eliminate conditionals, or isolate them in their own loops, as much as possible. Often logical expressions can be substituted for if-then-else statements. For example, code 2 includes the following snippet:

MaxDelta = 0.0
do J = 1, N
  do I = 1, M
    < code omitted >
    Delta = abs(OldValue - NewValue)
    if (Delta > MaxDelta) MaxDelta = Delta
  enddo
enddo
if (MaxDelta .gt. 0.001) goto 200

Since the only use of MaxDelta is to control the jump to 200, and all that matters is whether or not it is greater than 0.001, I made MaxDelta a boolean and rewrote the snippet as

MaxDelta = .false.
do J = 1, N
  do I = 1, M
    < code omitted >
    Delta = abs(OldValue - NewValue)
    MaxDelta = MaxDelta .or. (Delta .gt. 0.001)
  enddo
enddo
if (MaxDelta) goto 200

thereby eliminating the conditional expression from the inner loop.

A microprocessor can execute many instructions per cycle. Typically, it can execute one or more memory, floating point, integer, and jump operations. To be executed simultaneously, the operations must be independent. Thick loops tend to have more instruction level parallelism than thin loops. Moreover, they reduce memory traffic by maximizing data reuse. Loop unrolling and loop fusion are two techniques to increase the size of loop bodies. Several of the codes studied benefitted from loop unrolling, but none benefitted from loop fusion. This observation is not too surprising since it is the general tendency of programmers to write thick loops. As loops become thicker, the number of temporary values grows, increasing register pressure. If registers spill, then memory traffic increases and code flow is disrupted. A thick loop with many temporary values may execute slower than an equivalent series of thin loops. The biggest gain will be achieved if the thick loop can be split into a series of independent loops, eliminating the need to write and read temporary arrays.
I found such an occasion in code 10, where I split the loop

do i = 1, n
  do j = 1, m
    A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
    B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
    A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
    B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
    C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
    D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
    C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
    D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
  end do
end do

into two disjoint loops

do i = 1, n
  do j = 1, m
    A24(j,i) = S24(j,i) * T24(j,i) + S25(j,i) * U25(j,i)
    B24(j,i) = S24(j,i) * T25(j,i) + S25(j,i) * U24(j,i)
    A25(j,i) = S24(j,i) * C24(j,i) + S25(j,i) * V24(j,i)
    B25(j,i) = S24(j,i) * U25(j,i) + S25(j,i) * V25(j,i)
  end do
end do

do i = 1, n
  do j = 1, m
    C24(j,i) = S26(j,i) * T26(j,i) + S27(j,i) * U26(j,i)
    D24(j,i) = S26(j,i) * T27(j,i) + S27(j,i) * V26(j,i)
    C25(j,i) = S27(j,i) * S28(j,i) + S26(j,i) * U28(j,i)
    D25(j,i) = S27(j,i) * T28(j,i) + S26(j,i) * V28(j,i)
  end do
end do

Conclusions

Over the course of the last year, I have had the opportunity to work with over two dozen academic scientific programmers at leading research universities. Their research interests span a broad range of scientific fields. Except for two programs that relied almost exclusively on library routines (matrix multiply and fast Fourier transform), I was able to improve significantly the single processor performance of all codes. Improvements range from 2x to 15.5x, with a simple average of 4.75x. Changes to the source code were at a very high level. I did not use sophisticated techniques or programming tools to discover inefficiencies or effect the changes. Only one code was parallel, despite the availability of parallel systems to all developers.

Clearly, we have a problem—personal scientific research codes are highly inefficient and not running in parallel. The developers are unaware of simple optimization techniques to make programs run faster. They lack education in the art of code optimization and parallel programming. I do not believe we can fix the problem by publishing additional books or training manuals. To date, the developers in question have not studied the books or manuals available, and are unlikely to do so in the future. Short courses are a possible solution, but I believe they are too concentrated to be of much use. The general concepts can be taught in a three or four day course, but that is not enough time for students to practice what they learn and acquire the experience to apply and extend the concepts to their codes. Practice is the key to becoming proficient at optimization.

I recommend that graduate students be required to take a semester-length course in optimization and parallel programming. We would never give someone access to state-of-the-art scientific equipment costing hundreds of thousands of dollars without first requiring them to demonstrate that they know how to use the equipment. Yet the criterion for time on state-of-the-art supercomputers is at most an interesting project. Requestors are never asked to demonstrate that they know how to use the system, or can use the system effectively. A semester course would teach them the required skills. Government agencies that fund academic scientific research pay for most of the computer systems supporting scientific research as well as the development of most personal scientific codes.
These agencies should require graduate schools to offer a course in optimization and parallel programming as a condition of funding.

About the Author

John Feo received his Ph.D. in Computer Science from The University of Texas at Austin in 1986. After graduate school, Dr. Feo worked at Lawrence Livermore National Laboratory, where he was the Group Leader of the Computer Research Group and principal investigator of the Sisal Language Project. In 1997, Dr. Feo joined Tera Computer Company, where he was project manager for the MTA and oversaw the programming and evaluation of the MTA at the San Diego Supercomputer Center. In 2000, Dr. Feo joined Sun Microsystems as an HPC application specialist. He works with university research groups to optimize and parallelize scientific codes. Dr. Feo has published over two dozen research articles in the areas of parallel programming, parallel programming languages, and application performance.

    Read the article

  • Non-standard installation (installing Linux from Linux)

    - by Evan Plaice
So, here's my setup. I have one partition with the newest version installed, a second partition with an older version installed (as a backup just in case), a swap partition that both share, and a boot partition so the bootloader doesn't need to be set up after each upgrade. Partitions:

sda1 ext3 /boot
sda2 ext4 / (current version)
sda3 ext4 / (old version)
sda4 swap /swap
sda5 ntfs (contains folders symbolically linked to /home on /)

So far it has been a very good setup. I can create new boot loaders without screwing it up, and adding my personal files into a new install is as simple as creating some symbolic links (the partition is NTFS in case I need to load Windows on the system again). Here's the issue. I'd like to be able to drop the install into /distro on the current version and install a new version on / on the old version, effectively replacing/upgrading it. The goal is to be able to just swap out new versions as they are released while maintaining redundancy in case I don't like the update. So far I have: downloaded the install.iso, created a folder in /distro, copied the install.iso into /distro, and extracted vmlinuz and initrd.lz into /distro. Then I modified /boot/grub/menu.lst with the following entry:

title Install Linux
root (hd0,1)
kernel /distro/vmlinuz
initrd /distro/initrd.lz

vmlinuz loads perfectly but it says it can't find initrd.lz on boot. I have also tried to uncompress the image with:

unlzma < initrd.lz > initrd.img

and updated the menu.lst file to match, but that doesn't work either. I'm assuming that vmlinuz (the Linux kernel) loads, fires up the virtual filesystem by creating a ramdisk (initrd), mounts the iso, and launches the installer. Am I missing something here?

Update: First, I wanted to say that the accepted answer would have been the best option if I was doing a normal Ubuntu install. Unfortunately, I was installing Linux Mint (which lacks the script needed to make debootstrap work). So the problem with the above approach was that I was missing the kernel command-line options that vmlinuz needed to boot into LiveCD mode. By looking in the /boot/grub/grub.cfg file I found what I was missing. Although this method will work, it requires that the installation files reside on their own partition. I took the easy route and used unetbootin to drop the LiveCD on a USB drive and booted from that.

Like I said before, debootstrap would have been the ideal solution here. Even though I couldn't use it, I wrote down the steps it would've taken to use it.

Step One: Format sda3 (the partition with the old copy of Linux that's being overwritten). I used gparted to format it as ext4 from within the current Linux install. How this is done varies based on what tools you prefer to use.

Step Two: Mount the newly formatted partition (we'll call the mount ubuntu for simplicity).

sudo mkdir /mnt/ubuntu
sudo mount -o loop /dev/sda3 /mnt/ubuntu

Step Three: Get debootstrap.

sudo apt-get install debootstrap

Step Four: Mount the install disk (replace ubuntu.iso with the name of your install disk).

sudo mkdir /media/cdrom
sudo mount -o loop ~/ubuntu.iso /media/cdrom

Step Five: Install the OS using debootstrap (replace feisty with the version you're installing and amd64 with your processor's architecture).

sudo debootstrap --arch amd64 feisty /mnt/ubuntu file:/media/cdrom

The settings here vary.
While I loaded debootstrap using an install iso, you can also have debootstrap automatically download and install it from a repository link (while most of these repositories contain Debian versions, I'm still not clear as to whether Ubuntu has similar repositories). Here is a list of the Debian package repositories and their mirrors. This is how you'd deploy debootstrap if you were doing it directly from a repository:

sudo debootstrap --arch amd64 squeeze /mnt/debian http://ftp.us.debian.org/debian

Here's the link that I primarily used to figure this out.

    Read the article

  • MacGyver Moments

    - by Geoff N. Hiten
Denny Cherry tagged me to write about my best MacGyver Moment. Usually I ignore blogosphere fluff and just use this space to write what I think is important. However, #MVP10 just ended and I have a stronger sense of community. Besides, where else would I mention that my second best MacGyver moment was making a BIOS jumper out of a soda can. Aluminum is conductive and I didn't have any real jumpers lying around.

My best moment is probably my entire home computer network. Every system but one is hand-built, usually cobbled together out of spare parts and 'adapted' from its original purpose.

My Primary Domain Controller is a Dell 2300. The Service Tag indicates it was shipped to the original owner in 1999. The box has a PERC/1 RAID controller. I acquired this from a previous employer for $50. It runs Windows Server 2003 Enterprise Edition. It does DNS, DHCP, and RADIUS services as a bonus. RADIUS authentication is used for VPN and Wireless access. It is nice to sign in once and be done with it.

The Secondary Domain Controller is an old desktop: a dual P-III 933 with some extra drives.

My VPN box is a P-II 250 with 384MB of RAM and a 21 GB hard drive. I did a P-to-V to my Hyper-V box a year or so ago and retired the hardware again. Dynamic DNS lets me connect no matter how often Comcast shuffles my IP.

The Hyper-V box is a desktop system with 8GB RAM and an AMD Athlon 5000+ processor. It cost me less than $500 to put together nearly two years ago. I reasoned that if Vista and Windows 2008 were the same code, then Vista 64-bit certification meant the drivers for Vista would load into Windows 2008. Turns out I was right. Later I added three 1TB drives but wasn't too happy with how that turned out. I recovered two of the drives yesterday and am building an iSCSI storage unit. (Much thanks to StarWind. Great product.) I am using an old AMD 1.1GHz box with 1.5 GB RAM (cobbled together from three old PCs) as my storage server. The Hyper-V box is slated for an OS rebuild to 2008 R2 once I get the storage system worked out, maybe in a week or two.

A couple of DLink Gigabit switches tie everything together. Add in the Vonage box, the three PCs, the Wireless-N Access Point, the two notebooks and the Xbox, and you have gone from MacGyver to darn near Rube Goldberg. The only thing I really spend money on is power supplies and fans. I buy top-of-the-line for both. I even pull and crimp my own cables.

Oh, and if my kids hose up a PC, I have all of their data on a server elsewhere. Every PC and laptop is pretty much interchangeable for email and basic workstation tasks. That helps a lot too.

Of course I will tag SQLVariant.

    Read the article

  • Buy iPads In India From eZone, Reliance iStores [Chennai, Bangalore, Delhi, Mumbai]

    - by Gopinath
    Close to an year wait for Apple iPad in India is over. Now everyone can buy a genuine iPad with manufacturers’ warranty from dozens of retail outlets set up by Future Bazar’s eZone and Reliance iStore. This puts an end to the grey market that was importing iPads through illegal channels, selling them at staggering high prices and with no warranty. iPad Retail Price at eZone & Outlet Address The iPad page on eZone’s website has price details of various models and they range from Rs.27,900/- to Rs.44,000/-. iPad 16 GB WiFi  – Rs. 27900.00 iPad 32 GB WiFi  – Rs. 32900.00 iPad 64 GB WiFi  – Rs. 37900.00 iPad 16 GB WiFi  + 3G – Rs. 34900.00 iPad 32 GB WiFi  + 3G – Rs. 39900.00 iPad 64 GB WiFi  + 3G – Rs. 44900.00 Here is the list of eZone stores selling iPads Chennai Stores eZone :: CHENNAI-GANDHI SQUARE Gandhi Square, ( G2),No. 46, Old Mahabalipuram Road, Kandanchavadi, Chennai ( Before Lifeline Hospital) – 600096. Phone : 24967771/7 eZone :: CHENNAI-MYLAPORE Grand Terrace, Old no. 94, new door no. 162, Luz Church Road, Mylapore, Chennai – . Tamil Nadu. Phone : 24987867/68. Mumbai Stores eZone :: MUMBAI-GOREGAON Shop No-S-23, 2nd Floor, Oberoi Mall Off Western Express Highway , Goregaon(E) , Mumbai – 400063, Phone: 28410011/40214771. eZone :: MUMBAI-POWAI-HAIKO MALL Hailko Mall, Level 2, Central Avenue, Hiranandani Garden, Powai, Mumbai, 400076. Phone: 25717355/56. eZone :: EZ-Sobo Central C wing,SOBO Central, Next to Tardoe AC Market, Pandit Madan Mohan Malviya Road, Mumbai – 400034. Phone : 022-30089344. Bangalore Stores eZone :: Koramangala (Bnglr) Regent Insignia, Ground Floor,# 414, 100 Ft Road, Koramangala, Bangalore – 560034 Phone : 080-25520241/242/243. eZone :: BANGALORE-INDIRA NAGAR No.62, Asha Pearl,100 Feet Road, Opp.AXIS Bank.Indiranagar, Bangalore – 560038 Phone : 25216857/6855/6856. eZone :: BANGALORE-PASADENA pasadena’ (Ground floor),18/1.(old number 125/a),10th main,Ashoka pillar road,Jaynagar 1st block,Bangalore – 560 011. Phone : 26577527. Delhi Stores eZone :: NEW DELHI-PUSA ROAD Ground/Lower Ground Floor, Plot # 26, Pusa Road, Adjacent to Karol Bagh Metro Station, Karol Bagh, New Delhi – 110005. Phone :28757040/41. For more details check eZone iPad Product Page iPads at Reliance iStore Reliance iStores are exclusive outlets for selling Apple products in India. All the models of iPad are available at Reliance iStore and the price details are not available on their websites. You may walk into any of the iStore close by your locality or call them to get the details. To locate the stores close by your locality please check store locator page on iStore Website. Do you know any other retail stores selling iPads in India? This article titled,Buy iPads In India From eZone, Reliance iStores [Chennai, Bangalore, Delhi, Mumbai], was originally published at Tech Dreams. Grab our rss feed or fan us on Facebook to get updates from us.

    Read the article

  • The Latest News About SAP

    - by jmorourke
    Like many professionals, I get a lot of my news from Google e-mail alerts that I’ve set up to keep track of key industry trends and competitive news.  In the past few weeks, I’ve been getting a number of news alerts about SAP.  Below are a few recent examples: Warm weather cuts short US maple sugaring season – by Toby Talbot, AP MILWAUKEE – Temperatures in Wisconsin had already hit the high 60s when Gretchen Grape and her family began tapping their 850 maple trees. They had waited for the state's ceremonial tapping to kick off the maple sugaring season. It was moved up five days, but that didn't make much difference. For Grape, the typically month-long season ended nine days later. The SAP had stopped flowing in a record-setting heat wave, and the 5-quart collection bags that in a good year fill in a day were still half-empty. Instead of their usual 300 gallons of syrup, her family had about 40. Maple syrup producers across the North have had their season cut short by unusually warm weather. While those with expensive, modern vacuum systems say they've been able to suck a decent amount of sap from their trees, producers like Grape, who still rely on traditional taps and buckets, have seen their year ruined. "It's frustrating," said the 69-year-old retiree from Holcombe, Wis. "You put in the same amount of work, equipment, investment, and then all of a sudden, boom, you have no SAP." Home & Garden: Too-Early Spring Means Sugaring Woes  - by Georgeanne Davis for The Free Press Over this past weekend, forsythia and daffodils were blooming in the southern parts of the state as temperatures climbed to 85 degrees, and trees began budding out, putting an end to this year's maple syrup production even as the state celebrated Maine Maple Sunday. Maple sugaring needs cold nights and warm days to induce SAP flows. Once the trees begin budding, SAP can still flow, but the SAP is bitter and has an off taste. Many farmers and dairymen count on sugaring for extra income, so the abbreviated season is a real financial loss for them, akin to the shortened shrimping season's effect on Maine lobstermen. SAP season comes to a sugary Sunday finale – Kennebec Journal, March 26th, 2012 Rebecca Manthey stood out in the rain at the entrance of Old Fort Western keeping watch over a cast iron kettle of boiling SAP hooked to a tripod over a wood fire.  Manthey and the rest of the Old Fort Western staff -- decked out in 18th-century attire -- joined sugar houses across the state in observance of Maine Maple Sunday. The annual event is sponsored by the Department of Agriculture and the Maine Maple Producers Association.  She said the rain hadn't kept people from coming to enjoy all the events at the fort surrounding the production of Maple syrup.  "In the 18th century, you would be boiling SAP in the woods, so I would be in the woods," Manthey explained to the families who circled around her. "People spent weeks and weeks in the woods. You don't want to cook it to fast or it would burn. When it looks like the right consistency then you send it (into the kitchen) to be made into sugar." Manthey said she enjoyed portraying an 18th-century woman, even in the rain, which didn't seem to bother visitors either. There was a steady stream of families touring the fort and enjoying the maple syrup demonstrations. I hope you enjoy these updates on SAP – Happy April Fool’s Day!

    Read the article

  • The SmartAssembly Rearchitecture

    - by Simon Cooper
    You may have noticed that not a lot has happened to SmartAssembly in the past few months. However, the team has been very busy behind the scenes working on an entirely new version of SmartAssembly. SmartAssembly 6.5 Over the past few releases of SmartAssembly, the team had come to the realisation that the current 'architecture' - grown organically, way before RedGate bought it, from a simple name obfuscator over the years into a full-featured obfuscator and assembly instrumentation tool - was simply not up to the task. Not for what we wanted to do with it at the time, and not what we have planned for the future. Not only was it not up to what we wanted it to do, but it was severely limiting our development capabilities; long-standing bugs in the root architecture that couldn't be fixed, some rather...interesting...design decisions, and convoluted logic that increased the complexity of any bugfix or new feature tenfold. So, we set out to fix this. Earlier this year, a new engine was written on which SmartAssembly would be based. Over the following few months, each feature was ported over to the new engine and extensively tested by our existing unit and integration tests. The engine was linked into the existing UI (no easy task, due to the tight coupling between the UI and old engine), and existing RedGate products were tested on the new SmartAssembly to ensure the new engine acted in the same way. The result is SmartAssembly 6.5. The risks of a rearchitecture Are there risks to rearchitecting a product like SmartAssembly? Of course. There was a lot of undocumented behaviour in the old engine, and as part of the rearchitecture we had to find this behaviour, define it, and document it. In the process we found some behaviour of the old engine that simply did not make sense; hence the changes in pruning & obfuscation behaviour in the release notes. All the special edge cases we had to find, document, and re-implement. There was a chance that these special cases would not be found until near the end of the project, when everything is functionally complete and interacting together. By that stage, it would be hard to go back and change anything without a whole lot of extra work, delaying the release by months. We always knew this was a possibility; our initial estimate of the time required was '4 months, ± 4 months'. And that was including various mitigation strategies to reduce the likelihood of these issues being found right at the end. Fortunately, this worst-case did not happen. However, the rearchitecture did produce some benefits. As well as numerous bug fixes that we could not fix any other way, we've also added logging that lets you find out exactly why a particular field or property wasn't pruned or obfuscated. There's a new command line interface, we've tested it with WP7.1 and Silverlight 5, and we've added a new option to error reporting to improve the performance of instrumented apps by ~10%, at the cost of inaccurate line numbers in reports. So? What differences will I see? Largely none. SmartAssembly 6.5 produces the same output as SmartAssembly 6.2. The performance of 6.5 will be much faster for some users, and generally the same as 6.2 for the remaining. If you've encountered a bug with previous versions of SmartAssembly, I encourage you to try 6.5, as it has most likely been fixed in the rearchitecture. If you encounter a bug with 6.5, please do tell us; we'll be doing another release quite soon, so we'll aim to fix any issues caused by 6.5 in that release. 
Most importantly, the new architecture finally allows us to implement some Big Things with SmartAssembly we've been planning for many months; these will fundamentally change how you build, release and monitor your application. Stay tuned for further updates!

    Read the article

  • MEB: Taking Incremental Backup using last successful backup

    - by Sagar Jauhari
Introduction

In MySQL Enterprise Backup v3.7.0 (MEB 3.7.0) a new option, '--incremental-base', was introduced. Using this option a user can take an incremental backup without specifying the '--start-lsn' option. A description of this option can be found here. Instead of '--start-lsn' the user can provide the location of the last full backup or incremental backup using the 'dir:' prefix. MEB would extract the end LSN of this backup from the mysql.backup_history table as well as the backup_variables.txt file (for verification) to use it as the start LSN of the incremental backup.

Because of popular demand, in MEB 3.7.1 the '--incremental-base' option has been extended further. The idea is to allow the user to take an incremental backup as easily as possible using the '--incremental-base' option. With the new option MEB queries the backup_history table for the last successful backup and uses its end LSN as the start LSN for the new incremental backup. It should be noted that the last successful backup is used irrespective of the location of the backup.

Details

A new prefix 'history:' has been introduced for the '--incremental-base' option, and currently the only permissible value is the string "last_backup". So using the new option an incremental backup can be taken with the following command:

$ mysqlbackup --incremental --incremental-backup-dir=/media/mysqlbackup-repo/ --incremental-base=history:last_backup backup

When MEB attempts to extract the end LSN of the last successful backup from the mysql.backup_history table, it also scans the corresponding backup destination for the old backup and tries to read the meta files at this backup destination. If a valid backup still exists at the backup destination and the meta files can be read, MEB compares the end LSN found in the mysql.backup_history table with the end LSN found in the backup meta files of the old backup. Assuming that the host MySQL server is alive and mysql.backup_history can be accessed by MEB, the behaviour of MEB with respect to verification of the old end LSN can be summarized as follows, where 'BD' is the backup destination of the last successful backup in the mysql.backup_history table and 'BHT' is the mysql.backup_history table:

if can_read_files_at_BD:
    if end_lsn_found_at_BD == end_lsn_of_last_backup_in_BHT:
        continue_with_backup()
    else:
        return_with_error()
else:
    continue_with_backup()

Advantages

Apart from ease of use, an important advantage of this option is that the user can do repeated incremental backups without changing the command line. This is possible using the '--with-timestamp' option along with this new option. For example, the following command

$ mysqlbackup --with-timestamp --incremental --incremental-backup-dir=/media/mysqlbackup-repo/ --incremental-base=history:last_backup backup

can be used to perform successive incremental backups in the directory /media/mysqlbackup-repo.

Limitations

The option '--incremental-base=history:last_backup':
- should not be used when the user takes different kinds of concurrent backups on the same MySQL server (say, different partial backups at multiple locations);
- should not be used after any temporary or experimental backups performed on the server (even if they were successful);
- needs to be used with caution, since any intermediate successful backup taken without the '--no-connection' option will be used as the base backup for the next incremental backup;
- will give an error if a valid backup exists at the location of the last successful backup but its end LSN differs from that of the last successful backup recorded in the backup_history table.
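If you want to see the information MEB is acting on, you can query the backup history yourself. The sketch below is illustrative only: the column names (end_lsn, backup_destination, exit_state, end_time) are assumptions based on the description above, so check the mysql.backup_history schema shipped with your MEB version before relying on them.

-- Illustrative only; verify column names against your mysql.backup_history table.
SELECT end_lsn, backup_destination, end_time
FROM mysql.backup_history
WHERE exit_state = 'SUCCESS'
ORDER BY end_time DESC
LIMIT 1;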

    Read the article

  • 5 Ways to Celebrate the Release of Internet Explorer 9

    - by David Wesst
The day has finally come: Microsoft has released a web browser that is awesome. On Monday night, Microsoft officially introduced the world to the latest addition to its product family: Internet Explorer 9. That makes March 14, 2011 (also known as Pi Day) the official birthday of Microsoft's rebirth in the world of web browsing. Just like any big event, you take some time to celebrate. Here are a few things that you can do to celebrate the return of Internet Explorer.

1. Download It

If you're not a big partier, that's fine. The one thing you can do (and definitely should) is download it and give it a shot. Sure, IE may have disappointed you in the past, but believe me when I say they really put the effort in this time. The absolute least you can do is give it a shot to see how it stands up against your favourite browser.

2. Get yourself an HTML5 Shirt

One of the coolest, if not the best, parts of IE9 being released is that it officially introduces HTML5 as a fully supported platform from Microsoft. IE9 supports a lot of what is already defined in the HTML5 technical spec, which really demonstrates Microsoft's support of the new standard. Since HTML5 is cool on the web, it means that it is cool to wear it too. Head over to html5shirt.com and get yourself, or your staff, or your whole family, an HTML5 shirt to show the real world that you are ready for the future of the web.

3. HTML5-ify Something

Okay, so maybe a shirt isn't enough for you. Maybe you need to start using HTML5 for real. If you have a blog, or a website, or anything out there on the web, celebrate IE9 by adding some HTML5 to your site. Whether that is updating old code, adding something new, or just changing your WordPress theme, definitely take a look at what HTML5 can do for you.

4. Help Kill Old IE and Upgrade your Organization

See this? This is sad. Upgrading web browsers in a large enterprise or organization is not a trivial task. A lot of companies use the excuse that they don't have the resources to upgrade legacy web applications that were built for a specific version of IE and don't render correctly in anything newer. Well, it's time to stop the excuses. IE9 allows you to define what version of Internet Explorer you would like it to emulate (see the example tag at the end of this post). It takes minimal effort for the developer, and will get rid of the excuses. Show your IT manager or software development team this link and show them how easy it is to make old code render right in the latest and greatest from the IE team.

5. Submit an Entry for DevUnplugged

So, you've made it to number five, eh? Well then, you must be pretty hardcore to make it this far down the list. Fine, let's take it to the next level and build an HTML5 game. That's right. A game. Like a video game. HTML5 introduces some amazing new features that can let you build working video games using HTML5, CSS3, and JavaScript. Plus, Microsoft is celebrating the launch of IE9 with a contest where you can submit an HTML5 game (or audio application) and have a chance to win a whack of cash and other prizes. Head here for the full scoop and rules for the DevUnplugged.

This post also appears at http://david.wes.st
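For item 4 above, the compatibility hint really is a one-line change per page (it can also be sent as an HTTP response header); the mode value below is just an example, so pick the IE version the legacy application was actually built for.

<!-- Ask IE9 to render this page the way IE8 would -->
<meta http-equiv="X-UA-Compatible" content="IE=EmulateIE8">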

    Read the article

  • Project Doneness and Einstein's Razor

    - by Malcolm Anderson
I've started working on a series of articles about the value of having testers involved in requirements gathering. Today I was reminded of a useful tool that has provided value to me for at least 20 years. To those of you who already use this tool, I'm interested in your stories where it has made a difference for you, and to those of you who have never heard of it, I hope sharing it will make a difference in your careers.

I was reminded of it because I just finished a 3 month set of personal projects and was reviewing the success of those projects while putting together my next set of 3 month projects. During this review, I noticed that a good number of my projects did not have the level of success that I wanted. The results were good, but they could have been better. Then it hit me: I didn't have clear enough doneness criteria. As a Scrum Practitioner, I wouldn't think of running a sprint without reviewing the backlog with Einstein's Razor, so why wouldn't I do the same for my own projects?

I can hear a few of you asking "What's Einstein's Razor?" I'm glad you asked. I was once told that Einstein told an audience, "If you can't explain what you do to a relatively bright six year old, you probably don't understand it yourself." This quote had an impact on me, especially early in my career as a solo developer. At the time, I was mostly doing end-to-end software development. I found that I saved myself a lot of pain and trouble by turning that quote around to "If you can't explain your project's doneness criteria in such a way that a relatively bright six year old can't competently determine your project's success or failure, then you have not broken it down to a fine enough level." There are more negatives in that quote than I'm happy with, but it still gives me tons of value to this day.

In your opinion, in your current projects, could a six year old competently pass or fail your next sprint? What risks are you running if your answer is "No"?

    Read the article

  • MySQL Server 5.6 default my.cnf and my.ini

    - by user12626240
    We've introduced a default my.cnf / my.ini file for MySQL Server that you can now see in the 5.6.8 release candidate: # For advice on how to change settings please see # http://dev.mysql.com/doc/refman/5.6/en/server-configuration-defaults.html [mysqld] # Remove leading # and set to the amount of RAM for the most important data # cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%. # innodb_buffer_pool_size = 128M   # Remove leading # to turn on a very important data integrity option: logging # changes to the binary log between backups. # log_bin   # These are commonly set, remove the # and set as required. # basedir = ..... # datadir = ..... # port = ..... # socket = ..... # server_id = .....   # Remove leading # to set options mainly useful for reporting servers. # The server defaults are faster for transactions and fast SELECTs. # Adjust sizes as needed, experiment to find the optimal values. # join_buffer_size = 128M # sort_buffer_size = 2M # read_rnd_buffer_size = 2M   sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES    There is also a template file called my-default.cnf or my-default.ini that has these lines near the start: # *** DO NOT EDIT THIS FILE. It's a template which will be copied to the # *** default location during install, and will be replaced if you # *** upgrade to a newer version of MySQL.   On Linux systems, the mysql_install_db command will copy the template file to the final location, where the server will read and use the file, removing the extra three lines. On Windows, the installer will create extra settings based on the answers you gave during installation. Neither will overwrite an existing my.cnf or my.ini file. The only initially active setting here is to change the value of  sql_mode from the server default of NO_ENGINE_SUBSTITUTION to NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES. This strict mode changes warnings for some non-standard behaviour into errors. This can cause applications which rely on the non-standard things, like dates that aren't valid, to lose data. If we had just changed the server default, the new setting would affect all servers that lack an explicit sql_mode setting, including those where strict mode is harmful. So we did it in the default file instead because that will only affect new server installations. You should expect that in our next version after 5.6, the server default will include STRICT_TRANS_TABLES. Our Windows installer and some of our connectors already use STRICT_TRANS_TABLES by default. Strict has been our preferred setting for many years and it is good to see some development platforms are using it. If you need the old behaviour, just remove the STRICT_TRANS_TABLES setting. If you do this, please also ask your application provider to make it unnecessary. They can do that by setting the session sql_mode setting in their own connections, so the rest of the applications using the server don't have to have an undesirable default. We've kept this file as small as possible because we found that our old files were too big and confused people. We've also now removed the old my-huge and related example files. One key part of this is the link to the documentation, where we will provide an introduction to some key settings. We'd like to hear your feedback on settings that will benefit most users or are most important to call out for existing users. Please do that by commenting here or if you prefer by adding comments to this bug report.
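For application authors, the per-connection override mentioned above is a single statement issued right after connecting; it is shown here reverting to the old permissive behaviour, on the assumption that the application genuinely depends on it.

-- Issued by the application on each new connection; the server-wide
-- default (with STRICT_TRANS_TABLES) is left untouched.
SET SESSION sql_mode = 'NO_ENGINE_SUBSTITUTION';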

    Read the article

  • New Features in JDK 7

    - by Todd Bao
According to JSR 334, JDK 7 introduces a number of small language enhancements, including the following:

1. switch statements now accept String values:

    public static void switchString(String s){
        switch (s){
        case "db": ...
        case "wls": ...
        case "idm": ...
        case "soa": ...
        case "fa": ...
        default: ...
        }
    }

2. Improved numeric literals – the "0b" binary prefix and the "_" digit separator make literals easier to write and read.

a. Binary values can now be written directly with the 0b prefix instead of being expressed in hex or decimal:

    byte b1 = 0b00100001;     // New
    byte b2 = 0x21;           // Old
    byte b3 = 33;             // Old

b. Underscores can be inserted between digits to group long literals and make them easier to read:

    long phone_nbr = 021_1111_2222;

3. The diamond operator – the type arguments of a generic "new" expression can be inferred, so they no longer need to be repeated:

    ArrayList<String> al1 = new ArrayList<String>();    // Old
    ArrayList<String> al2 = new ArrayList<>();          // New

4. A new common superclass for the reflection-related exceptions, ReflectiveOperationException, which covers:

    ClassNotFoundException,
    IllegalAccessException,
    InstantiationException,
    InvocationTargetException,
    NoSuchFieldException,
    NoSuchMethodException

5. A single catch block can now handle several exception types, separated by "|":

    try{
        // code
    }
    catch (SQLException | IOException ex) {
        // ...
    }

6. More precise rethrow – a caught exception can be rethrown while the method still declares only the specific checked exceptions it can actually throw, for example:

    public void test() throws NoSuchMethodException, NoSuchFieldException{    // declares only the specific exceptions
        try{
            // code
        }
        catch (ReflectiveOperationException ex){    // catches the common superclass
            throw ex;
        }
    }

7. The new try() form – try-with-resources. Resources that implement the java.lang.AutoCloseable interface can be declared in parentheses right after the try keyword (note the "(" instead of "{") and are closed automatically when the block exits:

    try(BufferedReader br = new BufferedReader(new FileReader("/home/oracle/temp.txt"))){
        ... br.readLine() ...
    }

A try-with-resources statement can also have catch blocks, which handle exceptions thrown both from the try block and from closing the resources.

Todd

    Read the article

  • E: Sub-process /usr/bin/dpkg returned an error code (1) seems to be choking on kde-runtime-data version issue

    - by BMT
    12.04 LTS on a Dell Mini 10. The install was stable until about a week ago; I update roughly once a week, sometimes more often. Several days ago I booted up and the system was no longer working correctly. All of these symptoms appeared at the same time: Update Manager, Software Center, Ubuntu One and LibreOffice exit immediately on opening, every time, and Vinagre autostarts on boot for no apparent reason (it is not set to start with Ubuntu). Using apt-get to fix the install produces the following (the identical Python traceback repeats for every affected package, so it is shown once):

    maura@pandora:~$ sudo apt-get -f install
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Correcting dependencies... Done
    The following package was automatically installed and is no longer required:
      libtelepathy-farstream2
    Use 'apt-get autoremove' to remove them.
    The following extra packages will be installed:
      gwibber gwibber-service kde-runtime-data software-center
    Suggested packages:
      gwibber-service-flickr gwibber-service-digg gwibber-service-statusnet
      gwibber-service-foursquare gwibber-service-friendfeed gwibber-service-pingfm
      gwibber-service-qaiku unity-lens-gwibber
    The following packages will be upgraded:
      gwibber gwibber-service kde-runtime-data software-center
    4 upgraded, 0 newly installed, 0 to remove and 39 not upgraded.
    20 not fully installed or removed.
    Need to get 0 B/5,682 kB of archives.
    After this operation, 177 kB of additional disk space will be used.
    Do you want to continue [Y/n]?
    debconf: Perl may be unconfigured (Can't locate Scalar/Util.pm in @INC (@INC contains: /etc/perl /usr/local/lib/perl/5.14.2 /usr/local/share/perl/5.14.2 /usr/lib/perl5 /usr/share/perl5 /usr/lib/perl/5.14 /usr/share/perl/5.14 /usr/local/lib/site_perl .) at /usr/lib/perl/5.14/Hash/Util.pm line 9.
    BEGIN failed--compilation aborted at /usr/lib/perl/5.14/Hash/Util.pm line 9.
    Compilation failed in require at /usr/share/perl/5.14/fields.pm line 122.
    Compilation failed in require at /usr/share/perl5/Debconf/Log.pm line 10.
    Compilation failed in require at (eval 1) line 4.
    BEGIN failed--compilation aborted at (eval 1) line 4.
    ) -- aborting
    (Reading database ... 242672 files and directories currently installed.)
    Preparing to replace gwibber 3.4.1-0ubuntu1 (using .../gwibber_3.4.2-0ubuntu1_i386.deb) ...
    Could not find platform dependent libraries <exec_prefix>
    Consider setting $PYTHONHOME to <prefix>[:<exec_prefix>]
    Traceback (most recent call last):
      File "/usr/bin/pyclean", line 25, in <module>
        import logging
    ImportError: No module named logging
    Error in sys.excepthook:
    Traceback (most recent call last):
      File "/usr/lib/python2.7/dist-packages/apport_python_hook.py", line 64, in apport_excepthook
        from apport.fileutils import likely_packaged, get_recent_crashes
      File "/usr/lib/python2.7/dist-packages/apport/__init__.py", line 1, in <module>
        from apport.report import Report
      File "/usr/lib/python2.7/dist-packages/apport/report.py", line 16, in <module>
        from xml.parsers.expat import ExpatError
      File "/usr/lib/python2.7/xml/parsers/expat.py", line 4, in <module>
        from pyexpat import *
    ImportError: No module named pyexpat

    Original exception was:
    Traceback (most recent call last):
      File "/usr/bin/pyclean", line 25, in <module>
        import logging
    ImportError: No module named logging
    dpkg: warning: subprocess old pre-removal script returned error exit status 1
    dpkg - trying script from the new package instead ...
    [the same pyclean traceback repeats]
    dpkg: error processing /var/cache/apt/archives/gwibber_3.4.2-0ubuntu1_i386.deb (--unpack):
     subprocess new pre-removal script returned error exit status 1
    [the same traceback repeats, this time from /usr/bin/pycompile]
    dpkg: error while cleaning up:
     subprocess installed post-installation script returned error exit status 1

    gwibber-service 3.4.1-0ubuntu1 fails with exactly the same pyclean/pycompile tracebacks and dpkg errors. kde-runtime-data fails differently:

    Preparing to replace kde-runtime-data 4:4.8.3-0ubuntu0.1 (using .../kde-runtime-data_4%3a4.8.4-0ubuntu0.1_all.deb) ...
    Unpacking replacement kde-runtime-data ...
    dpkg: error processing /var/cache/apt/archives/kde-runtime-data_4%3a4.8.4-0ubuntu0.1_all.deb (--unpack):
     trying to overwrite '/usr/share/sounds', which is also in package sound-theme-freedesktop 0.7.pristine-2
    dpkg-deb (subprocess): subprocess data was killed by signal (Broken pipe)
    dpkg-deb: error: subprocess <decompress> returned error exit status 2

    python-crypto 2.4.1-1, software-center 5.2.2.2 and xdiagnose 2.5 then fail with the same pyclean/pycompile tracebacks, each followed by "No apport report written because MaxReports is reached already". The run ends with:

    Errors were encountered while processing:
     /var/cache/apt/archives/gwibber_3.4.2-0ubuntu1_i386.deb
     /var/cache/apt/archives/gwibber-service_3.4.2-0ubuntu1_all.deb
     /var/cache/apt/archives/kde-runtime-data_4%3a4.8.4-0ubuntu0.1_all.deb
     /var/cache/apt/archives/python-crypto_2.4.1-1_i386.deb
     /var/cache/apt/archives/software-center_5.2.4_all.deb
     /var/cache/apt/archives/xdiagnose_2.5_all.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)
    maura@pandora:~$ ^C
    maura@pandora:~$
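
    Every failure above traces back to one root cause: Python 2.7 can no longer import parts of its own standard library (logging, pyexpat) and Perl cannot find Scalar::Util, so every dpkg maintainer script that relies on them aborts. The sketch below is a possible first recovery step, not a confirmed fix from the poster: it assumes the interpreter packages themselves are what got damaged, that the network and archives are reachable, and that --force-depends is acceptable as a temporary workaround.

    # Sketch only: confirm the standard libraries really are missing.
    python -c "import logging, pyexpat"   # expected to fail with ImportError
    perl -MScalar::Util -e 1              # expected to fail with "Can't locate"

    # Re-fetch and reinstall the interpreter packages without going through the
    # broken maintainer-script path that apt-get is currently stuck on.
    cd /tmp
    apt-get download python2.7-minimal python2.7 libpython2.7 perl-base perl-modules perl
    sudo dpkg -i --force-depends ./*.deb

    # Once python and perl can import their own modules again, finish the
    # interrupted upgrade.
    sudo dpkg --configure -a
    sudo apt-get -f install

    The kde-runtime-data failure is a separate file conflict with sound-theme-freedesktop; it may still need dpkg -i --force-overwrite on that one package, or an updated package that resolves the conflict.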

    Read the article

  • ComboBox with CollectionView ItemsSource does not update the selection box item on changes to the model

    - by Vinit Sankhe
    Hello, sorry for the earlier lengthy post; here is my concise (!) description. I bind a collection view to a ComboBox as its ItemsSource and bind its SelectedValue to a property on my view model. I must keep IsSynchronizedWithCurrentItem="False". I change the source list of the view and then refresh the view. The changed (added, removed, edited) items appear correctly in the combo's item list. The problem is with the selected item: when I change the property that is also the DisplayMemberPath of the combo, the changed value is not reflected in the combo's selection box. If I open the dropdown, the new value appears correctly in the item list, just not in the selection box. If I change the ComboBox tag to ListBox in my XAML (keeping all attributes as they are), then when the selected item's DisplayMemberPath property is updated, the change does show on the selected item of the ListBox. Why does this happen? FYI: my view model has properties EmployeeCollectionView and SelectedEmployeeId, bound to the combo as ItemsSource and SelectedValue respectively, and the VM implements INotifyPropertyChanged. My core Employee class (the list of which is the source for EmployeeCollectionView) is a plain model class without INotifyPropertyChanged; DisplayMemberPath is its "Name" property, which I change by some means and expect the combo's selection box to pick up. I tried resetting SelectedEmployeeId to 0 (which correctly selects the dummy "-- Select All --" entry from the ItemsSource) and then back to the old value, but the old value just brings back the old label, even though the Items collection has the latest entry. If I set the combo's IsEditable=True before the view's refresh and back to False afterwards, everything works correctly, but that is a patch and should not be necessary. Thanks, Vinit Sankhe
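
    The behaviour described is consistent with the model lacking change notification: refreshing the view rebuilds the dropdown items, but the closed selection box only re-reads DisplayMemberPath when the bound object raises PropertyChanged. Below is a minimal sketch of the Employee model with notification added; the class shape is assumed from the description above, not taken from the poster's code.

    // Sketch: Employee as described, with INotifyPropertyChanged added so the
    // ComboBox selection box re-reads Name when it changes.
    using System.ComponentModel;

    public class Employee : INotifyPropertyChanged
    {
        private string _name;

        public int Id { get; set; }

        public string Name                     // DisplayMemberPath="Name"
        {
            get { return _name; }
            set
            {
                if (_name == value) return;
                _name = value;
                OnPropertyChanged("Name");     // without this, only the dropdown items refresh
            }
        }

        public event PropertyChangedEventHandler PropertyChanged;

        protected void OnPropertyChanged(string propertyName)
        {
            var handler = PropertyChanged;
            if (handler != null)
                handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }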

    Read the article

  • Should I use Entity Framework instead of raw ADO.NET?

    - by user110182
    I am new to CSLA and Entity Framework. I am creating a new CSLA / Silverlight application that will replace a 12-year-old Win32 C++ system. The old system uses a custom DCOM business object library and ODBC to get to SQL Server. The new system will not immediately replace the old one -- they must coexist against the same database for years to come. At first I thought EF was the way to go since it is the latest and greatest, but after building a small EF model and only 2 CSLA editable root objects (I will eventually have hundreds of objects, as my database has 800+ tables) I am seriously questioning the use of EF. In the current system I often need to do fine-grained performance tuning of the queries, which I can do because I have 100% control of the generated SQL; in EF so much seems to happen behind the scenes that I lose that control. Articles like http://toomanylayers.blogspot.com/2009/01/entity-framework-and-linq-to-sql.html don't help my impression of EF. People seem to like EF because of LINQ to Entities, but since my criteria are passed between client and server as a criteria object, it seems I could build queries just as easily without LINQ. I understand that WCF RIA has query projection (or something like it), where client-side LINQ is moved to the server before being translated into actual SQL, so I can see the benefit of EF there, but not in CSLA. If I use raw ADO.NET, will I regret my decision 5 years from now? Has anyone else made this choice recently, and which way did you go?
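
    For contrast, the kind of hand-written data access the question wants to keep looks roughly like the sketch below. It is illustrative only: the table, column, and method names are hypothetical, not taken from the real 800-table schema or from CSLA's DataPortal plumbing.

    // Illustrative raw ADO.NET fetch: every character of the SQL is under the
    // developer's control, which is the tuning freedom the question is worried
    // about losing. dbo.Employee and DepartmentId are made-up names.
    using System.Data.SqlClient;

    public static class EmployeeFetchSketch
    {
        public static void Fetch(string connectionString, int departmentId)
        {
            using (var cn = new SqlConnection(connectionString))
            using (var cm = new SqlCommand(
                "SELECT Id, Name FROM dbo.Employee WHERE DepartmentId = @deptId",
                cn))
            {
                cm.Parameters.AddWithValue("@deptId", departmentId);
                cn.Open();
                using (var dr = cm.ExecuteReader())
                {
                    while (dr.Read())
                    {
                        // map dr["Id"] and dr["Name"] onto the business object here
                    }
                }
            }
        }
    }

    With EF the equivalent fetch would be written in LINQ and the provider would generate the SQL, which is exactly where the loss of control described above comes from.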

    Read the article


< Previous Page | 61 62 63 64 65 66 67 68 69 70 71 72  | Next Page >