Search Results

Search found 10383 results on 416 pages for 'exact match'.


  • Cannot find root device after latest kernel upgrade

    - by DisgruntledGoat
    I'm running Ubuntu 13.04. Yesterday I tried to install updates but there was an error, and it suggested running apt-get -f install which I did. Now when I try to boot, I get an error "Gave up waiting for root device". The text is almost identical to the text shown in this and this question. However, the "built-in shell" simply doesn't work! Nothing I type shows up on the screen or does anything. I tried adding a rootdelay to grub but it just waits longer and shows the same screen. Loading the previous kernel works (although there are a few graphics glitches) but as far as I can tell, it should be booting the exact same stuff. The new kernel is 3.8.0-31-generic and the previous working one is 3.8.0-25-generic. Here is my entire /boot/grub/menu.lst file, comments removed:

        default 0
        timeout 3

        title Ubuntu 13.04, kernel 3.8.0-31-generic
        uuid c690c1e6-beb9-46e7-85c2-145cd07d44ac
        kernel /boot/vmlinuz-3.8.0-31-generic root=UUID=c690c1e6-beb9-46e7-85c2-145cd07d44ac rootdelay=120 ro quiet splash
        initrd /boot/initrd.img-3.8.0-31-generic
        quiet

        title Ubuntu 13.04, kernel 3.8.0-31-generic (recovery mode)
        uuid c690c1e6-beb9-46e7-85c2-145cd07d44ac
        kernel /boot/vmlinuz-3.8.0-31-generic root=UUID=c690c1e6-beb9-46e7-85c2-145cd07d44ac ro single
        initrd /boot/initrd.img-3.8.0-31-generic

        title Ubuntu 13.04, kernel 3.8.0-25-generic
        uuid c690c1e6-beb9-46e7-85c2-145cd07d44ac
        kernel /boot/vmlinuz-3.8.0-25-generic root=UUID=c690c1e6-beb9-46e7-85c2-145cd07d44ac ro quiet splash
        initrd /boot/initrd.img-3.8.0-25-generic
        quiet

        title Ubuntu 13.04, kernel 3.8.0-25-generic (recovery mode)
        uuid c690c1e6-beb9-46e7-85c2-145cd07d44ac
        kernel /boot/vmlinuz-3.8.0-25-generic root=UUID=c690c1e6-beb9-46e7-85c2-145cd07d44ac ro single
        initrd /boot/initrd.img-3.8.0-25-generic

        title Ubuntu 13.04, kernel 3.8.0-23-generic
        uuid c690c1e6-beb9-46e7-85c2-145cd07d44ac
        kernel /boot/vmlinuz-3.8.0-23-generic root=UUID=c690c1e6-beb9-46e7-85c2-145cd07d44ac ro quiet splash
        initrd /boot/initrd.img-3.8.0-23-generic
        quiet

        title Ubuntu 13.04, kernel 3.8.0-23-generic (recovery mode)
        uuid c690c1e6-beb9-46e7-85c2-145cd07d44ac
        kernel /boot/vmlinuz-3.8.0-23-generic root=UUID=c690c1e6-beb9-46e7-85c2-145cd07d44ac ro single
        initrd /boot/initrd.img-3.8.0-23-generic

        title Ubuntu 13.04, memtest86+
        uuid c690c1e6-beb9-46e7-85c2-145cd07d44ac
        kernel /boot/memtest86+.bin
        quiet

        title --------------------------------
        root

        title Windows Vista
        rootnoverify (hd0,2)
        savedefault
        makeactive
        chainloader +1

    As you can see, the UUID is the same for all kernels. Why am I getting this problem, and what can I do to fix it?

    Read the article

  • 10 PowerShell One Liners

    - by BizTalk Visionary
    Here are a few one-liners that use NetCmdlets. Some of these I've blogged about before, some are new. Let me know if you have questions, which ones you find useful, or how you altered these to suit your own needs.

    Send email to a list of recipient addresses:
        import-csv users.csv | % { send-email -to $_.email -from [email protected] -subject "Important Email" -message "Hello World!" -server 10.0.1.1 }

    Show the access control list for a specific Exchange folder:
        get-imap -server $mymailserver -cred $mycred -folder INBOX.RESUMES -acl

    Add look and read permissions on an Exchange folder, for a list of accounts pulled from a CSV file:
        import-csv users.csv | % { set-imap -server $mymailserver -acluser $_.username -cred $mycred -folder INBOX.RESUMES -acl "lr" }

    Sync system time with an Internet time server:
        get-time -server clock.psu.edu -set

    To remotely sync the time on a set of computers:
        import-csv computers.csv | % { Invoke-Command -computerName $_.computer -cred $mycred -scriptblock { get-time -server clock.psu.edu -set } }

    Delete all emails from an Exchange folder that match certain criteria. For example, delete all emails from [email protected]:
        get-imap -server $mailserver -cred $mycred | ? { $_.FromEmail -eq [email protected] } | % { set-imap -server $mailserver -cred $mycred -message $_.Id -delete }

    Update Twitter status from PowerShell:
        get-http -url "http://twitter.com/statuses/update.xml" -cred $mycred -variablename status -variablevalue "Tweeting with NetCmdlets!"

    A test-path that works over FTP, FTPS (SSL), and SFTP (SSH) connections:
        get-ftp -server $remoteserver -cred $mycred -path /remote/path/to/checkfor*
    Don't forget the *. Also, to use SSL or SSH just add an -ssl or -ssh parameter.

    List disabled user accounts in Active Directory (or any other LDAP server):
        get-ldap -server $ad -cred $mycred -dn dc=yourdc -searchscope wholesubtree -search "(&(objectclass=user)(objectclass=person)(company=*)(userAccountControl:1.2.840.113556.1.4.803:=2))"

    List Active Directory groups and their members:
        get-ldap -server testman -cred $mycred -dn dc=NS2 -searchscope wholesubtree -search "(&(objectclass=group)(cn=*admin*))" | select ResultDN, member

    Display the last initialization time (e.g. last reboot time) of all discoverable SNMP agents on a network:
        import-csv computers.csv | % { get-snmp -agent $_.computer -oid sysUpTime.0 | % { ([datetime]::Now).AddSeconds(-($_.OIDValue/100)) } }

    Not mentioned here: data conversion (Yenc, QP, UUencoding, MD5, SHA1, base64, etc.), DNS, news groups (NNTP/UseNet), POP mail, RSS feeds, Amazon S3, Syslog, TFTP, TraceRoute, SNMP traps, UDP, WebDAV, whois, Rexec/Rshell/Telnet, Zip files, sending IMs (Jabber/GoogleTalk/XMPP), sending text messages and pages, ping, and more.

    Original Source: Lance's Textbox

    Read the article

  • Do You Know How OUM defines the four basic types of business system testing performed on a project? Why not test your knowledge?

    - by user713452
    Testing is perhaps the most important process in the Oracle® Unified Method (OUM). That makes it all the more important for practitioners to have a common understanding of the various types of functional testing referenced in the method, and to use the proper terminology when communicating with each other about testing activities. OUM identifies four basic types of functional testing, which is sometimes referred to as business system testing. The basic functional testing types referenced by OUM are:

        Unit Testing
        Integration Testing
        System Testing
        Systems Integration Testing

    See if you can match the following definitions with the appropriate type above.

        A. This type of functional testing is focused on verifying that interfaces/integration between the system being implemented (i.e. the System under Discussion (SuD)) and external systems function as expected.

        B. This type of functional testing is performed for custom software components only, is typically performed by the developer of the custom software, and is focused on verifying that the several custom components developed to satisfy a given requirement (e.g. screen, program, report, etc.) interact with one another as designed.

        C. This type of functional testing is focused on verifying that the functionality within the system being implemented (i.e. the System under Discussion (SuD)) functions as expected. This includes out-of-the-box functionality delivered with Commercial Off-The-Shelf (COTS) applications, as well as any custom components developed to address gaps in functionality.

        D. This type of functional testing is performed for custom software components only, is typically performed by the developer of the custom software, and is focused on verifying that the individual custom components developed to satisfy a given requirement (e.g. screen, program, report, etc.) function as designed.

    Check your answers below: (D) (B) (C) (A)

    If you matched all of the functional testing types to their definitions correctly, then congratulations! If not, you can find more information in the Testing Process Overview and Testing Task Overviews in the OUM Method Pack.
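    To make definitions (B) and (D) a little more tangible, here is a short, purely hypothetical JUnit-style sketch (not part of OUM, and the component names are invented) contrasting a unit test of a single custom component with an integration test of two custom components working together:

    ```java
    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    // Hypothetical custom components, used only to illustrate the two testing levels.
    class DiscountCalculator {
        double discountFor(int quantity) { return quantity >= 10 ? 0.10 : 0.0; }
    }

    class InvoiceService {
        private final DiscountCalculator calc = new DiscountCalculator();
        double total(int quantity, double unitPrice) {
            return quantity * unitPrice * (1.0 - calc.discountFor(quantity));
        }
    }

    public class TestingLevelsExample {
        // (D) Unit testing: one custom component verified in isolation against its design.
        @Test
        public void unitTest_discountCalculator() {
            assertEquals(0.10, new DiscountCalculator().discountFor(12), 1e-9);
        }

        // (B) Integration testing: several custom components exercised together.
        @Test
        public void integrationTest_invoiceAppliesDiscount() {
            assertEquals(90.0, new InvoiceService().total(10, 10.0), 1e-9);
        }
    }
    ```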

    Read the article

  • Build Open JDK 7 on Mac OSX (TOTD #172)

    - by arungupta
    The complete requirements, pre-requisites, and steps to build the OpenJDK 7 port on Mac OS X are described here. The steps are very clearly explained and here are the exact ones I followed on my MacBook Pro running 10.7.2:

    Confirm the version of the pre-installed Java as:

        > java -version
        java version "1.6.0_26"
        Java(TM) SE Runtime Environment (build 1.6.0_26-b03-383-11A511c)
        Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02-383, mixed mode)

    Download and install Mercurial from mercurial.berkwood.com (zip bundle for 10.7 is here). It gets installed in the /usr/local/bin directory.

    Get the source code as (commands highlighted in bold):

        hg clone http://hg.openjdk.java.net/macosx-port/macosx-port
        destination directory: macosx-port
        requesting all changes
        adding changesets
        adding manifests
        adding file changes
        added 437 changesets with 364 changes to 33 files
        updating to branch default
        31 files updated, 0 files merged, 0 files removed, 0 files unresolved

        cd macosx-port
        chmod 7555 get_source.sh
        ./get_source.sh
        # Repos:  corba jaxp jaxws langtools jdk hotspot
        Starting on corba
        Starting on jaxp
        Starting on jaxws
        Starting on langtools
        Starting on jdk
        Starting on hotspot
        # hg clone http://hg.openjdk.java.net/macosx-port/macosx-port/corba corba
        requesting all changes
        adding changesets
        adding manifests
        adding file changes
        added 396 changesets with 3275 changes to 1379 files
        . . .
        # exit code 0
        # cd ./corba && hg pull -u
        pulling from http://hg.openjdk.java.net/macosx-port/macosx-port/corba
        searching for changes
        no changes found
        # exit code 0
        # cd ./jaxp && hg pull -u
        pulling from http://hg.openjdk.java.net/macosx-port/macosx-port/jaxp
        searching for changes
        no changes found
        # exit code 0

    Install Xcode from the App Store. Include /Developer/usr/bin in PATH. Note: JDK 1.6.0_26 came pre-installed on my laptop and I installed Xcode after that. The compilation went fine and there was no need to re-install the Java for Mac OS X as mentioned in the original steps.

    Build the code as:

        make ALLOW_DOWNLOADS=true SA_APPLE_BOOT_JAVA=true ALWAYS_PASS_TEST_GAMMA=true ALT_BOOTDIR=`/usr/libexec/java_home -v 1.6` HOTSPOT_BUILD_JOBS=`sysctl -n hw.ncpu`

    The final output is shown as:

        >>>Finished making images @ Sat Nov 19 00:59:04 WET 2011 ...
        #############################################################
        Leaving jdk for target(s) sanity all docs images
        #############################################################
        #### Build time 00:17:42 jdk for target(s) sanity all docs images ####
        #############################################################
        #### Build times ####
        Target all_product_build
        Start 2011-11-19 00:32:40
        End   2011-11-19 00:59:04
        00:01:46 corba
        00:04:07 hotspot
        00:00:51 jaxp
        00:01:21 jaxws
        00:17:42 jdk
        00:00:37 langtools
        00:26:24 TOTAL
        #####################

    Change the directory and verify the version:

        > cd build/macosx-universal/j2sdk-image/1.7.0.jdk/Contents/Home/bin
        > ./java -version
        openjdk version "1.7.0-internal"
        OpenJDK Runtime Environment (build 1.7.0-internal-arungup_2011_11_19_00_32-b00)
        OpenJDK 64-Bit Server VM (build 21.0-b17, mixed mode)

    Now go fix some bugs, file new bugs, or discuss at the macosx-port-dev mailing list.

    Read the article

  • New regular expression features in PCRE 8.34 and 8.35

    - by Jan Goyvaerts
    PCRE 8.34 adds some new regex features and changes the behavior of a few others to make it more compatible with the latest versions of Perl. There are no changes to the regex syntax in PCRE 8.35. \o{377} is now an octal escape just like \377. This syntax was first introduced in Perl 5.12. It avoids any confusion between octal escapes and backreferences. It also allows octal numbers beyond 377 to be used. E.g. \o{400} is the same as \x{100}. If you have any reason to use octal escapes instead of hexadecimal escapes then you should definitely use the new syntax. Because of this change, \o is now an error when it doesn't form a valid octal escape. Previously \o was a literal o and \o{377} was a sequence of 377 o's. In free-spacing mode, whitespace between a quantifier and the ? that makes it lazy or the + that makes it possessive is now ignored. In Perl this has always been the case. In PCRE 8.33 and prior, whitespace ended a quantifier and any following ? or + was seen as a second quantifier and thus an error. The shorthand \s now matches the vertical tab character in addition to the other whitespace characters it previously matched. Perl 5.18 made the same change. Many other regex flavors have always included the vertical tab in \s, just like POSIX has always included it in [[:space:]]. Names of capturing groups are no longer allowed to start with a digit. This has always been the case in Perl since named groups were added to Perl 5.10. PCRE 8.33 and prior even allowed group names to consist entirely of digits. [[:<:]] and [[:>:]] are now treated as POSIX-style word boundaries. They match at the start and the end of a word. Though they use similar syntax, these have nothing to do with POSIX character classes and cannot be used inside character classes. Perl does not support POSIX word boundaries. The same changes affect PHP 5.5.10 (and later) and R 3.0.3 (and later) as they have been updated to use PCRE 8.34. RegexBuddy and RegexMagic have been updated to support the latest versions of PCRE, PHP, and R. Older versions that were previously supported are still supported, so you can compare or convert your regular expressions between the latest versions of PCRE, PHP, and R and whichever version you were using previously.

    Read the article

  • Deploying an ADF Secure Application using WLS Console

    - by juan.ruiz
    Last week I worked on a requirement from a customer that wanted to understand how to deploy to WLS an application with ADF Security without using JDeveloper. The main question was: what steps were needed in order to set up Enterprise Roles, Security Policies, and Application Credentials? In this entry I will explain the steps taken using JDeveloper 11.1.1.2.0.

    Requirements: Instead of building a sample application from scratch, we can use Andrejus's sample application that contains all the security pieces that we need. Open and migrate the project. Also make sure you adjust the database settings accordingly.

    Creating the EAR file: Review the security settings of the application by going into the Application -> Secure menu and see that there are two enterprise roles as well as the ADF Policies enforcing security on the main page. Make sure the Application Module uses the Data Source instead of a JDBC URL for its connection type, and take note of the data source name - in my case I have: java:comp/env/jdbc/HrDS. To facilitate access to this application once we deploy it, go to your ViewController project properties, select the Java EE Application category, and give a meaningful name to the context root as well as to the Application Name. Go to the ADFSecurityWL Application properties -> Deployment and create a new EAR deployment profile. Uncheck the Auto Generate and Synchronize weblogic-jdbc.xml Descriptors During Deployment option. Deploy the application as an EAR file.

    Deploying the Application to WLS using the WLS Console: On the WLS console create a JNDI data source. This is the part that I found the trickiest of the whole exercise, given that the name should match the AM's data source name; however, the naming convention that worked for me was jdbc.HrDS. Now, deploy the application manually by selecting Deployments -> Install, look for the EAR, and follow the default steps. If this is the first time you deploy the application, once the deployment finishes you will be asked to Activate Changes on the domain; these changes contain all the security policies and application roles inserted into the WLS instance.

    Creating Roles and User Groups for the Application: To finish the after-deployment setup, we need to create the groups that are the equivalent of the Enterprise Roles of ADF Security. For our sample we have two Enterprise Roles: employeesApplication and managersApplication. After that, we create the application users and assign them into their respective groups. Now we can run the application and test the security constraints.

    Read the article

  • Have you really fixed that problem?

    - by DavidWimbush
    The day before yesterday I saw our main live server's CPU go up to a constant 100% with just the occasional short drop to a lower level. The exact opposite of what you'd want to see. We're log shipping every 15 minutes and part of that involves calling WinRAR to compress the log backups before copying them over. (We're on SQL 2005 so there's no native compression and we have bandwidth issues with the connection to our remote site.) I realised the log shipping jobs were taking about 10 minutes and that most of that was spent shipping a 'live' reporting database that is completely rebuilt every 20 minutes. (I'm just trying to keep this stuff alive until I can improve it.) We can rebuild this database in minutes if we have to fail over so I disabled log shipping of that database. The log shipping went down to less than 2 minutes and I went off to the SQL Social evening in London feeling quite pleased with myself. It was a great evening - fun, educational and thought-provoking. Thanks to Simon Sabin & co for laying that on, and thanks too to the guests for making the effort when they must have been pretty worn out after doing DevWeek all day first. The next morning I came down to earth with a bump: CPU still at 100%. WTF? I looked in the activity monitor but it was confusing because some sessions have been running for a long time so it's not a good guide to what's using the CPU now. I tried the standard reports showing queries by CPU (average and total) but they only show the top 10 so they just show my big overnight archiving and data cleaning stuff. But the Profiler showed it was four queries used by our new website usage tracking system. Four simple indexes later the CPU was back where it should be: about 20% with occasional short spikes. So the moral is: even when you're convinced you've found the cause and fixed the problem, you HAVE to go back and confirm that the problem has gone. And, yes, I have checked the CPU again today and it's still looking sweet.

    Read the article

  • How to Run Apache Commands From Oracle HTTP Server 11g Home

    - by Daniel Mortimer
    Every now and then you come across a problem when there is nothing in the "troubleshooting manual" which can help you. Instead you need to think outside the box. This happened to me two or three years back. Oracle HTTP Server (OHS) 11g did not start. The error reported back by OPMN was generic and gave no clue, and worse, the HTTP Server error log was empty, and remained so even after I had increased the OPMN and HTTP Server log levels. After checking configuration files, operating system resources, etc., I was still no nearer the solution. And then the light bulb moment! OHS is based on Apache - what happens if I attempt to start HTTP Server using the native Apache command? Trouble was, the OHS 11g solution has its binaries and configuration files in separate "home" directories: ORACLE_HOME contains the binaries, ORACLE_INSTANCE contains the configuration files. How to set the environment so that native Apache commands run without error? Eventually, with help from a colleague, the knowledge article How to Start Oracle HTTP Server 11g Without Using opmnctl [ID 946532.1] was born! To be honest, I cannot remember the exact cause and solution to that OHS problem two or three years ago. But, I do remember that an attempt to start HTTP Server using the native apache command threw back an error to the console which led me to discover the culprit was some unusual filesystem fault. The other day, I was asked to review and publish a new knowledge article which described how to use the apache command to dump a list of static and shared loaded modules. This got me thinking that it was time [ID 946532.1] was given an update. The result: How To Run Native Apache Commands in an Oracle HTTP Server 11g Environment [ID 946532.1]. Highlights:

        Title change
        Improved environment setting scripts
        Interactive, so there should be no need to manually edit the scripts (although readers are welcome to do so)
        Automatically dump out some diagnostic information
        Inclusion of some links to other troubleshooting collateral

    To view the knowledge article you need a My Oracle Support login. For convenience, you can obtain the scripts via the links below.

        MS Windows:
        Wrapper cmd script - calls main cmd script [After download, remove the ".txt" file extension]
        Main cmd script - sets OHS 11g environment to run Apache commands [After download, remove the ".txt" file extension]
        Unix:
        Shell script - sets OHS 11g environment to run Apache commands on Unix

    Please note: I cannot guarantee that the scripts held in the blog repository will be maintained. Any enhancements or faults will be applied to the scripts attached to the knowledge article. Lastly, to find out more about native Apache commands, refer to the Apache documentation:

        apachectl - Apache HTTP Server Control Interface [http://httpd.apache.org/docs/2.2/programs/apachectl.html]
        httpd - Apache Hypertext Transfer Protocol Server [http://httpd.apache.org/docs/2.2/programs/httpd.html]

    Read the article

  • ERP in a Flash! Latest News on JD Edwards and Oracle VM Templates

    - by Kem Butller-Oracle
    Oracle Announces the Availability of Oracle VM Templates for JD Edwards EnterpriseOne 9.1 Update 2 and Tools 9.1 Update 4.4

    Continuing the commitment to rapid and predictable deployments of JD Edwards EnterpriseOne, Oracle announces the general availability of Oracle VM templates for JD Edwards EnterpriseOne Application release 9.1 Update 2 and Tools release 9.1 Update 4.4. These templates can be used with Oracle VM for x86, on the Oracle Exalogic Elastic Cloud, and on the Oracle Database Machine. Oracle VM Templates for JD Edwards EnterpriseOne accelerate the process of setting up a working environment compared to the traditional installation process. The templates can be a key component to a well-managed cloud infrastructure, allowing system administrators to quickly provision fully functional JD Edwards EnterpriseOne environments for evaluation, development, or production use. The templates contain preconfigured images of the major JD Edwards EnterpriseOne server components, including:

        • Enterprise server
        • HTML server
        • Database server
        • BI Publisher (for use with One View Reporting)
        • Business Services Server and ADF Runtime (for use with Mobile Smartphone Applications)
        • Application Interface Services (new with this release, for use with Mobile Enterprise Applications)
        • Server Manager (new with this release)

    The virtual server images are built on a complete Oracle technology stack, including Oracle VM for x86, Oracle Linux, Oracle WebLogic Server, Oracle Database, and Oracle Business Intelligence Publisher. The templates can be installed into an Oracle VM for x86 system running on standard x86 servers, the Oracle Exalogic Elastic Cloud, and the Oracle Database Appliance as a composite “all-in-one” system. The database can be deployed as a fully preconfigured VM template, or it can be deployed to a preexisting database server, for example, the Oracle Exadata Database Machine or the Oracle Database Appliance. This latest set of templates includes the following applications and technology components:

        • JD Edwards EnterpriseOne Applications Release 9.1 Update 2 with ESUs as of April 8, 2014
        • JD Edwards EnterpriseOne Tools 9.1 Update 4, maintenance pack 4 (9.1.4.4)
        • Oracle Database 12c (12.1.0.1)
        • Oracle WebLogic Server 12c (12.1.2)
        • Oracle Linux 5 Update 8, 64-bit
        • Oracle Business Intelligence Publisher 11.1.1.7.1, for use with JD Edwards EnterpriseOne One View Reporting
        • JD Edwards EnterpriseOne Business Services Server and Oracle Application Development Framework (ADF) 11.1.1.5, for use with the JD Edwards EnterpriseOne Mobile Applications

    The delivery also includes a JD Edwards EnterpriseOne deployment server preconfigured to match the content of the templates. This edition of the templates also includes enhanced configuration utilities that greatly simplify the process of configuring the templates for deployment into a running system. The templates are immediately available for download from the Oracle Software Delivery Cloud. For more information see:

        • My Oracle Support article 884592.1
        • Oracle Technology Network

    Read the article

  • Watching Green Day and discovering Sitecore, priceless.

    - by jonel
    I’m feeling inspired and I’d like to share a technique we’ve implemented in Sitecore to address a URL mapping from our legacy site that we wanted to carry over to the new beautiful Littelfuse.com. The challenge is to carry over all of our series URLs that have been published in our datasheets; we currently have a lot of series, and having to create a manual mapping for those could be really tedious. The URLs have the format http://www.littelfuse.com/series/series-name.html, for instance, http://www.littelfuse.com/series/flnr.html. It would have been easier if we had our information architecture defined like this, but that would have been too easy. I took a solution that is two-fold. First, I need to create a URL rewrite rule using the IIS URL Rewrite Module 2.0. Secondly, we need to implement a handler that will take care of the actual lookup of the series. It will be amazing after we’ve gone over the details. Let’s start with the URL rewrite. Create a new blank rule; you can name it anything you wish. The key parts to talk about are the Pattern and the Action groups. The Pattern is nothing but regex. Basically, I’m telling it to match the regex I have defined. In the Action group, I am telling it what to do; in this case, rewrite to the redirect.aspx webform. In this implementation, I will be using Rewrite instead of Redirect so the URL sticks in the browser. If you opt to use Redirect, then the URL bar will display the new URL our webform will redirect to. Let me explain one small thing: the (\w+) in my Pattern group’s regex will actually translate to {R:1} in my Action group. This is where the magic begins. Now let’s see what our Redirect.aspx contains. Remember our {R:1} above, which becomes the query string variable s? This is basic .NET code. The only thing that you will probably ask about is the LFSearch class. It’s our own implementation of finding items by using a field search: we supply the field name, the value of the field, the template name of the item we are after, and true or false depending on whether we want to do an exact search or not. If eureka, then redirect to that item’s Path (Url). If not, tell the user tough luck, here’s the 404 page as a consolation. Amazing, ain’t it?

    Read the article

  • Mathematica Programming Language&ndash;An Introduction

    - by JoshReuben
    The Mathematica http://www.wolfram.com/mathematica/ programming model consists of a kernel computation engine (or grid of such engines) and a front-end of notebook instances that communicate with the kernel throughout a session. The programming model of Mathematica is incredibly rich & powerful – besides numeric calculations, it supports symbols (e.g. Pi, I, E) and control flow logic.

    Obviously I could use this as a simple calculator: 5 * 10 --> 50. But this language is much more than that! For example, I could use control flow logic and set up a simple infinite loop: x=1; While[x > 0, x = x + 1]

    Different brackets have different purposes:
        square brackets for function arguments: Cos[x]
        round brackets for grouping: (1+2)*3
        curly brackets for lists: {1,2,3,4}

    The power of Mathematica (as opposed to, say, Matlab) is that it gives exact symbolic answers instead of a rounded numeric approximation (unless you request it). Mathematica lets you define scoped variables (symbols): a=1; b=2; c=a+b --> 3. These variables can contain symbolic values – you can think of these as partially computed functions. Use Clear[x] or Remove[x] to zero or dereference a variable.

    To compute a numerical approximation to n significant digits (default n=6), use N[x,n] or the //N postfix: Pi //N --> 3.14159, N[Pi,50] --> 3.1415926535897932384626433832795028841971693993751

    The kernel uses % to reference the last calculation result, %% the 2nd last, %%% the 3rd last, etc., which makes for clearer statements: e.g. instead of Sqrt[Pi+Sqrt[Pi+Sqrt[Pi]]] do: Sqrt[Pi]; Sqrt[Pi+%]; Sqrt[Pi+%]

    The help system supports wildcards, so I can search for functions like so: ?Inv*

    Mathematica supports some very powerful programming constructs and a rich function library that allow you to do things that you would have to write a lot of code for in a language like C++.
        The Factor function – factorization: Factor[x^3 – 6*x^2 + 11x – 6] --> (-3+x) (-2+x) (-1+x)
        The Solve function – find the roots of an equation: Solve[x^3 – 2x + 1 == 0] -->
        The Expand function – express (1+x)^10 in polynomial form: Expand[(1+x)^10] --> 1+10x+45x^2+120x^3+210x^4+252x^5+210x^6+120x^7+45x^8+10x^9+x^10
        The Prime function – what is the 1000th prime? Prime[1000] --> 7919

    Mathematica also has some powerful graphics capabilities: the Plot function – plot the graph of y=Sin x in a single period: Plot[Sin[x], {x,0,2*Pi}]. You can also plot 3D surfaces of functions using the Plot3D function.
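    The Solve output above did not survive the copy, so as a quick sanity check of the Factor and Solve examples, here is the algebra worked out by hand (ordinary algebra, not Mathematica output):

    ```latex
    \begin{align*}
    (x-1)(x-2)(x-3) &= (x^2 - 3x + 2)(x - 3) = x^3 - 6x^2 + 11x - 6,\\
    x^3 - 2x + 1 &= (x - 1)(x^2 + x - 1) = 0
      \;\Longrightarrow\; x = 1 \ \text{or}\ x = \frac{-1 \pm \sqrt{5}}{2}.
    \end{align*}
    ```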

    Read the article

  • Backup Azure Tables, schedule Azure scripts&hellip; and more

    - by Herve Roggero
    Well – months of effort are now officially over… or should I say it’s just the beginning? Enzo Cloud Backup 2.0 (beta) is now officially out!!! This tool will let you do the following:

        * Backup SQL Database (and SQL Server to a limited extent)
        * Backup Azure Tables
        * Restore SQL Backups into another SQL environment
        * Restore Azure Tables in Azure Storage, or SQL Environment
        * Manage and schedule database maintenance scripts
        * Drop database schema containers (with preview) for SaaS environments
        * Receive alerts (SMTP) when operations complete or fail

    That’s it at a high level… but you need to see the flexibility around these features. For example you can select a specific backup strategy for Azure Tables allowing faster backup operations when partition keys use GUIDs. You can also call custom stored procedures during the restore operation of Azure Tables, allowing you to transform the data along the way. You can also set a performance threshold during Azure Table backup operations to help you control possible throttling conditions in your Storage Account. Regarding database scripts, you can now define T-SQL scripts and schedule them for execution in a specific order. You can also tell Enzo to execute a pre and post script during Azure Table restore operations against a SQL environment. The backup operation now supports backing up to multiple devices at the same time. So you can execute a backup request to both a local file and a blob at the same time, guaranteeing that both will contain the exact same data. And due to the level of options that are available, you can save backup definitions for later reuse. The screenshot below backs up Azure Tables to two devices (a blob and a SQL Database). You can also manage your database schemas for SaaS environments that use schema containers to separate customer data. This new edition allows you to see how many objects you have in each schema, backup specific schemas, and even drop all objects in a given schema. For example the screenshot below shows that the EnzoLog database has 4 user-defined schemas, and the AFA schema has 5 tables and 1 module (stored proc, function, view…). Selecting the AFA schema and trying to delete it will prompt another screen to show which objects will be deleted. As you can see, Enzo Cloud Backup provides amazing capabilities that can help you safeguard your data in SQL Database and Azure Tables, and give you advanced management functions for your Azure environment. Download a free trial today at http://www.bluesyntax.net.

    Read the article

  • Control Sysinternals Suite & NirSoft Utilities with a Single Interface

    - by Asian Angel
    Sysinternals and NirSoft both provide helpful utilities for your Windows system but may not be very convenient to access. Using the Windows System Control Center you can easily access everything through a single UI front end. Setup: The first thing to do is set up three new folders in Program Files (or Program Files (x86) if you are using a 64-bit system) with the following names (the first two need to exactly match what is shown here):

        Sysinternals Suite
        NirSoft Utilities (create this folder only if you have any of these apps downloaded)
        Windows System Control Center (or WSCC depending on your preferences)

    Unzip the contents of the Sysinternals Suite into its folder. Then unzip any individual NirSoft Utilities programs that you have downloaded into the NirSoft folder. All that is left to do is to unzip the WSCC software into its folder and create a shortcut. WSCC in Action: When you start WSCC up for the first time you will see the following message with a brief explanation about the software. Next the options window will appear providing you an opportunity to look around and make any desired changes. WSCC can access utilities for both suites using a live connection if needed (utilities accessed live are not downloaded). Note: This occurs on the first run only. This is the main WSCC window…you can choose the utility that you want to use by sorting through an all items list or based on category. Note: WSCC may occasionally experience a problem downloading a particular utility if using the live service. We conducted a quick test by accessing two Sysinternals apps. First PsInfo… Followed by DiskView. Both opened quickly and were ready to go. There were no NirSoft Utilities installed on our test system in order to provide a live access example. Within moments WSCC accessed the CurrProcess utility and had it running on our system. Our recommendation is to download your favorite utilities from both suites (in order to always have easy access to them). Conclusion: WSCC provides an easy way to access all of the apps in the Sysinternals Suite and NirSoft Utilities in one place. Note: A PortableApps version is also available. Links: Download Windows System Control Center (WSCC), Download Windows Sysinternals Suite, Download individual NirSoft Utilities programs

    Read the article

  • SQL SERVER – DMV sys.dm_os_sys_info Column Name Changed in SQL Server 2012

    - by pinaldave
    Have you ever faced a situation where something does not work, and when you try to fix it you end up appreciating the breaking changes? Well, this is exactly what I felt yesterday. Before I begin my story of yesterday I want to state candidly that I do not encourage anybody to use * in the SELECT statement. One of my DBA friends, who always uses my performance tuning scripts, sent me an email yesterday asking the following question - "Every time I want to retrieve OS related information in SQL Server I use the DMV sys.dm_os_sys_info. I just upgraded my SQL Server edition from 2008 R2 to SQL Server 2012 RC0 and it suddenly stopped working. Well, this is not the production server so the issue is not big yet, but eventually I need to resolve this error. Any suggestion?" The funny thing was the original email was very long but it did not say what the exact error was, besides the query not working. I think this is the disadvantage of being too friendly on email sometimes. Nevertheless, I quickly looked at the DMV on my SQL Server 2008 R2 and SQL Server 2012 RC0 versions. To my surprise I found out that there were a few columns which were renamed in SQL Server 2012 RC0. Usually when people see breaking changes they do not like them, but when I saw these changes I was happy, as the new names are meaningful and their new conversion is much more practical and useful. Here are the column names:

        Previous Column Name        New Column Name
        physical_memory_in_bytes    physical_memory_kb
        bpool_commit_target         committed_target_kb
        bpool_visible               visible_target_kb
        virtual_memory_in_bytes     virtual_memory_kb
        bpool_commited              committed_kb

    If you read it carefully you will notice that the new columns now display a few results in KB whereas the earlier results were in bytes. When I see results in bytes I always get confused as I cannot guess what exactly they will convert to. I like to see results in KB and I am glad that the new columns now display the results in KB. I sent the details of the new columns to my friend and asked him to check the columns used in the application. From my comment he quickly realized why he was facing the error and fixed it immediately. Overall – all was well at the end and I learned something new. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL DMV, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
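    As an aside (not part of the original post), the safest fix on the application side is to reference the renamed columns explicitly rather than relying on SELECT *. A minimal JDBC sketch is shown below; the connection string is a placeholder and the Microsoft JDBC driver must be on the classpath.

    ```java
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Reads the SQL Server 2012 sys.dm_os_sys_info columns by their new names.
    public class SysInfoCheck {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:sqlserver://localhost;databaseName=master;integratedSecurity=true"; // placeholder
            try (Connection con = DriverManager.getConnection(url);
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(
                     "SELECT physical_memory_kb, virtual_memory_kb, committed_kb, " +
                     "committed_target_kb, visible_target_kb FROM sys.dm_os_sys_info")) {
                while (rs.next()) {
                    System.out.println("physical_memory_kb = " + rs.getLong("physical_memory_kb"));
                    System.out.println("committed_kb       = " + rs.getLong("committed_kb"));
                }
            }
        }
    }
    ```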

    Read the article

  • Play a New Random Game Each Day in Chrome

    - by Asian Angel
    Being able to unwind for a few moments each day can make the time pass so much better and help you feel refreshed. If your favorite method for relaxing is playing a quick game, then join us as we take a look at the Random Games from MyGiochi.net extension for Google Chrome. Random Games from MyGiochi.net in Action The really great thing about this extension is that each day you can have a new random game to play. If you love variety this is definitely going to be a perfect match for you. We got “Power Golf” as our random game of the day. Here is a look at things once we got started…this one can be a lot of fun to play. Time to move on to the third hole now… What if you want something different from the game available on any given day? In the upper right corner you will find links for “game categories” that you can look through (clicking on the links will open a new tab). Since the links are in Italian you might need to experiment a little bit to find the category that you want to browse through. We chose the “Games for Girls Category”. With Chrome’s new built in “Translation Bar” you can easily switch the page over to the language of your choice. Note: Translation Bar available in Dev Channel releases. Ready to choose a fun game to play! You really can have a lot of fun with the games available at My Giochi. With our “game of the day” we had a second option for other games to try. More games equals more fun! Conclusion If playing online games is your favorite way to relax then the MyGiochi.net extension will make a great addition to your browser. Have fun with all of those new games each day! Links Download the Random Games from MyGiochi.net extension (Google Chrome Extensions)

    Read the article

  • Miracle Growth Of Organs From Our Own Cells

    - by Rekha
    At present, there is a shortage of healthy organs. The donor and patient also have to be closely matched, and there is a chance the patient's immune system may reject the transplant. Right now, researchers are seriously involved in a new kind of solution: "bioartificial" organs are being grown from the patient's own cells. A few people have already received lab-grown bladders. The bladder technique was developed by Anthony Atala of the Wake Forest Institute for Regenerative Medicine in Winston-Salem, North Carolina. Healthy cells are taken from the patient's diseased bladder and caused to multiply profusely in petri dishes. The cells are layered one layer at a time, with the muscle cells on the outside and urothelial cells on the inside. The bladder-to-be is then incubated at body temperature until the cells form functioning tissue. This process could take six to eight months. Organs with lots of blood vessels, such as kidneys or livers, are harder to grow than hollow ones like bladders. Atala's group, which works on 22 organs and tissues including ears, recently made a functioning piece of human liver. Others in the list include: Columbia University – Jawbone, Yale University – Lung, University of Minnesota – Rat heart, University of Michigan – Artificial Kidney. Growing a copy of the patient's own organ is not always possible – for instance, when the original is completely damaged by cancer. Using a stem cell bank collected from amniotic fluid in the womb, without harming human embryos, those cells can be coaxed into becoming heart, liver and other organ cells. A bank of 100,000 stem cell samples would have enough genetic variety to match nearly any patient. Surgeons could order organs grown as needed instead of waiting for the perfect donor. "There are few things as devastating for a surgeon as knowing you have to replace the tissue and you're doing something that's not ideal," says Atala, a urologic surgeon himself. "Wouldn't it be great if they had their own organ?" Great for the patient especially, he means. Via National Geographic and cc image credit. This article, titled Miracle Growth Of Organs From Our Own Cells, was originally published at Tech Dreams.

    Read the article

  • IT Admin for Thrill Seekers

    - by Tony Davis
    A developer suggested to me recently that the life of the DBA was, surely, a dull one. My first reaction was indignation, but it was quickly followed by the thought that for many people excitement isn't necessarily the most desirable aspect of their job. It's true that some aspects of the DBA role seem guaranteed to quieten the pulse; in the days of tape backups, time must have slowed to eternity for the person whose job it was to oversee this process, placing tapes into secure containers, ensuring correct labeling, and… sorry, I drifted off there for a second. On the other hand, if you follow the adventures of the likes of Brent Ozar or Tom LaRock, you'd be forgiven for thinking that much of a database guy's time is spent, metaphorically, diving through plate glass windows in tight fitting underwear in order to extract grateful occupants from burning database applications. Alas it isn't true of the majority, but it isn't as dull as some people imagine, and is a helter-skelter ride compared with some other IT roles. Every IT department has people who toil away in shadowy corners doing quiet but mysterious tasks. When you ask them to explain what they do, you almost immediately want them to stop, but you hear enough to appreciate that these tasks are often absolutely vital to the smooth functioning of an IT organization. Compared with them, the DBAs are prima donnas. Here are a few nominations:

        Installation engineer - installs all of the company's laptops, workstations, and software, and deals with licensing, shipping and data entry. Many organizations, especially those subject to tight regulation, would simply grind to a halt without their efforts.
        Localization engineer - not quite software engineering, not quite translation, the job is to rebuild a product in a different language and make sure everything still works.
        QA Tester - firstly, I should say that the testers at Red Gate seem to me some of the most-fulfilled in the company. I refer here to the QA Tester whose job is more-or-less entirely to read a script, click some buttons and make sure the actual and expected values match.
        Configuration manager - for example, someone whose main job is to configure build environments so that devs can access their source code; assuredly necessary for the smooth functioning and productivity of the team, and hopefully well-paid.

    So what other sort of job in IT should one choose if the work of a DBA proves to be too exciting? Or are these roles secretly more exciting than many imagine? I invite you all to put forward your own suggestions. Cheers, Tony.

    Read the article

  • What does your Python development workbench look like?

    - by Fabian Fagerholm
    First, a scene-setter to this question: Several questions on this site have to do with selection and comparison of Python IDEs. (The top one currently is What IDE to use for Python). In the answers you can see that many Python programmers use simple text editors, many use sophisticated text editors, and many use a variety of what I would call "actual" integrated development environments – a single program in which all development is done: managing project files, interfacing with a version control system, writing code, refactoring code, making build configurations, writing and executing tests, "drawing" GUIs, and so on. Through its GUI, an IDE supports different kinds of workflows to accomplish different tasks during the journey of writing a program or making changes to an existing one. The exact features vary, but a good IDE has sensible workflows and automates things to let the programmer concentrate on the creative parts of writing software. The non-IDE way of writing large programs relies on a collection of tools that are typically single-purpose; they do "one thing well" as per the Unix philosophy. This "non-integrated development environment" can be thought of as a workbench, supported by the OS and generic interaction through a text or graphical shell. The programmer creates workflows in their mind (or in a wiki?), automates parts and builds a personal workbench, often gradually and as experience accumulates. The learning curve is often steeper than with an IDE, but those who have taken the time to do this can often claim deeper understanding of their tools. (Whether they are better programmers is not part of this question.) With advanced editor-platforms like Emacs, the pieces can be integrated into a whole, while with simpler editors like gedit or TextMate, the shell/terminal is typically the "command center" to drive the workbench. Sometimes people extend an existing IDE to suit their needs. What does your Python development workbench look like? What workflows have you developed and how do they work? For the first question, please give the main "driving" program – the one that you use to control the rest (Emacs, shell, etc.) the "small tools" -- the programs you reach for when doing different tasks For the second question, please describe what the goal of the workflow is (eg. "set up a new project" or "doing initial code design" or "adding a feature" or "executing tests") what steps are in the workflow and what commands you run for each step (eg. in the shell or in Emacs) Also, please describe the context of your work: do you write small one-off scripts, do you do web development (with what framework?), do you write data-munching applications (what kind of data and for what purpose), do you do scientific computing, desktop apps, or something else? Note: A good answer addresses the perspectives above – it doesn't just list a bunch of tools. It will typically be a long answer, not a short one, and will take some thinking to produce; maybe even observing yourself working.

    Read the article

  • C# Domain-Driven Design Sample Released

    - by Artur Trosin
    In this post I want to announce that the NDDD Sample application(s) is released, and to share the work with you. You can access it here: http://code.google.com/p/ndddsample. From a functionality perspective, NDDDSample matches DDDSample 1.1.0, which is based on Java and is a joint effort by Eric Evans' company Domain Language and the Swedish software consulting company Citerus. But because NDDDSample is based on .NET technologies, those two implementations could not be matched directly. However, concepts, practices, values, and patterns, especially DDD, are cross-language and cross-platform :). Implementing the .NET version of the application was an interesting journey because now, as a .NET developer, I better understand the differences, positive and negative, between these two platforms. Even though there are differences, they can be overcome; in many cases it was not so hard to match a Java lib/framework with a .NET one during the implementation. Here is the technology stack:

        1. .NET 3.5 - framework
        2. VS.NET 2008 - IDE
        3. ASP.NET MVC 2.0 - for administration and tracking UI
        4. WCF - communication mechanism
        5. NHibernate - ORM
        6. Rhino Commons - NHibernate session management, base classes for in-memory unit tests
        7. SQLite - database
        8. Windsor - inversion of control container
        9. Windsor WCF facility - for better integration with NHibernate
        10. MvcContrib - and in particular its Castle WindsorControllerFactory, in order to enable IoC for controllers
        11. WPF - for the incident logging application
        12. Moq - mocking lib used for unit tests
        13. NUnit - unit testing framework
        14. Log4net - logging framework
        15. Cloud based on Azure SDK

    These are not the latest technologies, tools and libs at the moment, but if anyone thinks it would be useful to migrate the sample to the latest technologies and versions, please comment. The cloud version of the application is based on the Azure emulated environment provided by the SDK, so it hasn't been tested in a 'real' Azure scenario (we just do not have access to it). Thanks to the participants: Eugen Gorgan, who was involved directly in development; Ruslan Rusu and Victor Lungu, who spent their free time discussing .NET-specific decisions; and Eugen Navitaniuc, who helped with Java-related questions. Also, a big thank you to Cornel Cretu, who designed a nice logo and helped with some browser incompatibility issues. Any review and feedback are welcome! Thank you, Artur Trosin

    Read the article

  • How to pad number with leading zero with C#

    - by Jalpesh P. Vadgama
    Recently I was working with a project where I was in need to format a number in such a way which can apply leading zero for particular format.  So after doing such R and D I have found a great way to apply this leading zero format. I was having need that I need to pad number in 5 digit format. So following is a table in which format I need my leading zero format. 1-> 00001 20->00020 300->00300 4000->04000 50000->5000 So in the above example you can see that 1 will become 00001 and 20 will become 00200 format so on. So to display an integer value in decimal format I have applied interger.Tostring(String) method where I have passed “Dn” as the value of the format parameter, where n represents the minimum length of the string. So if we pass 5 it will have padding up to 5 digits. So let’s create a simple console application and see how its works. Following is a code for that. using System; namespace LeadingZero { class Program { static void Main(string[] args) { int a = 1; int b = 20; int c = 300; int d = 4000; int e = 50000; Console.WriteLine(string.Format("{0}------>{1}",a,a.ToString("D5"))); Console.WriteLine(string.Format("{0}------>{1}", b, b.ToString("D5"))); Console.WriteLine(string.Format("{0}------>{1}", c, c.ToString("D5"))); Console.WriteLine(string.Format("{0}------>{1}", d, d.ToString("D5"))); Console.WriteLine(string.Format("{0}------>{1}", e, e.ToString("D5"))); Console.ReadKey(); } } } As you can see in the above code I have use string.Format function to display value of integer and after using integer value’s  ToString method. Now Let’s run the console application and following is the output as expected. Here you can see the integer number are converted into the exact output that we requires. That’s it you can see it’s very easy. We have written code in nice clean way and without writing any extra code or loop. Hope you liked it. Stay tuned for more.. Till than happy programming.

    Read the article

  • Taking the fear out of a Cloud initiative through the use of security tools

    - by user736511
    Typical employees, constituents, and business owners  interact with online services at a level where their knowledge of back-end systems is low, and most of the times, there is no interest in knowing the systems' architecture.  Most application administrators, while partially responsible for these systems' upkeep, have very low interactions with them, at least at an operational, platform level.  Of greatest interest to these groups is the consistent, reliable, and manageable operation of the interfaces with which they communicate.  Introducing the "Cloud" topic in any evolving architecture automatically raises the concerns for data and identity security simply because of the perception that when owning the silicon, enterprises are not able to manage its content.  But is this really true?   In the majority of traditional architectures, data and applications that access it are physically distant from the organization that owns it.  It may reside in a shared data center, or a geographically convenient location that spans large organizations' connectivity capabilities.  In the end, very often, the model of a "traditional" architecture is fairly close to the "new" Cloud architecture.  Most notable difference is that by nature, a Cloud setup uses security as a core function, and not as a necessary add-on. Therefore, following best practices, one can say that data can be safer in the Cloud than in traditional, stove-piped environments where data access is segmented and difficult to audit. The caveat is, of course, what "best practices" consist of, and here is where Oracle's security tools are perfectly suited for the task.  Since Oracle's model is to support very large organizations, it is fundamentally concerned about distributed applications, databases etc and their security, and the related Identity Management Products, or DB Security options reflect that concept.  In the end, consumers of applications and their data are to be served more safely in a controlled Cloud environment, while realizing the many cost savings associated with it. Having very fast resources to serve them (such as the Exa* platform) makes the concept even more attractive.  Finally, if a Cloud strategy does not seem feasible, consider the pros and cons of a traditional vs. a Cloud architecture.  Using the exact same criteria and business goals/traditions, and with Oracle's technology, you might be hard pressed to justify maintaining the technical status quo on security alone. For additional information please visit Oracle's Cloud Security page at: http://www.oracle.com/us/technologies/cloud/cloud-security-428855.html

    Read the article

  • Silverlight Cream for February 05, 2011 -- #1041

    - by Dave Campbell
    In this Issue: Peter Kuhn, Mike Ormond(-2-, -3-), WindowsPhoneGeek, Daniel N. Egan, Phil Middlemiss(-2-), Max Paulousky, Michael Washington. Above the Fold: Silverlight: "Designing for Browser-Zoom: Part 2" Phil Middlemiss WP7: "Talking about Converters in WP7 | Coding4fun toolkit converters in depth" WindowsPhoneGeek Lightswitch: "LightSwitch: Can We Handle The Truth?" Michael Washington Shoutouts: András Velvárt has a video up of some awesome changes he has planned for SurfCube, check it out: SurfCube V2 - 3D Web Browser for Windows Phone 7, now with tabs! From SilverlightCream.com: Silverlight for keyboard junkies Peter Kuhn has a post up talking about the issues surrounding trying to use the tab key to navigate between controls... and follows it up with a behavior that resolves it. Windows Phone 7 Content On Demand Mike Ormond has a batch of WP7 Videos up... this first is "Windows Phone 7: A Different Kind of Phone" with Andrej Radinger. Windows Phone 7 Content on Demand Pt 2 Mike Ormond's 2nd WP7 video is "Understanding the Windows Phone 7 Development Tools and Getting Started" with Maarten Struys Windows Phone 7 Content on Demand Pt 3 Mike Ormond's 3rd WP7 Content on Demand is "Games Programming on Windows Phone 7 with Silverlight and XNA" with Rob Miles Talking about Converters in WP7 | Coding4fun toolkit converters in depth WindowsPhoneGeek is discussing value converters in his latest post... value converters for WP7... and the ones in the Coding4Fun toolkit to be exact... everything you wanted to know about them but didn't know to ask :) WP7 Developer Tools–Jan Update Daniel N. Egan has information up about the new WP7 Developer Tools release. Designing for Browser-Zoom: Part 1 Phil Middlemiss has both parts of a series on Browser Zoom up... this first part covers the zoom and different pieces involved. Designing for Browser-Zoom: Part 2 Phil Middlemiss's part 2 shows us some design considerations and visual states, including an attached behavior you can use in Blend to respond to the zoom event. Windows Phone Copy-Paste: How It Looks and Works Max Paulousky has the first post I've seen on WP7 Copy/Paste up... of course it's still in the emulator, but hey... that's better than nothing, right? LightSwitch: Can We Handle The Truth? Have you been playing with Lightswitch? Well... Michael Washington has, and it's got his interest up far enough that he's waving the flags trying to attract everyone else over there as well... see if you agree. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • PTLQueue : a scalable bounded-capacity MPMC queue

    - by Dave
    Title: Fast concurrent MPMC queue -- I've used the following concurrent queue algorithm enough that it warrants a blog entry. I'll sketch out the design of a fast and scalable multiple-producer multiple-consumer (MPMC) concurrent queue called PTLQueue. The queue has bounded capacity and is implemented via a circular array. Bounded capacity can be a useful property if there's a mismatch between producer rates and consumer rates, where an unbounded queue might otherwise result in excessive memory consumption by virtue of the container nodes that -- in some queue implementations -- are used to hold values. A bounded-capacity queue can provide flow control between components. Beware, however, that bounded collections can also result in resource deadlock if abused. The put() and take() operators are partial and wait for the collection to become non-full or non-empty, respectively. Put() and take() do not allocate memory, and are not vulnerable to the ABA pathologies. The PTLQueue algorithm can be implemented equally well in C/C++ and Java.
    Partial operators are often more convenient than total methods. In many use cases, if the preconditions aren't met there's nothing else useful the thread can do, so it may as well wait via a partial method. An exception is the case of work-stealing queues, where a thief might scan a set of queues from which it could potentially steal. Total methods return ASAP with a success-failure indication. (It's tempting to describe a queue or API as blocking or non-blocking instead of partial or total, but non-blocking is already an overloaded concurrency term; perhaps waiting/non-waiting or patient/impatient would be better terms.) It's also trivial to construct partial operators by busy-waiting via total operators, but such constructs may be less efficient than an operator explicitly and intentionally designed to wait.
    A PTLQueue instance contains an array of slots, where each slot has volatile Turn and MailBox fields. The array has power-of-two length, allowing mod/div operations to be replaced by masking. We assume sensible padding and alignment to reduce the impact of false sharing. (On x86 I recommend 128-byte alignment and padding because of the adjacent-sector prefetch facility.) Each queue also has PutCursor and TakeCursor cursor variables, each of which should be sequestered as the sole occupant of a cache line or sector. You can opt to use 64-bit integers if concerned about wrap-around aliasing in the cursor variables. Put(null) is considered illegal, but the caller or implementation can easily check for and convert null to a distinguished non-null proxy value if null happens to be a value you'd like to pass; Take() will accordingly convert the proxy value back to null. An advantage of PTLQueue is that you can use atomic fetch-and-increment for the partial methods. We initialize each slot at index I with (Turn=I, MailBox=null). Both cursors are initially 0. All shared variables are considered "volatile" and atomics such as CAS and AtomicFetchAndIncrement are presumed to have bidirectional fence semantics. Finally, T is the templated type. I've sketched out a total tryTake() method below that allows the caller to poll the queue. tryPut() has an analogous construction.
    // PTLQueue :
    Put(v) :
      // producer : partial method - waits as necessary
      assert v != null
      assert Mask >= 1 && (Mask & (Mask+1)) == 0      // document invariants
      // doorway step :
      // Obtain a sequence number -- a ticket.
      // As a practical concern the ticket value is temporally unique.
      // The ticket also identifies and selects a slot.
      auto tkt = AtomicFetchIncrement (&PutCursor, 1)
      slot * s = &Slots[tkt & Mask]
      // waiting phase :
      // Wait for the slot's generation to match the tkt value assigned to this put() invocation.
      // The "generation" is implicitly encoded as the upper bits in the cursor
      // above those used to specify the index : tkt div (Mask+1).
      // The generation serves as an epoch number to identify a cohort of threads
      // accessing disjoint slots.
      while s->Turn != tkt : Pause
      assert s->MailBox == null
      s->MailBox = v                                  // deposit and pass message

    Take() :
      // consumer : partial method - waits as necessary
      auto tkt = AtomicFetchIncrement (&TakeCursor, 1)
      slot * s = &Slots[tkt & Mask]
      // 2-stage waiting :
      // First wait for our turn in this generation, which acquires exclusive
      // "take" access to the slot's MailBox field.
      // Then wait for the slot to become occupied.
      while s->Turn != tkt : Pause
      // Concurrency in this section of code is now reduced to just 1 producer thread
      // vs 1 consumer thread. For a given queue and slot, there will be at most one
      // Take() operation running in this section.
      // The consumer waits for the producer to arrive and make the slot non-empty,
      // then extracts the message, clears the mailbox, and advances the Turn indicator.
      // We have an obvious happens-before relation :
      //   Put(m) happens-before the corresponding Take() that returns that same "m".
      for
        T v = s->MailBox
        if v != null :
          s->MailBox = null
          ST-ST barrier
          s->Turn = tkt + Mask + 1                    // unlock slot to admit next producer and consumer
          return v
        Pause

    tryTake() :
      // total method - returns ASAP with failure indication
      for
        auto tkt = TakeCursor
        slot * s = &Slots[tkt & Mask]
        if s->Turn != tkt : return null
        T v = s->MailBox                              // presumptive return value
        if v == null : return null
        // ratify tkt and v values and commit by advancing the cursor
        if CAS (&TakeCursor, tkt, tkt+1) != tkt : continue
        s->MailBox = null
        ST-ST barrier
        s->Turn = tkt + Mask + 1
        return v

    The basic idea derives from the Partitioned Ticket Lock "PTL" (US20120240126-A1) and the MultiLane Concurrent Bag (US8689237). The latter is essentially a circular ring-buffer where the elements themselves are queues or concurrent collections. You can think of the PTLQueue as a partitioned ticket lock "PTL" augmented to pass values from lock to unlock via the slots. Alternatively, you could conceptualize PTLQueue as a degenerate MultiLane bag where each slot or "lane" consists of a simple single-word MailBox instead of a general queue. Each lane in PTLQueue also has a private Turn field which acts like the Turn (Grant) variables found in PTL. Turn enforces strict FIFO ordering and restricts concurrency on the slot mailbox field to at most one simultaneous put() and take() operation. PTL uses a single "ticket" variable and per-slot Turn (grant) fields, while MultiLane has distinct PutCursor and TakeCursor cursors and abstract per-slot sub-queues. Both PTL and MultiLane advance their cursor and ticket variables with atomic fetch-and-increment.
    PTLQueue borrows from both PTL and MultiLane and has distinct put and take cursors and per-slot Turn fields. Instead of per-slot queues, PTLQueue uses a simple single-word MailBox field. PutCursor and TakeCursor act like a pair of ticket locks, conferring "put" and "take" access to a given slot. PutCursor, for instance, assigns an incoming put() request to a slot and serves as a PTL "Ticket" to acquire "put" permission to that slot's MailBox field.
    To better explain the operation of PTLQueue we deconstruct the operation of put() and take() as follows. Put() first increments PutCursor, obtaining a new unique ticket. That ticket value also identifies a slot. Put() next waits for that slot's Turn field to match that ticket value. This is tantamount to using a PTL to acquire "put" permission on the slot's MailBox field. Finally, having obtained exclusive "put" permission on the slot, put() stores the message value into the slot's MailBox. Take() similarly advances TakeCursor, identifying a slot, and then acquires and secures "take" permission on that slot by waiting for Turn. Take() then waits for the slot's MailBox to become non-empty, extracts the message, and clears MailBox. Finally, take() advances the slot's Turn field, which releases both "put" and "take" access to the slot's MailBox. Note the asymmetry: put() acquires "put" access to the slot, but take() releases that lock. At any given time, for a given slot in a PTLQueue, at most one thread has "put" access and at most one thread has "take" access. This restricts concurrency from general MPMC to 1-vs-1. We have 2 ticket locks -- one for put() and one for take() -- each with its own "ticket" variable in the form of the corresponding cursor, but they share a single "Grant" egress variable in the form of the slot's Turn variable. Advancing the PutCursor, for instance, serves two purposes. First, we obtain a unique ticket which identifies a slot. Second, incrementing the cursor is the doorway protocol step to acquire the per-slot mutual exclusion "put" lock. The cursors and the operations to increment them serve double duty: slot selection and ticket assignment for locking the slot's MailBox field.
    At any given time a slot's MailBox field can be in one of the following states: empty with no pending operations -- neutral; empty with one or more waiting take() operations pending -- deficit; occupied with no pending operations; occupied with one or more waiting put() operations -- surplus; empty with a pending put(), or pending put() and take() operations -- transitional; or occupied with a pending take(), or pending put() and take() operations -- transitional.
    The partial put() and take() operators can be implemented with an atomic fetch-and-increment operation, which may confer a performance advantage over a CAS-based loop. In addition we have independent PutCursor and TakeCursor cursors. Critically, a put() operation modifies PutCursor but does not access TakeCursor, and a take() operation modifies TakeCursor but does not access PutCursor. This acts to reduce coherence traffic relative to some other queue designs. It's also worth noting that slow threads or obstruction in one slot (or "lane") does not impede or obstruct operations in other slots -- this gives us some degree of obstruction isolation. PTLQueue is not lock-free, however. The implementation above is expressed with polite busy-waiting (Pause), but it's trivial to implement per-slot parking and unparking to deschedule waiting threads.
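    To make the description above concrete, here is a minimal Java sketch of the algorithm as described -- a sketch under stated assumptions, not the author's implementation. The class and member names (putCursor, takeCursor, turns, mailboxes) are chosen here for readability; inter-slot padding, per-slot park/unpark, and the wrap-around caveat mentioned earlier are deliberately omitted, and Java 9's Thread.onSpinWait() stands in for Pause.

    import java.util.concurrent.atomic.AtomicLong;
    import java.util.concurrent.atomic.AtomicLongArray;
    import java.util.concurrent.atomic.AtomicReferenceArray;

    // Minimal PTLQueue-style bounded MPMC queue (illustrative sketch only).
    final class PTLQueue<T> {
        private final int mask;                           // capacity - 1; capacity is a power of two
        private final AtomicLongArray turns;              // per-slot Turn fields
        private final AtomicReferenceArray<T> mailboxes;  // per-slot MailBox fields
        private final AtomicLong putCursor = new AtomicLong();
        private final AtomicLong takeCursor = new AtomicLong();

        PTLQueue(int capacity) {
            if (capacity <= 0 || Integer.bitCount(capacity) != 1)
                throw new IllegalArgumentException("capacity must be a power of two");
            mask = capacity - 1;
            turns = new AtomicLongArray(capacity);
            mailboxes = new AtomicReferenceArray<>(capacity);
            for (int i = 0; i < capacity; i++) turns.set(i, i);   // slot i initially serves ticket i
        }

        // Partial put(): the fetch-and-increment is the doorway step; waiting on Turn
        // acquires exclusive "put" access to the selected slot's mailbox.
        void put(T v) {
            if (v == null) throw new NullPointerException("null values are not permitted");
            long tkt = putCursor.getAndIncrement();
            int s = (int) (tkt & mask);
            while (turns.get(s) != tkt) Thread.onSpinWait();      // wait for our generation
            mailboxes.set(s, v);                                  // deposit and pass the message
        }

        // Partial take(): wait for our turn, then for the producer, then extract,
        // clear the mailbox, and advance Turn to release the slot to the next generation.
        T take() {
            long tkt = takeCursor.getAndIncrement();
            int s = (int) (tkt & mask);
            while (turns.get(s) != tkt) Thread.onSpinWait();      // acquire "take" access
            T v;
            while ((v = mailboxes.get(s)) == null) Thread.onSpinWait(); // wait for the producer
            mailboxes.set(s, null);
            turns.set(s, tkt + mask + 1);                         // unlock slot for the next put()/take() pair
            return v;
        }

        // Total tryTake(): returns null promptly instead of waiting; the CAS on the
        // cursor ratifies the observed ticket and value before committing.
        T tryTake() {
            for (;;) {
                long tkt = takeCursor.get();
                int s = (int) (tkt & mask);
                if (turns.get(s) != tkt) return null;
                T v = mailboxes.get(s);                           // presumptive return value
                if (v == null) return null;
                if (!takeCursor.compareAndSet(tkt, tkt + 1)) continue; // lost the race; retry
                mailboxes.set(s, null);
                turns.set(s, tkt + mask + 1);
                return v;
            }
        }
    }

    A quick smoke test is one producer thread calling put(i) for increasing i and one consumer draining with take(); with a power-of-two capacity the cursors and per-slot Turn values line up exactly as in the pseudocode above.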
    It's also easy to convert the queue to a more general deque by replacing the PutCursor and TakeCursor cursors with Left/Front and Right/Back cursors that can move in either direction. Specifically, to push and pop from the "left" side of the deque we would decrement and increment the Left cursor, respectively, and to push and pop from the "right" side of the deque we would increment and decrement the Right cursor, respectively. We used a variation of PTLQueue for message passing in our recent OPODIS 2013 paper.
    There's quite a bit of related literature in this area. I'll call out a few relevant references:
    Wilson's NYU Courant Institute UltraComputer dissertation from 1988 is classic and the canonical starting point: Operating System Data Structures for Shared-Memory MIMD Machines with Fetch-and-Add. Regarding provenance and priority, I think PTLQueue or queues effectively equivalent to PTLQueue have been independently rediscovered a number of times. See CB-Queue and BNPBV, below, for instance. But Wilson's dissertation anticipates the basic idea and seems to predate all the others.
    Gottlieb et al: Basic Techniques for the Efficient Coordination of Very Large Numbers of Cooperating Sequential Processors
    Orozco et al: CB-Queue in Toward high-throughput algorithms on many-core architectures, which appeared in TACO 2012.
    Meneghin et al: BNPVB family in Performance evaluation of inter-thread communication mechanisms on multicore/multithreaded architecture
    Dmitry Vyukov: bounded MPMC queue (highly recommended)
    Alex Otenko: US8607249 (highly related).
    John Mellor-Crummey: Concurrent queues: Practical fetch-and-phi algorithms. Technical Report 229, Department of Computer Science, University of Rochester
    Thomasson: FIFO Distributed Bakery Algorithm (very similar to PTLQueue).
    Scott and Scherer: Dual Data Structures
    I'll propose an optimization left as an exercise for the reader. Say we wanted to reduce memory usage by eliminating inter-slot padding. Such padding is usually "dark" memory, otherwise unused and wasted. But eliminating the padding leaves us at risk of increased false sharing. Furthermore, let's say it was usually the case that the PutCursor and TakeCursor were numerically close to each other. (That's true in some use cases.) We might still reduce false sharing by incrementing the cursors by some value other than 1 that is not trivially small and is coprime with the number of slots. Alternatively, we might increment the cursor by one and mask as usual, resulting in a logical index. We then use that logical index value to index into a permutation table, yielding an effective index for use in the slot array. The permutation table would be constructed so that nearby logical indices map to more distant effective indices. (Open question: what should that permutation look like? Possibly some perversion of a Gray code or De Bruijn sequence might be suitable.) As an aside, say we need to busy-wait for some condition as follows: "while C == 0 : Pause". Let's say that C is usually non-zero, so we typically don't wait. But when C happens to be 0 we'll have to spin for some period, possibly brief. We can arrange for the code to be more machine-friendly with respect to the branch predictors by transforming the loop into: "if C == 0 : for { Pause; if C != 0 : break; }".
    Critically, we want to restructure the loop so there's one branch that controls entry and another that controls loop exit. A concern is that your compiler or JIT might be clever enough to transform this back to "while C == 0 : Pause". You can sometimes avoid this by inserting a call to some type of very cheap "opaque" method that the compiler can't elide or reorder. On Solaris, for instance, you could use: "if C == 0 : { gethrtime(); for { Pause; if C != 0 : break; }}". It's worth noting the obvious duality between locks and queues. If you have a strict FIFO lock implementation with local spinning and succession by direct handoff, such as MCS or CLH, then you can usually transform that lock into a queue.
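    Returning to the busy-wait aside above, here is a hedged Java rendering of the two loop shapes; "ready" is a placeholder volatile flag, and a modern JIT may well canonicalize both forms, so treat this as an illustration of the intended branch structure rather than a guaranteed optimization.

    final class SpinUntilReady {
        volatile boolean ready;

        // Original form: a single branch does double duty as entry test and loop exit.
        void awaitReadyNaive() {
            while (!ready) {
                Thread.onSpinWait();
            }
        }

        // Restructured form: the common already-ready case is decided by one entry branch,
        // and the uncommon spinning case gets its own exit branch inside the loop.
        void awaitReadyRestructured() {
            if (!ready) {
                for (;;) {
                    Thread.onSpinWait();
                    if (ready) break;
                }
            }
        }
    }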
Acquire "put" access to slot via PTL-like lock Acquire "take" access to slot via PTL-like lock 2 locks : put and take -- at most one thread can access slot's mailbox Both locks use same "turn" field Like multilane : 2 cursors : put and take slot is simple 1-capacity mailbox instead of queue Borrow per-slot turn/grant from PTL Provides strict FIFO Lock slot : put-vs-put take-vs-take at most one put accesses slot at any one time at most one put accesses take at any one time reduction to 1-vs-1 instead of N-vs-M concurrency Per slot locks for put/take Release put/take by advancing turn * is instrumental in ... * P-V Semaphore vs lock vs K-exclusion * See also : FastQueues-excerpt.java dice-etc/queue-mpmc-bounded-blocking-circular-xadd/ * PTLQueue is the same as PTLQB - identical * Expedient return; ASAP; prompt; immediately * Lamport's Bakery algorithm : doorway step then waiting phase Threads arriving at doorway obtain a unique ticket number Threads enter in ticket order * In the terminology of Reed and Kanodia a ticket lock corresponds to the busy-wait implementation of a semaphore using an eventcount and a sequencer It can also be thought of as an optimization of Lamport's bakery lock was designed for fault-tolerance rather than performance Instead of spinning on the release counter, processors using a bakery lock repeatedly examine the tickets of their peers --

    Read the article

  • debootstrap or virt-install Ubuntu Server Maverick fails

    - by poelinca
    Okay, so running any kind of variation of debootstrap I get the following error:
    I: Extracting zlib1g... W: Failure trying to run: chroot /lxc/iso/dodo mount -t proc proc /proc
    debootstrap.log: mount: permission denied
    If I manually chroot into the directory I get prompted with:
    id: cannot find name for group ID 0 I have no name!@...#
    I tried addgroup but it's not installed, and apt-get/aptitude report command not found, so I can't do anything with it. I've tried ubuntu-vm-builder, but since it calls debootstrap I get the same error. I played with it for a few days and then stopped and gave virt-install a try; everything works until I get to the console to finish the install, which shows only: Escape character is ^] and nothing more, no matter what I type. So basically what I'm trying to do is build a usable chroot system so I can use it with lxc or libvirt. What are my options to get containers/virtualisation up and running? I've read somewhere that I can use openvz templates with lxc or libvirt? But how? Let me know if you need additional info (p.s. I'm doing all this on a dedicated server, so I can't access it by hand, only over ssh; on my local pc running Ubuntu Desktop Maverick everything works).
    EDIT: Getting closer. I managed to understand how to use an openvz template with lxc; now the problem comes with the network bridge:
    lxc-start: invalid interface name: br0 # Use same bridge device used in your controlling host setup
    lxc-start: failed to process 'lxc.network.link = br0 # Use same bridge device used in your controlling host setup '
    lxc-start: failed to read configuration file
    I followed the exact steps to create a bridge, and the lxc config looks like:
    lxc.network.type = veth
    lxc.network.flags = up
    lxc.network.link = br0 # Use same bridge device used in your controlling host setup
    lxc.network.hwaddr = {a1:b2:c3:d4:e5:f6} # As appropriate (line only needed if you wish to dhcp later)
    lxc.network.ipv4 = {10.0.0.100} # (Use 0.0.0.0 if you wish to dhcp later)
    lxc.network.name = eth0 # could likely be whatever you want
    Since it's not working I know something is wrong, so could somebody guide me?
    EDIT: It looks like the base install was using a custom kernel (bzImage-2.6.34.6-xxxx-grs-ipv6-65) for which I didn't find the headers. I did an update-grub after installing a new kernel, edited menu.lst, and now it's using 2.6.35-23-server; debootstrap is working just fine, and so is ubuntu-vm-builder.

    Read the article

  • Oracle WebCenter: Social Networking & Collaboration

    - by kellsey.ruppel(at)oracle.com
    We've talked in previous weeks about how the key goals of the new release of WebCenter are providing a Modern User Experience, unparalleled Application Integration, converging all the best of the existing portal platforms into WebCenter, and delivering a Common User Experience Architecture. We've provided an overview of Oracle WebCenter and discussed some of the other key goals in previous weeks, and this week we'll focus on how the new release of Oracle WebCenter provides unprecedented Social Networking and Collaboration. We recently talked with Carin Chan, Principal Product Manager at Oracle, around the topic of Social Networking and Collaboration.
    In today's work environment, employees have come to expect social and collaborative services to augment their work environment. Whether it is to post a blog or to poll fellow coworkers, employees expect and demand access to highly integrated, collaborative work environments that allow them to quickly contribute at work -- whether it is to make informed decisions, contribute on projects, or share knowledge. Social and collaborative services from Oracle WebCenter add an immeasurable amount of value to achieving a modern user experience. Oracle WebCenter Services provides rich and comprehensive social computing services, including wikis, blogs, instant messaging, presence, activity streams and graphs, and polls/surveys, that give employees access to rich collaborative services to work efficiently. Employees can create pages or spaces that mix and match collaborative services while bringing in data from other applications to share with groups, teams, or organizations. These out-of-the-box social and collaborative services include:
    People Connections and Activity Streams enable users to quickly assemble and visualize their social business networks and track user activities.
    Activity Graphs track all user activities in real time and gather intelligence about these users, their connections, and the way they use information to make educated recommendations and provide on-the-spot information discovery.
    Wikis and blogs enable the community authoring of documents and sharing of ideas, and also allow for the gathering of feedback and comments on those ideas.
    Tags and links allow users to easily mark, connect, and share information with others.
    RSS feeds are available to track new or changed information related to discussion forums, processes, or activities in an Oracle WebCenter environment.
    Discussion forums enable sharing of group knowledge and easy creation of communities around specific topics.
    Announcements allow you to manage and publish important news to your user base.
    Instant Messaging and Presence enable real-time awareness and communication with available users in the context of a business task.
    Web and Voice Conferencing enables real-time communication with internal and external business users.
    Lists provide a way to manage list data directly on the web as well as export and import it from and to Microsoft Excel.
    Oracle WebCenter Analytics provides comprehensive reporting metrics on activity and content usage within portals or composite applications.
    Activity Streams allow you to track activities and visualize your business networks.
    While being able to integrate into your portal deployment, these services are also integrated into how users are already working. This includes integration with software such as Microsoft Outlook and Microsoft Office, and with mobile devices such as the Apple iPhone.
    These services are just the tip of the iceberg regarding the social and collaborative services that Oracle WebCenter has to offer your employees. Be sure to keep checking back this week: in future posts we'll delve deeper into a few of these collaborative services and discuss how a combination of collaborative services offers a better portal deployment to empower business users. Technorati Tags: UXP, collaboration, enterprise 2.0, modern user experience, oracle, portals, webcenter, social, activity streams, blogs, wikis

    Read the article

< Previous Page | 267 268 269 270 271 272 273 274 275 276 277 278  | Next Page >