Search Results

Search found 11675 results on 467 pages for 'parallel testing'.

Page 152/467

  • When to use each user research method

    - by user12277104
    There are a lot of user research methods out there, but sometimes we get stuck in a rut, conducting all formative usability testing before coding, or running surveys to gather satisfaction data. I'll be the first to admit that it happens to me, but to get out of a rut, it just takes a minute to look at where I am in the design & development cycle, what kind(s) of data I need, and what methods are available to me. We need reminders, or refreshers, every once in a while. One tool I've found useful is a graphic organizer that I created many years ago. It's been through several revisions, as I've adapted it to the product cycles of the places I've worked, changed my mind about how to categorize it, and added methods that I've used or created over time. I shared a version of this table at the 2012 International UPA conference, and I was contacted by someone yesterday who wanted to use it in a university course on user-centered design. I was flattered at the thought, but embarrassed, because I was sure it needed updating -- that was a year ago, after all. But I opened it today, and really, there's not much I'd change -- sure, I could add some nuance regarding types of formative testing, such as modality (remote, unmoderated remote, or in-person) or flavor of testing (RITE, RITE-Krug, comparative, performance), but I think it's pretty much ok as is. Click on the image below, to get the full-size PDF. And whether it's entirely "right" or "wrong" isn't the whole value of looking at these methods across the product lifecycle. The real value lies in the reminder that I have options. And what those options are changes as the field changes, so while I don't expect this graphic to have an eternal shelf life, it's still ok a year after I last updated it. That said, if you find something missing or out of place, let me know :)

    Read the article

  • How can I render a semi transparent model with OpenGL correctly?

    - by phobitor
    I'm using OpenGL ES 2 and I want to render a simple model with some level of transparency. I'm just starting out with shaders, and I wrote a simple diffuse shader for the model without any issues but I don't know how to add transparency to it. I tried to set my fragment shader's output (gl_FragColor) to a non opaque alpha value but the results weren't too great. It sort of works, but it looks like certain model triangles are only rendered based on the camera position... It's really hard to describe what's wrong so please watch this short video I recorded: http://www.youtube.com/watch?v=s0JqA0rZabE I thought this was a depth testing issue so I tried playing around with enabling/disabling depth testing and back face culling. Enabling back face culling changes the output slightly but the problem in the video is still there. Enabling/disabling depth testing doesn't seem to do anything. Could anyone explain what I'm seeing and how I can add some simple transparency to my model with the shader? I'm not looking for advanced order independent transparency implementations. edit: Vertex Shader: // color varying for fragment shader varying mediump vec3 LightIntensity; varying highp vec3 VertexInModelSpace; void main() { // vec4 LightPosition = vec4(0.0, 0.0, 0.0, 1.0); vec3 LightColor = vec3(1.0, 1.0, 1.0); vec3 DiffuseColor = vec3(1.0, 0.25, 0.0); // find the vector from the given vertex to the light source vec4 vertexInWorldSpace = gl_ModelViewMatrix * vec4(gl_Vertex); vec3 normalInWorldSpace = normalize(gl_NormalMatrix * gl_Normal); vec3 lightDirn = normalize(vec3(LightPosition-vertexInWorldSpace)); // save vertexInWorldSpace VertexInModelSpace = vec3(gl_Vertex); // calculate light intensity LightIntensity = LightColor * DiffuseColor * max(dot(lightDirn,normalInWorldSpace),0.0); // calculate projected vertex position gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; } Fragment Shader: // varying to define color varying vec3 LightIntensity; varying vec3 VertexInModelSpace; void main() { gl_FragColor = vec4(LightIntensity,0.5); }
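
    The usual remedy for artifacts like the ones in the video is not in the shader at all: draw all opaque geometry first, then draw the transparent model with blending enabled and depth writes disabled (ideally sorted back to front). A minimal sketch of that pass ordering, assuming OpenTK-style C# bindings (enum and method names may differ in other bindings; drawOpaque and drawTransparent are placeholder callbacks, not part of the question's code):

      using System;
      using OpenTK.Graphics.OpenGL;   // assumption: OpenTK-style binding; GL ES 2 bindings use the same call names

      static class TransparencyPass
      {
          // drawOpaque / drawTransparent stand in for the existing draw calls.
          public static void Render(Action drawOpaque, Action drawTransparent)
          {
              GL.Enable(EnableCap.DepthTest);
              GL.DepthMask(true);              // opaque pass writes depth as usual
              drawOpaque();

              GL.Enable(EnableCap.Blend);
              GL.BlendFunc(BlendingFactor.SrcAlpha, BlendingFactor.OneMinusSrcAlpha);
              GL.DepthMask(false);             // transparent pass tests depth but does not write it
              drawTransparent();               // ideally sorted back to front

              GL.DepthMask(true);              // restore state for the next frame
              GL.Disable(EnableCap.Blend);
          }
      }

    Depth testing stays enabled so transparent fragments hidden behind opaque geometry are still discarded; only depth writes are turned off, so the transparent triangles no longer mask each other depending on draw order and camera position.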

    Read the article

  • State Changes in a Component Based Architecture [closed]

    - by Maxem
    I'm currently working on a game and using the naive component-based architecture thingie (Entities are a bag of components, entity.Update() calls Update on each updateable component). While the addition of new features is really simple, it makes a few things really difficult: a) multithreading / concurrency b) networking c) unit testing. Multithreading / Concurrency is difficult because I basically have to do poor man's concurrency (running the entity updates in separate threads while locking only stuff that crashes (like lists) and ignoring the staleness of read state (some states are already updated, others aren't)). Networking: There are no explicit state changes that I could efficiently push over the net. Unit testing: All updates may or may not conflict, so automated testing is at least awkward. I was thinking about these issues a bit and would like your input on these changes / ideas: Switch from the naive CBA to a CBA with sub-systems that work on lists of components Make all state changes explicit Combine 1 and 2 :p Example world update: statePostProcessing.Wait() // ensure that post processing has finished Apply(postProcessedState) state = new StateBag() Concurrently( () => LifeCycleSubSystem.Update(state), // populates the state bag () => MovementSubSystem.Update(state), // populates the state bag .... }) statePostProcessing = Future(() => PostProcess(state)) statePostProcessing.Start() // Tick is finished, the post processing happens in the background So basically the changes are (consistently) based on the data for the last tick; the post processing can a) generate network packets and b) fix conflicts / remove useless changes (example: entity has been destroyed - ignore movement etc.). EDIT: To clarify the granularity of the state changes: If I save these post-processed state bags and apply them to an empty world, I see exactly what has happened in the game these state bags originated from - "Free" replay capability. EDIT2: I guess I should have used the term Event instead of State Change and point out that I kind of want to use the Event Sourcing pattern.
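
    Purely as an illustration of the proposed structure (all names below are invented for the sketch, not taken from the post), the "sub-systems record explicit changes into a per-tick bag" idea might look roughly like this in C#:

      using System.Collections.Concurrent;
      using System.Collections.Generic;
      using System.Threading.Tasks;

      // All type and member names here are hypothetical.
      public record StateChange(int EntityId, string Component, object NewValue);

      public sealed class StateBag
      {
          private readonly ConcurrentBag<StateChange> changes = new();
          public void Record(StateChange change) => changes.Add(change);
          public IReadOnlyCollection<StateChange> Drain() => changes.ToArray();
      }

      public interface ISubSystem { void Update(StateBag state); }

      public static class World
      {
          public static StateBag Tick(IEnumerable<ISubSystem> subSystems)
          {
              var state = new StateBag();
              // Each sub-system walks its own component list and only *records* changes.
              Parallel.ForEach(subSystems, s => s.Update(state));
              return state; // post-process later: build packets, drop useless changes, etc.
          }
      }

    The point of the sketch is that sub-systems never mutate entities directly during the tick; they only append change records, which the post-processing step can later turn into network packets, replay logs, or assertions in a unit test.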

    Read the article

  • Git Branch Model for iOS projects with one developer

    - by glenwayguy
    I'm using git for an iOS project, and so far have the following branch model: feature_branch (usually multiple) -> development -> testing -> master Feature-branches are short-lived, just used to add a feature or fix a bug, then merged back into development and deleted. Development is fairly stable, but not ready for production. Testing is when we have a stable version with enough features for a new update, and we ship to beta testers. Once testing is finished, it can be moved back into development or advanced into master. The problem, however, lies in the fact that we can't instantly deploy. On iOS, it can be several weeks between the time a build is released and when it actually hits users. I always want to have a version of the code that is currently on the market in my repo, but I also have to have a place to keep the current stable code to be sent for release. So: where should I keep stable code, where should I keep the code currently on the market, and where should I keep the code that is in review with Apple, and will (hopefully) be put on the market soon? Also, this is a one-developer team, so collaboration is not totally necessary, but preferred because there may be more members in the future.

    Read the article

  • How do you unit test a unit test?

    - by FlySwat
    I was watching Rob Connery's webcasts on the MVCStoreFront App, and I noticed he was unit testing even the most mundane things, things like: public Decimal DiscountPrice { get { return this.Price - this.Discount; } } Would have a test like: [TestMethod] public void Test_DiscountPrice { Product p = new Product(); p.Price = 100; p.Discount = 20; Assert.IsEqual(p.DiscountPrice,80); } While I am all for unit testing, I sometimes wonder if this form of test-first development is really beneficial, for example, in a real process, you have 3-4 layers above your code (Business Request, Requirements Document, Architecture Document), where the actual defined business rule (Discount Price is Price - Discount) could be misdefined. If that's the situation, your unit test means nothing to you. Additionally, your unit test is another point of failure: [TestMethod] public void Test_DiscountPrice { Product p = new Product(); p.Price = 100; p.Discount = 20; Assert.IsEqual(p.DiscountPrice,90); } Now the test is flawed. Obviously in a simple test, it's no big deal, but say we were testing a complicated business rule. What do we gain here? Fast forward two years into the application's life, when maintenance developers are maintaining it. Now the business changes its rule, and the test breaks again, some rookie developer then fixes the test incorrectly...we now have another point of failure. All I see is more possible points of failure, with no real beneficial return; if the discount price is wrong, the test team will still find the issue, so how did unit testing save any work? What am I missing here? Please teach me to love TDD, as I'm having a hard time accepting it as useful so far. I want to, because I want to stay progressive, but it just doesn't make sense to me. EDIT: A couple of people keep mentioning that testing helps enforce the spec. It has been my experience that the spec has been wrong as well, more often than not, but maybe I'm doomed to work in an organization where the specs are written by people who shouldn't be writing specs.
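
    One common way to blunt the "the test itself can be wrong" objection is to express the rule once and feed it several example values, so a single mistyped expectation stands out. A minimal sketch using MSTest data rows (assuming the Product class behaves as shown in the post):

      using Microsoft.VisualStudio.TestTools.UnitTesting;

      [TestClass]
      public class ProductTests
      {
          // Several examples of the one rule (DiscountPrice = Price - Discount);
          // a typo in a single expected value is easier to spot than in a lone hard-coded assert.
          [DataTestMethod]
          [DataRow(100.0, 20.0, 80.0)]
          [DataRow(50.0, 0.0, 50.0)]
          [DataRow(19.99, 5.00, 14.99)]
          public void DiscountPrice_Is_Price_Minus_Discount(double price, double discount, double expected)
          {
              var p = new Product();
              p.Price = (decimal)price;        // DataRow cannot carry decimal literals, hence the cast
              p.Discount = (decimal)discount;
              Assert.AreEqual((decimal)expected, p.DiscountPrice);
          }
      }

    This does not remove the risk of a misdefined business rule, but it makes a lone typo like 90-instead-of-80 much easier to spot in review.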

    Read the article

  • log4bash: Cannot find a way to add MaxBackupIndex to this logger implementation

    - by Syffys
    I have been trying to modify this log4bash implementation but I cannot manage to make it work. Here's a sample: #!/bin/bash TRUE=1 FALSE=0 ############### Added for testing log4bash_LOG_ENABLED=$TRUE log4bash_rootLogger=$TRACE,f,s log4bash_appender_f=file log4bash_appender_f_dir=$(pwd) log4bash_appender_f_file=test.log log4bash_appender_f_roll_format=%Y%m log4bash_appender_f_roll=$TRUE log4bash_appender_f_maxBackupIndex=10 #################################### log4bash_abs(){ if [ "${1:0:1}" == "." ]; then builtin echo ${rootDir}/${1} else builtin echo ${1} fi } log4bash_check_app_dir(){ if [ "$log4bash_LOG_ENABLED" -eq $TRUE ]; then dir=$(log4bash_abs $1) if [ ! -d ${dir} ]; then #log a seperation line mkdir $dir fi fi } # Delete old log files # $1 Log directory # $2 Log filename # $3 Log filename suffix # $4 Max backup index log4bash_delete_old_files(){ ##### Added for testing builtin echo "Running log4bash_delete_old_files $@" &2 ##### if [ "$log4bash_LOG_ENABLED" -eq $TRUE ] && [ -n "$3" ] && [ "$4" -gt 0 ]; then local directory=$(log4bash_abs $1) local filename=$2 local maxBackupIndex=$4 local suffix=$(echo "${3}" | sed -re 's/[^.]/?/g') local logFileList=$(find "${directory}" -mindepth 1 -maxdepth 1 -name "${filename}${suffix}" -type f | xargs ls -1rt) local fileCnt=$(builtin echo -e "${logFileList}" | wc -l) local fileToDeleteCnt=$(($fileCnt-$maxBackupIndex)) local fileToDelete=($(builtin echo -e "${logFileList}" | head -n "${fileToDeleteCnt}" | sed ':a;N;$!ba;s/\n/ /g')) ##### Added for testing builtin echo "log4bash_delete_old_files About to start deletion ${fileToDelete[@]}" &2 ##### if [ ${fileToDeleteCnt} -gt 0 ]; then for f in "${fileToDelete[@]}"; do #### Added for testing builtin echo "Removing file ${f}" &2 #### builtin eval rm -f ${f} done fi fi } #Appender # $1 Log directory # $2 Log file # $3 Log file roll ? 
# $4 Appender Name log4bash_filename(){ builtin echo "Running log4bash_filename $@" &2 local format local filename log4bash_check_app_dir "${1}" if [ ${3} -eq 1 ];then local formatProp=${4}_roll_format format=${!formatProp} if [ -z ${format} ]; then format=$log4bash_appender_file_format fi local suffix=.`date "+${format}"` filename=${1}/${2}${suffix} # Old log files deletion local previousFilenameVar=int_${4}_file_previous local maxBackupIndexVar=${4}_maxBackupIndex if [ -n "${!maxBackupIndexVar}" ] && [ "${!previousFilenameVar}" != "${filename}" ]; then builtin eval export $previousFilenameVar=$filename log4bash_delete_old_files "${1}" "${2}" "${suffix}" "${!maxBackupIndexVar}" else builtin echo "log4bash_filename $previousFilenameVar = ${!previousFilenameVar}" fi else filename=${1}/${2} fi builtin echo $filename } ######################## Added for testing filename_caller(){ builtin echo "filename_caller Call $1" output=$(log4bash_abs $(log4bash_filename "${log4bash_appender_f_dir}" "${log4bash_appender_f_file}" "1" "log4bash_appender_f" )) builtin echo ${output} } #### Previous logs generation for i in {1101..1120}; do file="${log4bash_appender_f_file}.2012${i:2:3}" builtin echo "${file} $i" touch -m -t "2012${i}0000" ${log4bash_appender_f_dir}/$file done for i in {1..4}; do filename_caller $i done I expect the log4bash_filename function to step into the following if only when the calculated log filename is different from the previous one: if [ -n "${!maxBackupIndexVar}" ] && [ "${!previousFilenameVar}" != "${filename}" ]; then For this scenario to apply, I'd need ${!previousFilenameVar} to be correctly set, but that's not the case, so log4bash_filename steps into this if all the time, which is really not necessary... It looks like the issue is due to the following line not working properly: builtin eval export $previousFilenameVar=$filename I have some theories to explain why: in the original code, functions are declared and exported as readonly, which makes them unable to modify global variables. I removed the readonly declarations in the above sample, but the problem persists. Function calls are performed in $(), which should make them run in separate shell instances, so variables modified there are not exported to the main shell. But I cannot manage to find a workaround to this issue... Any help is appreciated, thanks in advance!

    Read the article

  • SQL SERVER – Reducing CXPACKET Wait Stats for High Transactional Database

    - by pinaldave
    While engaging in a performance tuning consultation for a client, a situation occurred where they were facing a lot of CXPACKET Wait Stats. The client asked me if I could help them reduce this huge number of wait stats. I usually receive this kind of request from other clients as well, but the important thing to understand is whether this question has any merits or benefits, or not. Before we continue the resolution, let us understand what CXPACKET Wait Stats are. The official definition suggests that CXPACKET Wait Stats occur when trying to synchronize the query processor exchange iterator. You may consider lowering the degree of parallelism if a conflict concerning this wait type develops into a problem. (from BOL) In simpler words, when a parallel operation is created for a SQL query, there are multiple threads for a single query. Each thread deals with a different set of the data (or rows). For various reasons, one or more of the threads lag behind, creating the CXPACKET Wait Stat. Threads which finish first have to wait for the slower threads to finish. The wait recorded by a specific completed thread is called the CXPACKET Wait Stat. Note that the CXPACKET Wait is accumulated by the completed threads and not the ones which are unfinished. “Note that not all the CXPACKET wait types are bad. You might experience a case when it totally makes sense. There might also be cases when this is also unavoidable. If you remove this particular wait type for any query, then that query may run slower because the parallel operations are disabled for the query.” Now let us see what the best practices to reduce the CXPACKET Wait Stats are. Most of the suggestions you will find if you search online will tell you that you should set ‘maximum degree of parallelism’ to 1. I do agree with these suggestions, too; however, I think this is not the final resolution. As soon as you set your entire server to run every query on a single CPU, you will get very bad performance from the queries which are actually performing okay when using parallelism. The best suggestion is that you set ‘maximum degree of parallelism’ to a lower number or 1 (be very careful with this – it can create more problems), but tune the queries which can benefit from multiple CPUs. You can use the query hint OPTION (MAXDOP 0) to allow a specific query to use parallelism. Here are two quick scripts which help to resolve these issues: Change MAXDOP at Server Level EXEC sys.sp_configure N'max degree of parallelism', N'1' GO RECONFIGURE WITH OVERRIDE GO Run Query with all the CPUs (using parallelism) USE AdventureWorks GO SELECT * FROM Sales.SalesOrderDetail ORDER BY ProductID OPTION (MAXDOP 0) GO Below is the blog post which will help you to find all the parallel queries in your server: SQL SERVER – Find Queries using Parallelism from Cached Plan Please note that running queries on a single CPU may worsen your performance, and it is not recommended at all. In fact, this can be very bad advice. I strongly suggest that you identify the queries which are offending and tune them instead of following any other suggestions. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, T SQL, Technology

    Read the article

  • Romanian partner Omnilogic Delivers “No Limits” Scalability, Performance, Security, and Affordability through Next-Generation, Enterprise-Grade Engineered Systems

    - by swalker
    Omnilogic SRL is a leading technology and information systems provider in Romania and central and Eastern Europe. An Oracle Value-Added Distributor Partner, Omnilogic resells Oracle software, hardware, and engineered systems to Oracle Partner Network members and provides specialized training, support, and testing facilities. Independent software vendors (ISVs) also use Omnilogic’s demonstration and testing facilities to upgrade the performance and efficiency of their solutions and those of their customers by migrating them from competitor technologies to Oracle platforms. Omnilogic also has a dedicated offering for ISV solutions, based on Oracle technology in a hosting service provider model. Omnilogic wanted to help Oracle Partners and ISVs migrate solutions to Oracle Exadata and sell Oracle Exadata to end-customers. It installed Oracle Exadata Database Machine X2-2 Quarter Rack at its data center to create a demonstration and testing environment. Demonstrations proved that Oracle Exadata achieved processing speeds up to 100 times faster than competitor systems, cut typical back-up times from 6 hours to 20 minutes, and stored 10 times more data. Oracle Partners and ISVs learned that migrating solutions to Oracle Exadata’s preconfigured, pre-integrated hardware and software can be completed rapidly, at low cost, without business disruption, and with reduced ongoing operating costs. Challenges A word from Omnilogic “Oracle Exadata is the new killer application—the smartest solution on the market. There is no competition.” – Sorin Dragomir, Chief Operating Officer, Omnilogic SRL Enable Oracle Partners in Romania and central and eastern Europe to achieve Oracle Exadata Ready status by providing facilities to test and optimize existing applications and build real-life proofs of concept (POCs) for new solutions on Oracle Exadata Database Machine Provide technical support and demonstration facilities for ISVs migrating their customers’ solutions from competitor technologies to Oracle Exadata to maximize performance, scalability, and security; optimize hardware and datacenter space; cut maintenance costs; and improve return on investment Demonstrate power of Oracle Exadata’s high-performance, high-capacity engineered systems for customer-facing businesses, such as government organizations, telecommunications, banking and insurance, and utility companies, which typically require continuous availability to support very large data volumes Showcase Oracle Exadata’s unchallenged online transaction processing (OLTP) capabilities that cut application run times to provide unrivalled query turnaround and user response speeds while significantly reducing back-up times and eliminating risk of unplanned outages Capitalize on providing a world-class training and demonstration environment for Oracle Exadata to accelerate sales with Oracle Partners Solutions Created a testing environment to enable Oracle Partners and ISVs to test their own solutions and those of their customers on Oracle Exadata running on Oracle Enterprise Linux or Oracle Solaris Express to benchmark performance prior to migration Leveraged expertise on Oracle Exadata to offer Oracle Exadata training, migration, support seminars and to showcase live demonstrations for Oracle Partners Proved how Oracle Exadata’s pre-engineered systems, that come assembled, configured, and ready to run, reduce deployment time and cost, minimize risk, and help customers achieve the full performance potential immediately after go live Increased processing 
speeds 10-fold and with zero data loss for a telecommunications provider’s client-facing customer relationship management solution Achieved performance improvements of between 6 and 100 times faster for financial and utility company applications currently running on IBM, Microsoft, or SAP HANA platforms Showed how daily closure procedures carried out overnight by banks, insurance companies, and other financial institutions to analyze each day’s business, can typically be cut from around six hours to 20 minutes, some 18 times faster, when running on Oracle Exadata Simulated concurrent back-ups while running applications under normal working conditions to prove that Oracle Exadata-based solutions can be backed up during business hours without causing bottlenecks or impacting the end-user experience Demonstrated that Oracle Exadata’s built-in analytics, data mining and OLTP capabilities make it the highest-performance, lowest-cost choice for large data warehousing operations Showed how Oracle Exadata’s columnar compression and intelligent storage architecture allows 10 times more data to be stored than on competitor platforms Demonstrated how Oracle Exadata cuts hardware requirements significantly by consolidating workloads on to fewer servers which delivers greater power efficiency and lower operating costs that competing systems from IBM and other manufacturers Proved to ISVs that migrating solutions to Oracle Exadata’s preconfigured, pre-integrated hardware and software can be completed rapidly, at low cost, and with minimal business disruption Demonstrated how storage servers, database servers, and network switches can be added incrementally and inexpensively to the Oracle Exadata platform to support business expansion On track to grow revenues by 10% in year one and by 15% annually thereafter through increased business generated from Oracle Partners and ISVs

    Read the article

  • How to use crontab, .netrc, and git push?

    - by Jon
    Hi all, I am in the process of automating the backups from various servers to a central point then pushing those config changes into a git repo so i can track any changes over time. The rest of the scripts are working well, I can copy / rsync the files across the network to a central point. The last script is to get the config files to be put into / updated in repository. The script is as follows: #!/bin/bash clear SERVERNAME="betty" SCRIPTDIR="/home/jon" GITROOT="/tmp/git" TEMPROOT="/tmp/backups" BACKUPROOTDIR="/mnt/backups" echo " - running as user: $UID" echo "backingup git config on $SERVERNAME" echo "" # check to see if root backup folder exists, otherwise create it. if [ -d $GITROOT ]; then rm -rf $GITROOT fi mkdir $GITROOT cd $GITROOT echo " - testing if home is where I think it should be!" echo $HOME echo " - testing if it can see netrc" tail $HOME/.netrc git clone http://192.168.10.97:8000/repositories/HOH-config-backups.git cd HOH-config-backups echo " - copy Configuration Folders across" cp -r $BACKUPROOTDIR/Configuration/* $GITROOT/HOH-config-backups/ cp -r $BACKUPROOTDIR/scripts $GITROOT/HOH-config-backups/ git add . git commit -a -m "committing any new configuration changes!" git push origin master echo "" echo "Git repo updated" echo "" echo " - backing up this script" FIREWIGSCRIPTLOC="$BACKUPROOTDIR/scripts/$SERVERNAME" if [ ! -d $FIREWIGSCRIPTLOC ]; then mkdir $FIREWIGSCRIPTLOC fi cp /home/jon/gitConfig.sh $FIREWIGSCRIPTLOC The git repo is on a different machine in the network using Apache and HTTP-backend.exe (smart HTTP protocol). If I run this script as me "jon" it works. If I run it in crontab it fails. git uses the /home/jon/.netrc file for authentication: machine 192.168.10.97 login gitconfig password 1234579 The log from crontab is: TERM environment variable not set. - running as user: 1000 backingup git config on betty - testing if home is where I think it should be! 
/home/jon - testing if it can see netrc machine 192.168.10.97 login gitconfig password 1234579 got 08de5bc2b27b4940d9412256e76d5e3c3d9dbcdd walk 08de5bc2b27b4940d9412256e76d5e3c3d9dbcdd got be880f2d306778a538d592e7a02eb19f416612f7 got bd387e8def9f77aafa798bf53e80d949aba443e8 got 1bc1a59e12775841d4c59d77c63b8a73823138c2 walk bd387e8def9f77aafa798bf53e80d949aba443e8 Getting alternates list for http://192.168.10.97:8000/repositories/HOH-config-backups.git got 030512237bca72faf211e0e8ec2906164eac34f6 got 9bc2f575240bc1f61ff7d69777ce1a165d06b184 got b8400f7f01429104a9d4786a6bb1a16d293e37c1 got 2403b5bf611010e0b401f776f0e23b09ce744838 got 1a27944c48269ef3608a8f2466e43402d06faac0 got b686f45b7d57af4fa8ca0d528bb85216d6247e19 Getting pack list for http://192.168.10.97:8000/repositories/HOH-config-backups.git Getting index for pack ae881957c0f0e8c22eb6cc889a22ef78eb4ce6ff Getting pack ae881957c0f0e8c22eb6cc889a22ef78eb4ce6ff which contains ff84d6d48e9326066438d167a10251218d612b3d walk b686f45b7d57af4fa8ca0d528bb85216d6247e19 got 364e30daec17814073e668f490bb84af891fe1f7 got 23f6497e7f9b80e0d90adad73bd0407a0e5ac6ce got 9e77c47574b5e23ea669afe0c23ab235e4917ee1 got 6654e0d328a216b3783e98c47206cb2d01b3353d got 28821ffd437d2689ffb82c6e4b9c3f5372c95c4b got 8c384a24f645389e4d4b08013c79e9e73a658342 got d203be0123736ee025ce20c081f1489098648dfc got 1852603bf7709e71417d8ccec02390279d533642 got fb753a26b20b04694419fce8ecdaa8dbec105cf1 got 736028997cd84dd1c135f57e9d246674b9cd0b9d got 7af836249e20096d0476a548d5be702a071cdd4b got 240dc39d9db50df63073fc7927b2d002dfa0f54c got 93abd36e3935a01011eb753b635a1a0e984bf31e got c6269e28fecf4d8d0d98b9358aecb3acff02df44 got b0aa29432f73e64032682a351d436c24b14078ab walk 240dc39d9db50df63073fc7927b2d002dfa0f54c got 58fb66d9f35f8a5e32ff4683309c5f0c2a3a03c5 got 0da2def4de0565483cdbe6b87418ee2beb122e58 got 0f6a86c6f87ed52ad2ed01e5c6edd661d364930c got 437a93d27b5bb89c739a0564a34a616e832c3ebe got fe0385abe5c0acd8462268dac330bae00e934f1b got 24259f8f5c5c9ee974a75fe3d1e07c02e3e20fe9 got d29f624bf1a5eceedaa86c10fee35f62747c7d04 got 0154e4c987132585ea7a92b77d02dba285512d6b got eda8bf526567c25ee70addb2ad3c3c6aa57eac77 got 9f3d9d7262d66f9fa4f6a13b7c86199953f4bc4e got 8e20881e19667aa22245d0598646991067455a4d got abb1123145689b35eb19519952c71253ee45fa98 got dfeff593c79b4156ce2ce1adf043d0e80356488c got e20c5b48b1d360e0bcf34189e3f3d2bbf23e92cc got b13eb81cc274780322ecf786372320343926bec9 walk 8de83868b3fac748b0a55eba16c8f668ec852abb got b5961421bbc42afe7a07cc1c8b615aba26ba74d7 got 2650ba819019df4193b482733e29ca79b29f3f2c got b3111e1be8103e91803a97a817ed81f28025aca1 got b060be934d709684f5eb5dad3c03932a3589e864 got cf70d2043f081d7a4438e9d5a290a9f986c84060 got 80bf0f1cc836feab86d6935bb7968d8555a8d531 got da318d167920e34bc6573e4fc236249ccbbee316 got d82ac853d387b760149599e6e1ab96403f6ec672 got 0005f691d1f46550fdb4e56025f52e30a5b18cc2 Initialized empty Git repository in /tmp/git/HOH-config-backups/.git/ - copy Configuration Folders across Created commit 424df2f: committing any new configuration changes! 
3 files changed, 55 insertions(+), 1 deletions(-) create mode 100755 scripts/betty/gitConfig.sh error: Cannot access URL http://192.168.10.97:8000/repositories/HOH-config-backups.git/, return code 22 error: failed to push some refs to 'http://192.168.10.97:8000/repositories/HOH-config-backups.git' Git repo updated - backing up this script cp: cannot create regular file `/mnt/backups/scripts/betty/gitConfig.sh': Permission denied my crontab is: # m h dom mon dow command 04 * * * * /home/jon/gitConfig.sh > /tmp/gitconfig.log 2>&1 I open it by doing: $crontab -e i.e. not as root. I am a bit confused as to why it is not running as my user (or what user id 1000 is). Not sure what I need to do to get the push with git to work within crontab. edit: found out about the userid: jon@betty:~$ id uid=1000(jon) gid=1000(jon) groups=4(adm),20(dialout),24(cdrom),46(plugdev),109(sambashare),114(lpadmin),115(admin),1000(jon) here is my $HOME/.gitconfig file: [user] name = Jon Hawkins email = [email protected] Thanks

    Read the article

  • Innovative SPARC: Lighting a Fire Under Oracle's New Hardware Business

    - by Paulo Folgado
    "There's a certain level of things you can do with commercially available parts," says Oracle Executive Vice President Mike Splain. But, he notes, you can do so much more if you design the parts yourself. Mike Splain,EVP, OracleYou can, for example, design cryptographic accelerators into your microprocessors so customers can run their networks fully encrypted if they choose.Of course, it helps if you've already built multiple processing "cores" into those chips so they can handle all that encrypting and decrypting while still getting their other work done.System on a ChipAs the leader of Oracle Microelectronics, Mike knows how implementing clever innovations in silicon can give systems a real competitive advantage.The SPARC microprocessors that his team designed at Sun pioneered the concept of multiple cores several years ago, and the UltraSPARC T2 processor--the industry's first "system on a chip"--packs up to eight cores per chip, each running as many as eight threads at once. That's the most cores and threads of any general-purpose processor. Looking back, Mike points out that the real value of large enterprise-class servers was their ability to run a lot of very large applications in parallel."The beauty of our CMT [chip multi-threading] machines is you can get that same kind of parallel-processing capability at a much lower cost and in a much smaller footprint," he says.The Whole StackWhat has Mike excited these days is that suddenly the opportunity to innovate is much bigger as part of Oracle."In my group, we used to look up the software stack and say, 'We can do any innovation we want, provided the only thing we have to change is what's in the Solaris operating system'--or maybe Java," he says. "If we wanted to change things beyond that, we'd have to go outside the walls of Sun and we'd have to convince the vendors: 'You have to align with us, you have to test with us, you have to build for us, and then you'll reap the benefits.' Now we get access to the entire stack. We can look all the way through the stack and say, 'Okay, what would make the database go faster? What would make the middleware go faster?'"Changing the WorldMike and his microelectronics team also like the fact that Oracle is not just any software company. We're #1 in database, middleware, business intelligence, and more."We're like all the other engineers from Sun; we believe we can change the world, if we can just figure out how to get people to pay attention to us," he says. "Now there's a mechanism at Oracle--much more so than we ever had at Sun."He notes, too, that every innovation in SPARC has involved some combination of hardware and softwareoptimization."Take our cryptography framework, for example. Sure, we can accelerate rapidly, but the Solaris OS has to provide the right set of interfaces that applications can tap into," Mike says. "Same thing with our multicore architecture. We have to have software that can utilize all those threads and run in parallel." His engineers, he points out, have never been interested in producing chips that sell as mere components."Our chips are always designed to go into systems and be combined with various pieces of software," he says. "Our job is to enable the creation of systems."

    Read the article

  • Windows Azure Use Case: Agility

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx  Description: Agility in this context is defined as the ability to quickly develop and deploy an application. In theory, the speed at which your organization can develop and deploy an application on available hardware is identical to what you could deploy in a distributed environment. But in practice, this is not always the case. Having an option to use a distributed environment can be much faster for the deployment and even the development process. Implementation: When an organization designs code, they are essentially becoming a Software-as-a-Service (SaaS) provider to their own organization. To do that, the IT operations team becomes the Infrastructure-as-a-Service (IaaS) to the development teams. From there, the software is developed and deployed using an Application Lifecycle Management (ALM) process. A simplified view of an ALM process is as follows: Requirements Analysis Design and Development Implementation Testing Deployment to Production Maintenance In an on-premise environment, this often equates to the following process map: Requirements Business requirements formed by Business Analysts, Developers and Data Professionals. Analysis Feasibility studies, including physical plant, security, manpower and other resources. Request is placed on the work task list if approved. Design and Development Code written according to organization’s chosen methodology, either on-premise or to multiple development teams on and off premise. Implementation Code checked into main branch. Code forked as needed. Testing Code deployed to on-premise Testing servers. If no server capacity available, more resources procured through standard budgeting and ordering processes. Manual and automated functional, load, security, etc. performed. Deployment to Production Server team involved to select platform and environments with available capacity. If no server capacity available, standard budgeting and procurement process followed. If no server capacity available, systems built, configured and put under standard organizational IT control. Systems configured for proper operating systems, patches, security and virus scans. System maintenance, HA/DR, backups and recovery plans configured and put into place. Maintenance Code changes evaluated and altered according to need. In a distributed computing environment like Windows Azure, the process maps a bit differently: Requirements Business requirements formed by Business Analysts, Developers and Data Professionals. Analysis Feasibility studies, including budget, security, manpower and other resources. Request is placed on the work task list if approved. Design and Development Code written according to organization’s chosen methodology, either on-premise or to multiple development teams on and off premise. Implementation Code checked into main branch. Code forked as needed. Testing Code deployed to Azure. Manual and automated functional, load, security, etc. performed. Deployment to Production Code deployed to Azure. Point in time backup and recovery plans configured and put into place.(HA/DR and automated backups already present in Azure fabric) Maintenance Code changes evaluated and altered according to need. This means that several steps can be removed or expedited. 
It also means that the business function requesting the application can be held directly responsible for the funding of that request, speeding the process further since the IT budgeting process may not be involved in the Azure scenario. An additional benefit is the “Azure Marketplace”, In effect this becomes an app store for Enterprises to select pre-defined code and data applications to mesh or bolt-in to their current code, possibly saving development time. Resources: Whitepaper download- What is ALM?  http://go.microsoft.com/?linkid=9743693  Whitepaper download - ALM and Business Strategy: http://go.microsoft.com/?linkid=9743690  LiveMeeting Recording on ALM and Windows Azure (registration required, but free): http://www.microsoft.com/uk/msdn/visualstudio/contact-us.aspx?sbj=Developing with Windows Azure (ALM perspective) - 10:00-11:00 - 19th Jan 2011

    Read the article

  • The Evolution Of C#

    - by Paulo Morgado
    The first release of C# (C# 1.0) was all about building a new language for managed code that appealed, mostly, to C++ and Java programmers. The second release (C# 2.0) was mostly about adding what there wasn’t time to build into the 1.0 release. The main feature for this release was Generics. The third release (C# 3.0) was all about reducing the impedance mismatch between general purpose programming languages and databases. To achieve this goal, several functional programming features were added to the language and LINQ was born. Going forward, new trends are showing up in the industry and modern programming languages need to be more: Declarative With imperative languages, although having the eye on the what, programs need to focus on the how. This leads to over-specification of the solution to the problem at hand, making it next to impossible for the execution engine to be smart about the execution of the program and optimize it to run more efficiently (given the hardware available, for example). Declarative languages, on the other hand, focus only on the what and leave the how to the execution engine. LINQ made C# more declarative by using higher level constructs like orderby and group by that give the execution engine a much better chance of optimizing the execution (by parallelizing it, for example). Concurrent Concurrency is hard and needs to be thought about and it’s very hard to shoehorn it into a programming language. Parallel.For (from the parallel extensions) looks like a parallel for because enough expressiveness has been built into C# 3.0 to allow this without having to commit to specific language syntax. Dynamic There has been lots of debate on which are the better programming languages: static or dynamic. The fact is that both have good qualities and users of both types of languages want to have it all. All these trends require a paradigm switch. C# is, in many ways, already a multi-paradigm language. It’s still very object oriented (class oriented as some might say) but it can be argued that C# 3.0 has become a functional programming language because it has all the cornerstones of what a functional programming language needs. Moving forward, it will have even more. Besides the influence of these trends, there was a decision of co-evolution of the C# and Visual Basic programming languages. Since its inception, there has been some effort to position C# and Visual Basic against each other and to try to explain what should be done with each language or what kind of programmers use one or the other. Each language should be chosen based on the past experience and familiarity of the developer/team/project/company and not by particular features. In the past, every time a feature was added to one language, the users of the other wanted that feature too. Going forward, when a feature is added to one language, the other will work hard to add the same feature. This doesn’t mean that XML literals will be added to C# (because almost the same can be achieved with LINQ To XML), but Visual Basic will have auto-implemented properties. Most of these features require or are built on top of features of the .NET Framework and the focus for C# 4.0 was on dynamic programming. Not just dynamic types but being able to talk with anything that isn’t a .NET class. Also introduced in C# 4.0 are co-variance and contra-variance for generic interfaces and delegates. Stay tuned for more on the new C# 4.0 features.
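
    To make the declarative and concurrent points concrete, here is a small illustrative sketch (not from the original post) contrasting an imperative loop, a LINQ query that only states the what, and Parallel.For from the parallel extensions:

      using System;
      using System.Linq;
      using System.Threading;
      using System.Threading.Tasks;

      class DeclarativeDemo
      {
          static void Main()
          {
              int[] numbers = Enumerable.Range(1, 1000).ToArray();

              // Imperative: spells out *how* to walk the array.
              long imperative = 0;
              for (int i = 0; i < numbers.Length; i++)
                  imperative += (long)numbers[i] * numbers[i];

              // Declarative: states *what* is wanted; the engine decides how.
              long viaLinq = numbers.Select(n => (long)n * n).Sum();

              // Concurrent: looks like a for loop, but the runtime partitions the range across cores.
              long viaParallel = 0;
              Parallel.For(0, numbers.Length,
                  i => Interlocked.Add(ref viaParallel, (long)numbers[i] * numbers[i]));

              Console.WriteLine($"{imperative} {viaLinq} {viaParallel}");
          }
      }

    The LINQ and Parallel.For versions leave the how (iteration order, partitioning across cores) to the runtime, which is exactly the shift the post describes.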

    Read the article

  • To sample or not to sample...

    - by [email protected]
    Ideally, we would know the exact answer to every question. How many people support presidential candidate A vs. B? How many people suffer from H1N1 in a given state? Does this batch of manufactured widgets have any defective parts? Knowing exact answers is expensive in terms of time and money and, in most cases, is impractical if not impossible. Consider asking every person in a region for their candidate preference, testing every person with flu symptoms for H1N1 (assuming every person reported when they had flu symptoms), or destructively testing widgets to determine if they are "good" (leaving no product to sell). Knowing exact answers, fortunately, isn't necessary or even useful in many situations. Understanding the direction of a trend or statistically significant results may be sufficient to answer the underlying question: who is likely to win the election, have we likely reached a critical threshold for flu, or is this batch of widgets good enough to ship? Statistics help us to answer these questions with a certain degree of confidence. This focuses on how we collect data. In data mining, we focus on the use of data, that is data that has already been collected. In some cases, we may have all the data (all purchases made by all customers), in others the data may have been collected using sampling (voters, their demographics and candidate choice). Building data mining models on all of your data can be expensive in terms of time and hardware resources. Consider a company with 40 million customers. Do we need to mine all 40 million customers to get useful data mining models? The quality of models built on all data may be no better than models built on a relatively small sample. Determining how much is a reasonable amount of data involves experimentation. When starting the model building process on large datasets, it is often more efficient to begin with a small sample, perhaps 1000 - 10,000 cases (records) depending on the algorithm, source data, and hardware. This allows you to see quickly what issues might arise with choice of algorithm, algorithm settings, data quality, and need for further data preparation. Instead of waiting for a model on a large dataset to build only to find that the results don't meet expectations, once you are satisfied with the results on the initial sample, you can  take a larger sample to see if model quality improves, and to get a sense of how the algorithm scales to the particular dataset. If model accuracy or quality continues to improve, consider increasing the sample size. Sampling in data mining is also used to produce a held-aside or test dataset for assessing classification and regression model accuracy. Here, we reserve some of the build data (data that includes known target values) to be used for an honest estimate of model error using data the model has not seen before. This sampling transformation is often called a split because the build data is split into two randomly selected sets, often with 60% of the records being used for model building and 40% for testing. Sampling must be performed with care, as it can adversely affect model quality and usability. Even a truly random sample doesn't guarantee that all values are represented in a given attribute. This is particularly troublesome when the attribute with omitted values is the target. A predictive model that has not seen any examples for a particular target value can never predict that target value! 
For other attributes, values may consist of a single value (a constant attribute) or all unique values (an identifier attribute), each of which may be excluded during mining. Values from categorical predictor attributes that didn't appear in the training data are not used when testing or scoring datasets. In subsequent posts, we'll talk about three sampling techniques using Oracle Database: simple random sampling without replacement, stratified sampling, and simple random sampling with replacement.
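
    As a purely illustrative aside (this is not the Oracle Database sampling functionality the post refers to), the 60/40 build/test split described above can be sketched in a few lines of C# by shuffling record indices with a fixed seed and cutting at 60%:

      using System;
      using System.Linq;

      class SplitDemo
      {
          static void Main()
          {
              int totalRecords = 1000;
              var rng = new Random(42);   // fixed seed so the split is reproducible

              // Shuffle record indices, then take the first 60% for building and the rest for testing.
              int[] shuffled = Enumerable.Range(0, totalRecords)
                                         .OrderBy(_ => rng.Next())
                                         .ToArray();
              int cut = (int)(totalRecords * 0.6);
              int[] buildSet = shuffled.Take(cut).ToArray();
              int[] testSet = shuffled.Skip(cut).ToArray();

              Console.WriteLine($"build: {buildSet.Length}, test: {testSet.Length}");
          }
      }

    A fixed seed keeps the split reproducible, which matters when you want to compare models built from the same sample.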

    Read the article

  • Oracle TimesTen In-Memory Database Performance on SPARC T4-2

    - by Brian
    The Oracle TimesTen In-Memory Database is optimized to run on Oracle's SPARC T4 processor platforms running Oracle Solaris 11 providing unsurpassed scalability, performance, upgradability, protection of investment and return on investment. The following demonstrate the value of combining Oracle TimesTen In-Memory Database with SPARC T4 servers and Oracle Solaris 11: On a Mobile Call Processing test, the 2-socket SPARC T4-2 server outperforms: Oracle's SPARC Enterprise M4000 server (4 x 2.66 GHz SPARC64 VII+) by 34%. Oracle's SPARC T3-4 (4 x 1.65 GHz SPARC T3) by 2.7x, or 5.4x per processor. Utilizing the TimesTen Performance Throughput Benchmark (TPTBM), the SPARC T4-2 server protects investments with: 2.1x the overall performance of a 4-socket SPARC Enterprise M4000 server in read-only mode and 1.5x the performance in update-only testing. This is 4.2x more performance per processor than the SPARC64 VII+ 2.66 GHz based system. 10x more performance per processor than the SPARC T2+ 1.4 GHz server. 1.6x better performance per processor than the SPARC T3 1.65 GHz based server. In replication testing, the two socket SPARC T4-2 server is over 3x faster than the performance of a four socket SPARC Enterprise T5440 server in both asynchronous replication environment and the highly available 2-Safe replication. This testing emphasizes parallel replication between systems. Performance Landscape Mobile Call Processing Test Performance System Processor Sockets/Cores/Threads Tps SPARC T4-2 SPARC T4, 2.85 GHz 2 16 128 218,400 M4000 SPARC64 VII+, 2.66 GHz 4 16 32 162,900 SPARC T3-4 SPARC T3, 1.65 GHz 4 64 512 80,400 TimesTen Performance Throughput Benchmark (TPTBM) Read-Only System Processor Sockets/Cores/Threads Tps SPARC T3-4 SPARC T3, 1.65 GHz 4 64 512 7.9M SPARC T4-2 SPARC T4, 2.85 GHz 2 16 128 6.5M M4000 SPARC64 VII+, 2.66 GHz 4 16 32 3.1M T5440 SPARC T2+, 1.4 GHz 4 32 256 3.1M TimesTen Performance Throughput Benchmark (TPTBM) Update-Only System Processor Sockets/Cores/Threads Tps SPARC T4-2 SPARC T4, 2.85 GHz 2 16 128 547,800 M4000 SPARC64 VII+, 2.66 GHz 4 16 32 363,800 SPARC T3-4 SPARC T3, 1.65 GHz 4 64 512 240,500 TimesTen Replication Tests System Processor Sockets/Cores/Threads Asynchronous 2-Safe SPARC T4-2 SPARC T4, 2.85 GHz 2 16 128 38,024 13,701 SPARC T5440 SPARC T2+, 1.4 GHz 4 32 256 11,621 4,615 Configuration Summary Hardware Configurations: SPARC T4-2 server 2 x SPARC T4 processors, 2.85 GHz 256 GB memory 1 x 8 Gbs FC Qlogic HBA 1 x 6 Gbs SAS HBA 4 x 300 GB internal disks Sun Storage F5100 Flash Array (40 x 24 GB flash modules) 1 x Sun Fire X4275 server configured as COMSTAR head SPARC T3-4 server 4 x SPARC T3 processors, 1.6 GHz 512 GB memory 1 x 8 Gbs FC Qlogic HBA 8 x 146 GB internal disks 1 x Sun Fire X4275 server configured as COMSTAR head SPARC Enterprise M4000 server 4 x SPARC64 VII+ processors, 2.66 GHz 128 GB memory 1 x 8 Gbs FC Qlogic HBA 1 x 6 Gbs SAS HBA 2 x 146 GB internal disks Sun Storage F5100 Flash Array (40 x 24 GB flash modules) 1 x Sun Fire X4275 server configured as COMSTAR head Software Configuration: Oracle Solaris 11 11/11 Oracle TimesTen 11.2.2.4 Benchmark Descriptions TimesTen Performance Throughput BenchMark (TPTBM) is shipped with TimesTen and measures the total throughput of the system. The workload can test read-only, update-only, delete and insert operations as required. Mobile Call Processing is a customer-based workload for processing calls made by mobile phone subscribers. The workload has a mixture of read-only, update, and insert-only transactions. 
The peak throughput performance is measured from multiple concurrent processes executing the transactions until a peak performance is reached via saturation of the available resources. Parallel Replication tests using both asynchronous and 2-Safe replication methods. For asynchronous replication, transactions are processed in batches to maximize the throughput capabilities of the replication server and network. In 2-Safe replication, also known as no data-loss or high availability, transactions are replicated between servers immediately emphasizing low latency. For both environments, performance is measured in the number of parallel replication servers and the maximum transactions-per-second for all concurrent processes. See Also SPARC T4-2 Server oracle.com OTN Oracle TimesTen In-Memory Database oracle.com OTN Oracle Solaris oracle.com OTN Oracle Database 11g Release 2 Enterprise Edition oracle.com OTN Disclosure Statement Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 1 October 2012.

    Read the article

  • Today at Oracle OpenWorld 2012

    - by Scott McNeil
    We have another full day of great Oracle OpenWorld keynotes, sessions, demos and customer presentations in the Seen and Be Heard threater. Here's a quick run down of what's happening today with Oracle Enterprise Manager 12c: Download the Oracle Enterprise Manager 12c OpenWorld schedule (PDF) Oracle Enterprise Manager Cloud Control 12c (and Private Cloud) General Session Tues 2 Oct, 2012 Time Title Location 11:45 AM - 12:45 PM General Session: Using Oracle Enterprise Manager to Manage Your Own Private Cloud Moscone South - 103* 1:15 PM - 2:15 PM General Session: Breakthrough Efficiency in Private Cloud Infrastructure Moscone West - 3014 Conference Session Tues 2 Oct, 2012 Time Title Location 10:15 AM - 11:15 AM Oracle Exadata/Oracle Enterprise Manager 12c: Journey into Oracle Database Cloud Moscone West - 3018 10:15 AM - 11:15 AM Bulletproof Your Application Upgrades with Secure Data Masking and Subsetting Moscone West - 3020 10:15 AM - 11:15 AM Oracle Enterprise Manager 12c: Architecture Deep Dive, Tips, and Techniques Moscone South - 303 11:45 AM - 12:45 PM RDBMS Forensics: Troubleshooting with Active Session History Moscone West - 3018 11:45 AM - 12:45 PM Building and Operationalizing Your Data Center Environment with Oracle Exalogic Moscone South - 309 11:45 AM - 12:45 PM Securely Building a National Electronic Health Record: Singapore Case Study Westin San Francisco - Concordia 1:15 PM - 2:15 PM Managing Heterogeneous Environments with Oracle Enterprise Manager Moscone West - 3018 1:15 PM - 2:15 PM Complete Oracle WebLogic Server Management with Oracle Enterprise Manager 12c Moscone South - 309 1:15 PM - 2:15 PM Database Lifecycle Management with Oracle Enterprise Manager 12c Moscone West - 3020 1:15 PM - 2:15 PM Best Practices, Key Features, Tips, Techniques for Oracle Enterprise Manager 12c Upgrade Moscone South - 307 1:15 PM - 2:15 PM Enterprise Cloud with CSC’s Foundation Services for Oracle and Oracle Enterprise Manager 12c Moscone South - 236 5:00 PM - 6:00 PM Deep Dive 3-D on Oracle Exadata Management: From Discovery to Deployment to Diagnostics Moscone West - 3018 5:00 PM - 6:00 PM Everything You Need to Know About Monitoring and Troubleshooting Oracle GoldenGate Moscone West - 3005 5:00 PM - 6:00 PM Oracle Enterprise Manager 12c: The Nerve Center of Oracle Cloud Moscone West - 3020 5:00 PM - 6:00 PM Advanced Management of Oracle E-Business Suite with Oracle Enterprise Manager Moscone West - 2016 5:00 PM - 6:00 PM Oracle Enterprise Manager 12c Cloud Control Performance Pages: Falling in Love Again Moscone West - 3014 Hands-on Labs Tues 2 Oct, 2012 Time Title Location 10:15 AM - 12:45 PM Managing the Cloud with Oracle Enterprise Manager 12c Marriott Marquis - Salon 5/6 1:15 PM - 2:15 PM Database Performance Tuning Hands-on Lab Marriott Marquis - Salon 5/6 Scene and Be Heard Theater Session Tues 2 Oct, 2012 Time Title Location 10:30 AM - 10:50 AM Start Small, Grow Big: Hands-On Oracle Private Cloud—A Step-by-Step Guide Moscone South Exhibition Hall - Booth 2407 12:30 PM - 12:50 PM Blue Medora’s Oracle Enterprise Manager Plug-in for VMware vSphere Monitoring Moscone South Exhibition Hall - Booth 2407 Demos Demo Location Application and Infrastructure Testing Moscone West - W-092 Automatic Application and SQL Tuning Moscone South, Left - S-042 Automatic Fault Diagnostics Moscone South, Left - S-036 Automatic Performance Diagnostics Moscone South, Left - S-033 Complete Care for Oracle Using My Oracle Support Moscone South, Left - S-031 Complete Cloud Lifecycle Management Moscone 
North, Upper Lobby - N-019 Complete Database Lifecycle Management Moscone South, Left - S-030 Comprehensive Infrastructure as a Service via Oracle Enterprise Manager Moscone South, Left - S-045 Data Masking and Data Subsetting Moscone South, Left - S-034 Database Testing with Oracle Real Application Testing Moscone South, Left - S-041 Identity Management Monitoring with Oracle Enterprise Manager Moscone South, Right - S-212 Mission-Critical, SPARC-Powered Infrastructure as a Service Moscone South, Center - S-157 Oracle E-Business Suite, Siebel, JD Edwards, and PeopleSoft Management Moscone West - W-084 Oracle Enterprise Manager Cloud Control 12c Overview Moscone South, Left - S-039 Oracle Enterprise Manager: Complete Data Center Management Moscone South, Left - S-040 Oracle Exadata Management Moscone South, Center - Oracle Exalogic Management Moscone South, Center - Oracle Fusion Applications Management Moscone West - W-018 Oracle Real User Experience Insight Moscone South, Right - S-226 Oracle WebLogic Server Management and Java Diagnostics Moscone South, Right - S-206 Platform as a Service Using Oracle Enterprise Manager Moscone North, Upper Lobby - N-020 SOA Management Moscone South, Right - S-225 Self-Service Application Testing on Private and Public Clouds Moscone West - W-110 Oracle OpenWorld Music Festival New this year is Oracle’s first annual Oracle OpenWorld Musical Festival, featuring some of today's breakthrough musicians from around the country and the world. It's five nights of back-to-back performances in the heart of San Francisco—free to registered attendees. See the lineup Not Heading to OpenWorld—Watch it Live! Stay Connected: Twitter | Facebook | YouTube | Linkedin | Newsletter Download the Oracle Enterprise Manager Cloud Control12c Mobile app

    Read the article

  • Upgrades in 5 Easy Pieces

    - by Anne R.
    Even though there are a few select tasks that I have to do once or twice a year, I can’t remember how to do them! Or where to find the bits and pieces to complete the task. So I love it when someone consolidates everything under one spot. That’s what the CRM On Demand team has done with the upgrade information. Specifically, they have: Provided a “one-stop” area for managing upgrades at your company. Broken down the upgrade process into 5 (yes, 5) steps. Explained when and how to perform each step with dates specific to your pod. Included details about each step, visible by expanding the step. Translated the steps into 11 languages. Added a list of release-specific resources with links from the page. Now, just head for the Training and Support portal, click the Release Info tab, and walk through the “5 Essential Steps to a Successful Upgrade.” Before you continue, though, select your language from the drop-down list on the Release Info page. CRM On Demand now has the upgrade steps translated into 11 languages. On the Step page, you can expand each section in sequence and follow the more detailed instructions that appear. This will ensure that you’ve covered all your bases for each upgrade. Here’s a shortened version of the information that you’ll find: 1. Verify your Primary Contact Information. Have you checked your primary contact information to make sure you’re being notified of all upgrade information? Or do you want more users to receive upgrade announcements? This section provides you with the navigation path to do that in CRM On Demand. 2. Review your Key Upgrade Dates. If you expand this step, a nice table appears with your critical dates for the various milestones. IMPORTANT: When your CRM On Demand pod has been officially added to the upgrade schedule, closer to the release date itself, this table will display your specific timetable. 3. Migrate your Customizations from the Staging Environment before the Snapshot Date. Oracle refreshes the Staging data with a copy of your Production data made on the Production Snapshot Date. So this section lists considerations relevant to this step. It also reminds you of the 2-week period when you should not be making any changes in your Staging environment.   4. Conduct your Upgrade Validation on the Staging Environment. When the Customer Validation Testing period begins, you need to log in to your Staging Environment to validate that your key business processes and customizations continue to behave as expected. If your company utilizes Web Services, Web Links, Web Applets or Workflow, focus on testing these first. You generally have about two weeks for testing. If you run into problems during this time, follow the instructions shown in this section for logging a service request. It describes exactly how to fill out the fields in the SR for the fastest resolution. 5. Conduct "White Glove" Testing in your Upgraded Production Environment. Before users start using the upgrade, you should access a few tabs and reports. Doing this actually warms up the cache so that frequently used pages and reports will come up at normal speed on Monday morning, when users log in to the upgraded system. Resources listed under this step help you in further preparing for the upgrade. Now there’s also a new Documentation section on the right with links to these release-specific resources.   
Very nice, I commented, when discussing these improvements with the “responsible party.” She confirmed that, yes, they tried to consolidate the upgrade information, translate it for better communication, simplify it into 5 easy pieces, and drive admins responsible for handling upgrades to this one site instead of sending out elaborate emails. Yes, I just love it when someone practically reaches out and holds my hand through a process. Next best thing to a wizard!

    Read the article

  • Organization & Architecture UNISA Studies – Chap 6

    - by MarkPearl
    Learning Outcomes
    - Discuss the physical characteristics of magnetic disks
    - Describe how data is organized and accessed on a magnetic disk
    - Discuss the parameters that play a role in the performance of magnetic disks
    - Describe different optical memory devices

    Magnetic Disk
    How data is stored on and retrieved from magnetic disks: data is recorded on, and later retrieved from, the disk via a conducting coil named the head (in many systems there are two heads). The write mechanism exploits the fact that electricity flowing through a coil produces a magnetic field. Electric pulses are sent to the write head, and the resulting magnetic patterns are recorded on the surface below, with different patterns for positive and negative currents.

    The physical characteristics of a magnetic disk: Summarize from book.

    The factors that play a role in the performance of a disk:
    - Seek time – the time it takes to position the head at the track
    - Rotational delay / latency – the time it takes for the beginning of the sector to reach the head
    - Access time – the sum of the seek time and rotational delay
    - Transfer time – the time it takes to transfer data

    RAID
    The rate of improvement in secondary storage performance has been considerably less than the rate for processors and main memory, so secondary storage has become a bit of a bottleneck. RAID works on the concept that if one disk can only be pushed so far, additional gains in performance are to be had by using multiple parallel components. Points to note about RAID:
    - RAID is a set of physical disk drives viewed by the operating system as a single logical drive
    - Data is distributed across the physical drives of an array in a scheme known as striping
    - Redundant disk capacity is used to store parity information, which guarantees data recoverability in case of a disk failure (not supported by RAID 0 or RAID 1)
    It is interesting to note that increasing the number of drives increases the probability of failure. To compensate for this decreased reliability, RAID makes use of stored parity information that enables the recovery of data lost due to a disk failure.
    The RAID scheme consists of 7 levels:
    - Level 0 (Striping) – Non-redundant. Disks required: N. Data availability: lower than single disk. Large I/O data transfer capacity: very high. Small I/O request rate: very high for both read and write.
    - Level 1 (Mirroring) – Mirrored. Disks required: 2N. Data availability: higher than RAID 2–5 but lower than RAID 6. Large I/O data transfer capacity: higher than single disk. Small I/O request rate: up to twice that of a single disk for read.
    - Level 2 (Parallel Access) – Redundant via Hamming code. Disks required: N + m. Data availability: much higher than single disk. Large I/O data transfer capacity: highest of all listed alternatives. Small I/O request rate: approximately twice that of a single disk.
    - Level 3 (Parallel Access) – Bit-interleaved parity. Disks required: N + 1. Data availability: much higher than single disk. Large I/O data transfer capacity: highest of all listed alternatives. Small I/O request rate: approximately twice that of a single disk.
    - Level 4 (Independent Access) – Block-interleaved parity. Disks required: N + 1. Data availability: much higher than single disk. Large I/O data transfer capacity: similar to RAID 0 for read, significantly lower than single disk for write. Small I/O request rate: similar to RAID 0 for read, significantly lower than single disk for write.
    - Level 5 (Independent Access) – Block-interleaved distributed parity. Disks required: N + 1. Data availability: much higher than single disk. Large I/O data transfer capacity: similar to RAID 0 for read, lower than single disk for write. Small I/O request rate: similar to RAID 0 for read, generally lower than single disk for write.
    - Level 6 (Independent Access) – Block-interleaved dual distributed parity. Disks required: N + 2. Data availability: highest of all listed alternatives. Large I/O data transfer capacity: similar to RAID 0 for read, lower than RAID 5 for write. Small I/O request rate: similar to RAID 0 for read, significantly lower than RAID 5 for write.

    Read page 215 – 221 for a detailed explanation of the RAID levels.

    Optical Memory
    There are a variety of optical-disk systems available. Read through the table on page 222 – 223. Some of the devices include: CD, CD-ROM, CD-R, CD-RW, DVD, DVD-R, DVD-RW, Blu-ray DVD.

    Magnetic Tape
    Most modern systems use serial recording – data is laid out as a sequence of bits along each track. The typical recording technique used for serial data is referred to as serpentine recording. In this technique, when data is being recorded, the first set of bits is recorded along the whole length of the tape. When the end of the tape is reached, the heads are repositioned to record a new track, and the tape is again recorded on its whole length, this time in the opposite direction. That process continues back and forth until the tape is full. To increase speed, the read-write head is capable of reading and writing a number of adjacent tracks simultaneously. Data is still recorded serially along individual tracks, but blocks in sequence are stored on adjacent tracks as suggested. A tape drive is a sequential access device. Magnetic tape was the first kind of secondary memory. It is still widely used as the lowest-cost, slowest-speed member of the memory hierarchy.
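    To make the performance parameters and the parity idea concrete, here is a small illustrative C# sketch (my own, not from the textbook; the drive figures are invented). It computes an average service time from the seek time, rotational delay and transfer time defined above, and shows how XOR parity lets an array rebuild a single lost block:

    using System;

    class DiskAndRaidDemo
    {
        // Average rotational delay is half a revolution; access time = seek time + rotational delay,
        // and the total time to service a request adds the transfer time (as defined above).
        static double AverageServiceTimeMs(double seekMs, double rpm, double transferMs)
        {
            double rotationalDelayMs = 0.5 * (60000.0 / rpm); // half a revolution, in milliseconds
            double accessTimeMs = seekMs + rotationalDelayMs;
            return accessTimeMs + transferMs;
        }

        // The parity block is the XOR of the data blocks, so any single lost block
        // can be rebuilt by XOR-ing the surviving blocks with the parity block.
        static byte[] Xor(byte[] a, byte[] b)
        {
            var result = new byte[a.Length];
            for (int i = 0; i < a.Length; i++)
            {
                result[i] = (byte)(a[i] ^ b[i]);
            }
            return result;
        }

        static void Main()
        {
            // Invented drive: 4 ms average seek, 7200 rpm, 0.5 ms transfer per block.
            Console.WriteLine("Average service time: " + AverageServiceTimeMs(4.0, 7200, 0.5).ToString("F2") + " ms");

            byte[] block1 = { 0x10, 0x20, 0x30 };
            byte[] block2 = { 0x0A, 0x0B, 0x0C };
            byte[] parity = Xor(block1, block2);   // stored on the parity disk

            byte[] rebuilt = Xor(block1, parity);  // the disk holding block2 failed: rebuild it
            Console.WriteLine("Rebuilt block matches: " + (BitConverter.ToString(rebuilt) == BitConverter.ToString(block2)));
        }
    }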

    Read the article

  • Working with Analytic Workspace Manager (AWM) - Part 8 Cube Metadata Analysis

    - by Mohan Ramanuja
CUBE SIZE
select dbal.owner||'.'||substr(dbal.table_name,4) awname,
       sum(dbas.bytes)/1024/1024 as mb,
       dbas.tablespace_name
from dba_lobs dbal, dba_segments dbas
where dbal.column_name = 'AWLOB'
  and dbal.segment_name = dbas.segment_name
group by dbal.owner, dbal.table_name, dbas.tablespace_name
order by dbal.owner, dbal.table_name

SESSION RESOURCES
select vses.username||':'||vsst.sid username, vstt.name, max(vsst.value) value
from v$sesstat vsst, v$statname vstt, v$session vses
where vstt.statistic# = vsst.statistic#
  and vsst.sid = vses.sid
  and VSES.USERNAME LIKE ('ATTRIBDW_OWN')
  AND vstt.name in ('session pga memory', 'session pga memory max', 'session uga memory',
                    'session uga memory max', 'session cursor cache count', 'session cursor cache hits',
                    'session stored procedure space', 'opened cursors current', 'opened cursors cumulative')
  and vses.username is not null
group by vsst.sid, vses.username, vstt.name
order by vsst.sid, vses.username, vstt.name

OLAP PGA USE
select 'OLAP Pages Occupying: '|| round((((select sum(nvl(pool_size,1)) from v$aw_calc)) / (select value from v$pgastat where name = 'total PGA inuse')),2)*100||'%' info from dual
union
select 'Total PGA Inuse Size: '||value/1024||' KB' info from v$pgastat where name = 'total PGA inuse'
union
select 'Total OLAP Page Size: '|| round(sum(nvl(pool_size,1))/1024,0)||' KB' info from v$aw_calc
order by info desc

OLAP PGA USAGE PER USER
select vs.username, vs.sid,
       round(pga_used_mem/1024/1024,2)||' MB' pga_used,
       round(pga_max_mem/1024/1024,2)||' MB' pga_max,
       round(pool_size/1024/1024,2)||' MB' olap_pp,
       round(100*(pool_hits-pool_misses)/pool_hits,2) || '%' olap_ratio
from v$process vp, v$session vs, v$aw_calc va
where session_id=vs.sid and addr = paddr

CUBE LOADING SCRIPT
REM The 'set define off' statement is needed only if running this script through SQLPlus.
REM If you are using another tool to run this script, the line below may be commented out.
set define off
BEGIN
  DBMS_CUBE.BUILD(
    'VALIDATE
  ATTRIBDW_OWN.CURRENCY USING
  (
    LOAD NO SYNCH,
    COMPILE SORT
  ),
  ATTRIBDW_OWN.ACCOUNT USING
  (
    LOAD NO SYNCH,
    COMPILE SORT
  ),
  ATTRIBDW_OWN.DATEDIM USING
  (
    LOAD NO SYNCH,
    COMPILE SORT
  ),
  ATTRIBDW_OWN.CUSIP USING
  (
    LOAD NO SYNCH,
    COMPILE SORT
  ),
  ATTRIBDW_OWN.ACCOUNTRETURN',
    'CCCCC',  -- refresh method
    false,    -- refresh after errors
    0,        -- parallelism
    true,     -- atomic refresh
    true,     -- automatic order
    false);   -- add dimensions
END;
/
BEGIN
  DBMS_CUBE.BUILD(
    '
  ATTRIBDW_OWN.CURRENCY USING
  (
    LOAD NO SYNCH,
    COMPILE SORT
  ),
  ATTRIBDW_OWN.ACCOUNT USING
  (
    LOAD NO SYNCH,
    COMPILE SORT
  ),
  ATTRIBDW_OWN.DATEDIM USING
  (
    LOAD NO SYNCH,
    COMPILE SORT
  ),
  ATTRIBDW_OWN.CUSIP USING
  (
    LOAD NO SYNCH,
    COMPILE SORT
  ),
  ATTRIBDW_OWN.ACCOUNTRETURN',
    'CCCCC',  -- refresh method
    false,    -- refresh after errors
    0,        -- parallelism
    true,     -- atomic refresh
    true,     -- automatic order
    false);   -- add dimensions
END;
/

VISUALIZATION OBJECT - AW$ATTRIBDW_OWN
CREATE TABLE "ATTRIBDW_OWN"."AW$ATTRIBDW_OWN"
    (
        "PS#"    NUMBER(10,0),
        "GEN#"   NUMBER(10,0),
        "EXTNUM" NUMBER(8,0),
        "AWLOB" BLOB,
        "OBJNAME"  VARCHAR2(256 BYTE),
        "PARTNAME" VARCHAR2(256 BYTE)
    )
    PCTFREE 10 PCTUSED 40 INITRANS 4 MAXTRANS 255 STORAGE
    (
        BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT
    )
    TABLESPACE "ATTRIBDW_DATA" LOB
    (
        "AWLOB"
    )
    STORE AS SECUREFILE
    (
        TABLESPACE "ATTRIBDW_DATA"
DISABLE STORAGE IN ROW CHUNK 8192 RETENTION MIN 1 CACHE NOCOMPRESS KEEP_DUPLICATES STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)        )        PARTITION BY RANGE        (            "GEN#"        )        SUBPARTITION BY HASH        (            "PS#",            "EXTNUM"        )        SUBPARTITIONS 8        (            PARTITION "PTN1" VALUES LESS THAN (1) PCTFREE 10 PCTUSED 40 INITRANS 4 MAXTRANS 255 STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "ATTRIBDW_DATA" LOB ("AWLOB") STORE AS SECUREFILE ( TABLESPACE "ATTRIBDW_DATA" DISABLE STORAGE IN ROW CHUNK 8192 RETENTION MIN 1 CACHE READS LOGGING NOCOMPRESS KEEP_DUPLICATES STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)) ( SUBPARTITION "SYS_SUBP661" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP662" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP663" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP664" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP665" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION            "SYS_SUBP666" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP667" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP668" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" ) ,            PARTITION "PTNN" VALUES LESS THAN (MAXVALUE) PCTFREE 10 PCTUSED 40 INITRANS 4 MAXTRANS 255 STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "ATTRIBDW_DATA" LOB ("AWLOB") STORE AS SECUREFILE ( TABLESPACE "ATTRIBDW_DATA" DISABLE STORAGE IN ROW CHUNK 8192 RETENTION MIN 1 CACHE NOCOMPRESS KEEP_DUPLICATES STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)) ( SUBPARTITION "SYS_SUBP669" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP670" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP671" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP672" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP673" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION            "SYS_SUBP674" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP675" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_SUBP676" LOB ("AWLOB") STORE AS ( TABLESPACE "ATTRIBDW_DATA" ) TABLESPACE "ATTRIBDW_DATA" )        ) ;CREATE UNIQUE INDEX "ATTRIBDW_OWN"."ATTRIBDW_OWN_I$" ON "ATTRIBDW_OWN"."AW$ATTRIBDW_OWN"    (        "PS#", "GEN#", "EXTNUM"    )    PCTFREE 10 INITRANS 4 MAXTRANS 255 COMPUTE STATISTICS STORAGE    (        INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT    )    TABLESPACE "ATTRIBDW_DATA" ;CREATE UNIQUE INDEX "ATTRIBDW_OWN"."SYS_IL0000406980C00004$$" ON "ATTRIBDW_OWN"."AW$ATTRIBDW_OWN"    (        PCTFREE 10 INITRANS 1 
MAXTRANS 255 STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "ATTRIBDW_DATA" LOCAL (PARTITION "SYS_IL_P711" PCTFREE 10 INITRANS 1 MAXTRANS 255 STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) ( SUBPARTITION "SYS_IL_SUBP695" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP696" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP697" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP698" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP699" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP700" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP701" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP702" TABLESPACE "ATTRIBDW_DATA" ) , PARTITION "SYS_IL_P712" PCTFREE 10 INITRANS 1 MAXTRANS 255 STORAGE( BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) ( SUBPARTITION "SYS_IL_SUBP703" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP704" TABLESPACE        "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP705" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP706" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP707" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP708" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP709" TABLESPACE "ATTRIBDW_DATA" , SUBPARTITION "SYS_IL_SUBP710" TABLESPACE "ATTRIBDW_DATA" ) ) PARALLEL (DEGREE 0 INSTANCES 0) ; CUBE BUILD LOG  CREATE TABLE "ATTRIBDW_OWN"."CUBE_BUILD_LOG"        (            "BUILD_ID"          NUMBER,            "SLAVE_NUMBER"      NUMBER,            "STATUS"            VARCHAR2(10 BYTE),            "COMMAND"           VARCHAR2(25 BYTE),            "BUILD_OBJECT"      VARCHAR2(30 BYTE),            "BUILD_OBJECT_TYPE" VARCHAR2(10 BYTE),            "OUTPUT" CLOB,            "AW"            VARCHAR2(30 BYTE),            "OWNER"         VARCHAR2(30 BYTE),            "PARTITION"     VARCHAR2(50 BYTE),            "SCHEDULER_JOB" VARCHAR2(100 BYTE),            "TIME" TIMESTAMP (6)WITH TIME ZONE,        "BUILD_SCRIPT" CLOB,        "BUILD_TYPE"            VARCHAR2(22 BYTE),        "COMMAND_DEPTH"         NUMBER(2,0),        "BUILD_SUB_OBJECT"      VARCHAR2(30 BYTE),        "REFRESH_METHOD"        VARCHAR2(1 BYTE),        "SEQ_NUMBER"            NUMBER,        "COMMAND_NUMBER"        NUMBER,        "IN_BRANCH"             NUMBER(1,0),        "COMMAND_STATUS_NUMBER" NUMBER,        "BUILD_NAME"            VARCHAR2(100 BYTE)        )        SEGMENT CREATION IMMEDIATE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE        (            INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT        )        TABLESPACE "ATTRIBDW_DATA" LOB        (            "OUTPUT"        )        STORE AS BASICFILE        (            TABLESPACE "ATTRIBDW_DATA" ENABLE STORAGE IN ROW CHUNK 8192 RETENTION NOCACHE LOGGING STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)        )        LOB        (            "BUILD_SCRIPT"        )        STORE AS BASICFILE        (            TABLESPACE "ATTRIBDW_DATA" ENABLE STORAGE IN ROW CHUNK 8192 RETENTION NOCACHE LOGGING STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)        ) ;CREATE UNIQUE INDEX 
"ATTRIBDW_OWN"."SYS_IL0000407294C00013$$" ON "ATTRIBDW_OWN"."CUBE_BUILD_LOG"    (        PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "ATTRIBDW_DATA" PARALLEL (DEGREE 0 INSTANCES 0) ;CREATE UNIQUE INDEX "ATTRIBDW_OWN"."SYS_IL0000407294C00007$$" ON "ATTRIBDW_OWN"."CUBE_BUILD_LOG" ( PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "ATTRIBDW_DATA" PARALLEL (DEGREE 0 INSTANCES 0) ; CUBE DIMENSION COMPILE  CREATE TABLE "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"        (            "ID"               NUMBER,            "SEQ_NUMBER"       NUMBER,            "ERROR#"           NUMBER(8,0) NOT NULL ENABLE,            "ERROR_MESSAGE"    VARCHAR2(2000 BYTE),            "DIMENSION"        VARCHAR2(100 BYTE),            "DIMENSION_MEMBER" VARCHAR2(100 BYTE),            "MEMBER_ANCESTOR"  VARCHAR2(100 BYTE),            "HIERARCHY1"       VARCHAR2(100 BYTE),            "HIERARCHY2"       VARCHAR2(100 BYTE),            "ERROR_CONTEXT" CLOB        )        SEGMENT CREATION DEFERRED PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING TABLESPACE "ATTRIBDW_DATA" LOB        (            "ERROR_CONTEXT"        )        STORE AS BASICFILE        (            TABLESPACE "ATTRIBDW_DATA" ENABLE STORAGE IN ROW CHUNK 8192 RETENTION NOCACHE LOGGING        ) ;COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."ID"IS    'Current operation ID';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."SEQ_NUMBER"IS    'Cube build log sequence number';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."ERROR#"IS    'Error number being reported';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."ERROR_MESSAGE"IS    'Error text being reported';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."DIMENSION"IS    'Name of dimension being compiled';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."DIMENSION_MEMBER"IS    'Problem dimension member';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."MEMBER_ANCESTOR"IS    'Problem dimension member''s parent';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."HIERARCHY1"IS    'First hierarchy involved in error';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."HIERARCHY2"IS    'Second hierarchy involved in error';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"."ERROR_CONTEXT"IS    'Extra information for error';    COMMENT ON TABLE "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"IS    'Cube dimension compile log';CREATE UNIQUE INDEX "ATTRIBDW_OWN"."SYS_IL0000407307C00010$$" ON "ATTRIBDW_OWN"."CUBE_DIMENSION_COMPILE"    (        PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE( INITIAL 1048576 NEXT 1048576 MAXEXTENTS 2147483645) TABLESPACE "ATTRIBDW_DATA" PARALLEL (DEGREE 0 INSTANCES 0) ; CUBE OPERATING LOG  CREATE TABLE "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"        (            "INST_ID"    NUMBER NOT NULL ENABLE,            "SID"        NUMBER NOT NULL ENABLE,            "SERIAL#"    NUMBER NOT NULL ENABLE,            "USER#"      NUMBER NOT NULL ENABLE,            "SQL_ID"     VARCHAR2(13 BYTE),            "JOB"        NUMBER,            "ID"         NUMBER,            "PARENT_ID"  NUMBER,            "SEQ_NUMBER" NUMBER,       
     "TIME" TIMESTAMP (6)WITH TIME ZONE NOT NULL ENABLE,        "LOG_LEVEL"    NUMBER(4,0) NOT NULL ENABLE,        "DEPTH"        NUMBER(4,0),        "OPERATION"    VARCHAR2(15 BYTE) NOT NULL ENABLE,        "SUBOPERATION" VARCHAR2(20 BYTE),        "STATUS"       VARCHAR2(10 BYTE) NOT NULL ENABLE,        "NAME"         VARCHAR2(20 BYTE) NOT NULL ENABLE,        "VALUE"        VARCHAR2(4000 BYTE),        "DETAILS" CLOB        )        SEGMENT CREATION IMMEDIATE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE        (            INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT        )        TABLESPACE "ATTRIBDW_DATA" LOB        (            "DETAILS"        )        STORE AS BASICFILE        (            TABLESPACE "ATTRIBDW_DATA" ENABLE STORAGE IN ROW CHUNK 8192 RETENTION NOCACHE LOGGING STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)        ) ;COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."INST_ID"IS    'Instance ID';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."SID"IS    'Session ID';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."SERIAL#"IS    'Session serial#';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."USER#"IS    'User ID';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."SQL_ID"IS    'Executing SQL statement ID';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."JOB"IS    'Identifier of job';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."ID"IS    'Current operation ID';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."PARENT_ID"IS    'Parent operation ID';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."SEQ_NUMBER"IS    'Cube build log sequence number';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."TIME"IS    'Time of record';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."LOG_LEVEL"IS    'Verbosity level of record';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."DEPTH"IS    'Nesting depth of record';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."OPERATION"IS    'Current operation';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."SUBOPERATION"IS    'Current suboperation';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."STATUS"IS    'Status of current operation';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."NAME"IS    'Name of record';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."VALUE"IS    'Value of record';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"."DETAILS"IS    'Extra information for record';    COMMENT ON TABLE "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"IS    'Cube operations log';CREATE UNIQUE INDEX "ATTRIBDW_OWN"."SYS_IL0000407301C00018$$" ON "ATTRIBDW_OWN"."CUBE_OPERATIONS_LOG"    (        PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "ATTRIBDW_DATA" PARALLEL (DEGREE 0 INSTANCES 0) ; CUBE REJECTED RECORDS CREATE TABLE "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"        (            "ID"            NUMBER,            "SEQ_NUMBER"    NUMBER,            "ERROR#"        NUMBER(8,0) NOT NULL ENABLE,            
"ERROR_MESSAGE" VARCHAR2(2000 BYTE),            "RECORD#"       NUMBER(38,0),            "SOURCE_ROW" ROWID,            "REJECTED_RECORD" CLOB        )        SEGMENT CREATION IMMEDIATE PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING STORAGE        (            INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT        )        TABLESPACE "ATTRIBDW_DATA" LOB        (            "REJECTED_RECORD"        )        STORE AS BASICFILE        (            TABLESPACE "ATTRIBDW_DATA" ENABLE STORAGE IN ROW CHUNK 8192 RETENTION NOCACHE LOGGING STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)        ) ;COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"."ID"IS    'Current operation ID';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"."SEQ_NUMBER"IS    'Cube build log sequence number';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"."ERROR#"IS    'Error number being reported';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"."ERROR_MESSAGE"IS    'Error text being reported';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"."RECORD#"IS    'Rejected record number';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"."SOURCE_ROW"IS    'Rejected record''s ROWID';    COMMENT ON COLUMN "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"."REJECTED_RECORD"IS    'Rejected record copy';    COMMENT ON TABLE "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"IS    'Cube rejected records log';CREATE UNIQUE INDEX "ATTRIBDW_OWN"."SYS_IL0000407304C00007$$" ON "ATTRIBDW_OWN"."CUBE_REJECTED_RECORDS"    (        PCTFREE 10 INITRANS 2 MAXTRANS 255 STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645 PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT) TABLESPACE "ATTRIBDW_DATA" PARALLEL (DEGREE 0 INSTANCES 0) ;

    Read the article

  • Passed: Exam 70-480: Programming in HTML5 with JavaScript and CSS3

    First off: Mission accomplished successfully. And it was fun! Using the resources listed in my previous article about Learning Content, I'd like to thank Microsoft Technical Evangelists Jeremy Foster and Michael Palermo for their excellent jump start videos on Channel 9, and the various authors at Pluralsight.

    Local Prometric testing centre
    Back in November I chose a local testing centre which was the easiest to access from my office, despite the horrible traffic you might experience here on the island. Actually, it was not the closest one. But due to their website, their awards as a Microsoft Learning Center, and my general curiosity about the premises, I gave FRCI my priority. Boy, how I would regret this decision this morning... The official Prometric exam guide asks any attendee to show up at least 30 minutes prior to the scheduled time of the test. Well, this should have been the easier part, but unfortunately, due to heavier traffic than usual, I arrived only 20 minutes before time. Not too bad, but more to come. The building called 'le Hub' is nicely renovated and provides the right environment for an IT group of companies like FRCI. I think they currently have 5 independent IT departments over there. Even the handling at the reception was straightforward and welcoming, and put me at ease. But then... first shock: "We don't have any exam registration for today." - Hm, that's nice... Here's my mail confirmation from Prometric. First attack successfully handled, and the lady went off again to check their records. Next shock: a couple of minutes later, another guy tried to explain to me that "the staff of the testing centre is already on vacation and the centre is officially closed." - Are you kidding me? Here's the official confirmation by Prometric, and I don't find it funny that I take a day off today only to hear this kind of blubbering nonsense. I thought that I'd be on the safe side choosing a company with a good reputation here on the island. Another 40 (!) minutes later, they finally came back to the waiting area with a pre-filled form about the test appointment. And finally, after an hour of waiting, discussing, restarting the testing PC, and lots of talk, I was allowed to sit down and take the exam.

    Exam details
    Well, you know the rules. Signing an NDA doesn't allow me to provide you with any details about the questions or topics that were covered. Please check out the official exam description, and you're on the right way. Sorry, guys... ;-)

    The result
    "Congratulations! You have passed this Microsoft Certification exam." - In general, I have to admit that the parts on HTML5 and CSS3 were the easiest after all, and that I have to get myself a little bit more familiar with certain JavaScript features like class definitions, inheritance and data security. Anyway, exam passed - who cares about the details?

    Next goal
    Of course, the journey to Microsoft Certifications continues, and my next goal is to pass exams 70-481 - Essentials of Developing Windows Store Apps using HTML5 and JavaScript and 70-482 - Advanced Windows Store App Development using HTML5 and JavaScript. This would allow me to achieve the certification of MCSD: Windows Store Apps using HTML5. I guess during 2013 I'll be busy with various learning and teaching lessons.

    Read the article

  • Dependency Injection Introduction

    - by MarkPearl
    I recently was going over a great book called "Dependency Injection in .Net" by Mark Seemann. So far I have really enjoyed the book and would recommend anyone looking to get into DI to give it a read. Today I thought I would blog about the first example Mark gives in his book to illustrate some of the benefits that DI provides. The ones he lists are:
    - Late binding
    - Extensibility
    - Parallel Development
    - Maintainability
    - Testability

    To illustrate some of these benefits he gives a HelloWorld example using DI that illustrates some of the basic principles. It goes something like this…

    class Program
    {
        static void Main(string[] args)
        {
            var writer = new ConsoleMessageWriter();
            var salutation = new Salutation(writer);
            salutation.Exclaim();
            Console.ReadLine();
        }
    }

    public interface IMessageWriter
    {
        void Write(string message);
    }

    public class ConsoleMessageWriter : IMessageWriter
    {
        public void Write(string message)
        {
            Console.WriteLine(message);
        }
    }

    public class Salutation
    {
        private readonly IMessageWriter _writer;

        public Salutation(IMessageWriter writer)
        {
            _writer = writer;
        }

        public void Exclaim()
        {
            _writer.Write("Hello World");
        }
    }

    If you had asked me a few years ago if I thought this was a good approach to solving the HelloWorld problem, I would have responded with a resounding "No". How could the above be better than the following…

    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("Hello World");
            Console.ReadLine();
        }
    }

    Today, my mind-set has changed because of the pain of past programs. So often we look at a small snippet of code and make judgements, when we need to keep in mind that we will most probably be implementing these patterns in projects with hundreds of thousands of lines of code, and in projects where we have tests that we don't want to break. That's where the first solution outshines the second. Let's see if the first example achieves some of the outcomes that were listed as benefits of DI.

    Could I test the first solution easily? Yes… We could write something like the following using NUnit and RhinoMocks…

    [TestFixture]
    public class SalutationTests
    {
        [Test]
        public void ExclaimWillWriteCorrectMessageToMessageWriter()
        {
            var writerMock = MockRepository.GenerateMock<IMessageWriter>();
            var sut = new Salutation(writerMock);
            sut.Exclaim();
            writerMock.AssertWasCalled(x => x.Write("Hello World"));
        }
    }

    This would test the existing code fine. Let's say we then wanted to extend the original solution so that we had a secure message writer. We could write a class like the following…

    public class SecureMessageWriter : IMessageWriter
    {
        private readonly IMessageWriter _writer;
        private readonly string _secretPassword;

        public SecureMessageWriter(IMessageWriter writer, string secretPassword)
        {
            _writer = writer;
            _secretPassword = secretPassword;
        }

        public void Write(string message)
        {
            if (_secretPassword == "Mark")
            {
                _writer.Write(message);
            }
            else
            {
                _writer.Write("Unauthenticated");
            }
        }
    }

    And then extend our implementation of the program as follows…

    class Program
    {
        static void Main(string[] args)
        {
            var writer = new SecureMessageWriter(new ConsoleMessageWriter(), "Mark");
            var salutation = new Salutation(writer);
            salutation.Exclaim();
            Console.ReadLine();
        }
    }

    Our application has now been successfully extended, and yet we did very little code change. In addition, our existing tests did not break, and we would just need to add tests for the extended functionality. Would this approach allow parallel development?
    Well, I am in two camps on parallel development, but with some planning ahead of time it would allow for it: you would simply need to decide on the interface signature up front and could then have teams develop different sections, programming to that interface. So, this was really just a quick intro to some of the basic concepts of DI that Mark introduces very successfully in his book. I am hoping to blog about this further as I continue through the book, and to list some of the more complex implementations of containers.
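    Coming back to the testability point above, the new SecureMessageWriter can be covered in the same NUnit/RhinoMocks style as the Salutation test. This is only a sketch of what such tests might look like; the test names and the wrong-password value are my own assumptions, not from Seemann's book:

    using NUnit.Framework;
    using Rhino.Mocks;

    [TestFixture]
    public class SecureMessageWriterTests
    {
        [Test]
        public void WriteWithCorrectPasswordDelegatesMessageToInnerWriter()
        {
            // Arrange: mock the inner writer and use the password the example expects.
            var writerMock = MockRepository.GenerateMock<IMessageWriter>();
            var sut = new SecureMessageWriter(writerMock, "Mark");

            // Act
            sut.Write("Hello World");

            // Assert: the message passes through unchanged.
            writerMock.AssertWasCalled(x => x.Write("Hello World"));
        }

        [Test]
        public void WriteWithWrongPasswordWritesUnauthenticated()
        {
            var writerMock = MockRepository.GenerateMock<IMessageWriter>();
            var sut = new SecureMessageWriter(writerMock, "WrongPassword");

            sut.Write("Hello World");

            // The decorator swallows the message and writes "Unauthenticated" instead.
            writerMock.AssertWasCalled(x => x.Write("Unauthenticated"));
        }
    }

    Note how the decorator can be tested in complete isolation from both the console and the Salutation class, which is exactly the kind of seam that constructor injection buys you.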

    Read the article

  • Insert video clip in a LyX presentation and play it in GNU/Linux.

    - by Orjanp
    How can I insert a video clip into a presentation created in LyX? I have seen http://www.latex-community.org/forum/viewtopic.php?f=19&t=48. It works, but there the video starts in the background in an external player. I would prefer it to be played in the presentation itself. If an external player is used, it should at least start in the foreground, but the presentation takes the foreground. I am using evince on GNU/Linux as the PDF viewer, and Beamer as the presentation template. Is it possible to play a video file in an embedded player in the presentation itself? I created an example presentation; the code is found below.

    \documentclass[english]{beamer}
    \usepackage{mathptmx}
    \usepackage[T1]{fontenc}
    \usepackage[latin9]{inputenc}
    \usepackage{amsmath}
    \usepackage{amssymb}
    \makeatletter
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% Textclass specific LaTeX commands.
    % this default might be overridden by plain title style
    \newcommand\makebeamertitle{\frame{\maketitle}}%
    \AtBeginDocument{
      \let\origtableofcontents=\tableofcontents
      \def\tableofcontents{\@ifnextchar[{\origtableofcontents}{\gobbletableofcontents}}
      \def\gobbletableofcontents#1{\origtableofcontents}
    }
    \makeatletter
    \long\def\lyxframe#1{\@lyxframe#1\@lyxframestop}%
    \def\@lyxframe{\@ifnextchar<{\@@lyxframe}{\@@lyxframe<*>}}%
    \def\@@lyxframe<#1>{\@ifnextchar[{\@@@lyxframe<#1>}{\@@@lyxframe<#1>[]}}
    \def\@@@lyxframe<#1>[{\@ifnextchar<{\@@@@@lyxframe<#1>[}{\@@@@lyxframe<#1>[<*>][}}
    \def\@@@@@lyxframe<#1>[#2]{\@ifnextchar[{\@@@@lyxframe<#1>[#2]}{\@@@@lyxframe<#1>[#2][]}}
    \long\def\@@@@lyxframe<#1>[#2][#3]#4\@lyxframestop#5\lyxframeend{%
      \frame<#1>[#2][#3]{\frametitle{#4}#5}}
    \makeatother
    \def\lyxframeend{} % In case there is a superfluous frame end
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%% User specified LaTeX commands.
    \usetheme{Warsaw}
    \usepackage{hyperref}
    \makeatother
    \usepackage{babel}
    \begin{document}
    \title{Testing video}
    \makebeamertitle
    \lyxframeend{}\section{Testing video}
    \lyxframeend{}\subsection{Testing video}
    \lyxframeend{}\lyxframe{Testing video}
    \href{run:video.wmv}{Movie}
    \appendix
    \lyxframeend{}
    \end{document}

    Read the article

  • Is Annotation in JavaScript? If not, how to switch between debug/productive modes in a declarative way

    - by Michael Mao
    Hi all: This is but a curious question. I cannot find any useful links from Google, so it might be better to ask the gurus here. The point is: is there a way to make "annotations" in JavaScript source code so that all code snippets for testing purposes can be 'filtered out' when the project is deployed from the test environment into the real environment? I know that in Java, C# or some other languages, you can assign an annotation just above the function name, such as:

    // it is good to remove the annoying warning messages
    @SuppressWarnings("unchecked")
    public class Tester extends TestingPackage {
        ...
    }

    Basically, I've got a lot of testing code that prints out something into the Firebug console. I don't want to manually "comment out" those calls, because the person who is going to maintain the code might not be aware of all the testing functions, so he/she might just miss one function and the whole thing can be brought down to its knees. One other thing: we might use a minimizer to "shrink" the source code into "human unreadable" code and boost performance (just like jQuery.min), so trying to pick the testing sections out of the mess will not be possible for plain human eyes in the future. Any suggestion is much appreciated.

    Read the article

  • Documentation and Build system for Mono/C#

    - by dcolish
    I'm starting out on a new project and a team member has decided to use C# as the implementation language. I don't have a lot of experience in C#, but a brief reading shows that it's very capable as a complete cross-platform environment. Beyond the language, I've been having trouble selecting tools and workflows for managing the code as the project grows. It should be fairly small (<10K lines), but I would like to have the ability to generate documentation as the project grows, manage any external dependencies that we decide to use, and automate builds and testing. I am wondering what tools are commonly used or considered best practices for this language. I am mainly concerned with how a build system would potentially work on *nix as well as Windows. Are there C#-specific tools, or is Make more common? In addition, I'd like to use a DVCS, but it doesn't look like Visual Studio and MonoDevelop support the same ones. What's the common VCS of choice for C#? For testing, what sort of unit testing is available for C#/Mono? Finally, I know that there are good doc generators, but with the question of the build system, I would really like that to just be a single step in the build, similar to how testing is a step. Normally I'd automate with Hudson, but I am wondering if there is something more specific to the platform. Overall, I'd love to see a solution that provides a decent workflow on both Windows and *nix without a heavy admin burden. I am pretty sure this is the holy grail of project management, so anything that puts me on that path is awesome.
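    For what it's worth, NUnit is one unit testing framework commonly mentioned for C#/Mono, and a test has roughly the following shape (the Calculator class here is purely hypothetical, just to illustrate; this is not a recommendation of one tool over another):

    using NUnit.Framework;

    // Hypothetical class under test, only here to give the test something to call.
    public class Calculator
    {
        public int Add(int a, int b)
        {
            return a + b;
        }
    }

    [TestFixture]
    public class CalculatorTests
    {
        [Test]
        public void AddReturnsTheSumOfItsArguments()
        {
            var calculator = new Calculator();
            Assert.AreEqual(5, calculator.Add(2, 3));
        }
    }

    Such tests can typically be run from the command line (for example via the NUnit console runner) on both *nix and Windows, which makes them straightforward to hook into a CI server like Hudson.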

    Read the article

  • Issues with securing ELMAH file

    - by Kumar
    I am trying to allow access to the elmah.axd file for only a particular login; all others should be denied. I have followed Phil Haack's tutorial for securing the file.

    <httpHandlers>
      <remove verb="*" path="*.asmx"/>
      <add verb="*" path="*.asmx" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
      <add verb="*" path="*_AppService.axd" validate="false" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/>
      <add verb="GET,HEAD" path="ScriptResource.axd" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" validate="false"/>
      <add verb="POST,GET,HEAD" path="admin/elmah.axd" type="Elmah.ErrorLogPageFactory, Elmah" />
    </httpHandlers>
    <location path="admin">
      <system.web>
        <authorization>
          <allow users="[email protected]"/>
          <deny users="*"/>
        </authorization>
      </system.web>
    </location>

    First I logged in as [email protected] and tried to access http://localhost:58961/admin/elmah.axd, and I was rightly redirected to the Login.aspx page. Next I logged in as [email protected] and was able to access the elmah file at http://localhost:58961/admin/elmah.axd. Then I logged in again as [email protected], and this time I was able to access the elmah file. What is the reason for this behavior?

    Read the article

  • Sending Email with attachment sends a blank file.

    - by pankaj
    I am using this code:

    File myDir = new File(getApplicationContext().getFilesDir().getAbsolutePath());
    try {
        Log.i("CSV Testing ", "CSV file creating");
        FileWriter fw = new FileWriter(myDir + "/myfile.csv");
        //
        // write data to file
        //
        Log.i("CSV Testing ", "CSV file created and your data has been saved");

        // Process for sending email with CSV file
        File CSVFile = new File(myDir, "myfile.csv");
        if (CSVFile.exists()) {
            Log.i("CSV FILE", "CSV file exists");
        } else {
            Log.i("CSV FILE", "CSV file not exists");
        }

        Log.i("SEND EMAIL TESTING", "Email sending");
        Intent emailIntent = new Intent(android.content.Intent.ACTION_SEND);
        emailIntent.setType("text/csv");
        emailIntent.putExtra(android.content.Intent.EXTRA_EMAIL, new String[]{"myemailid"});
        emailIntent.putExtra(android.content.Intent.EXTRA_SUBJECT, "my subject");
        emailIntent.putExtra(android.content.Intent.EXTRA_TEXT, "_____________\n Regards \n Pankaj \n ____________ ");
        emailIntent.putExtra(Intent.EXTRA_STREAM, Uri.parse("file://" + CSVFile.getAbsolutePath()));
        emailIntent.setType("message/rfc822");

        // Shows all applications that support the SEND activity
        try {
            startActivity(Intent.createChooser(emailIntent, "Send mail..."));
        } catch (android.content.ActivityNotFoundException ex) {
            showMSG("There are no email clients installed.");
        }
        Log.i("SEND EMAIL TESTING", "Email sent");
    } catch (FileNotFoundException e) {
        Log.i("ExportCSV Exception", e.toString());
    } catch (IOException e) {
        Log.i("ExportCSV Exception", e.toString());
    }

    But it sends myfile.csv as a blank file. I checked it from a file explorer, where myfile.csv is not blank and contains the right data. How can I solve this?

    Read the article
