Search Results

Search found 40479 results on 1620 pages for 'binary files'.


  • Why do some links appear in a new tab and others in a new window?

    - by SAMIR BHOGAYTA
    Originally, it was made to resolve problems on IE8 32 bits when you use a 32 bits OS. I changed "%ProgramFiles(x86)%" var, and now my issue is resolved for IE8 32 bits on my Windows 7 64 bits. Please try it, and tell me if everything work for you. For Win 7 64 bits : 1 - Create a new notepad document and paste this text : @echo off echo. echo IEREREG Version 1.07 for IE8 27.03.2009 echo by Kai Schaetzl http://iefaq.info echo installs and registers (if suitable) all DLLs known to be used by IE8. echo should only take a few seconds, but please be patient echo. REM ****************************** echo registering IE files REM IE files (= part of setup) regsvr32 /s /i browseui.dll REM regsvr32 /s /i browseui.dll,NI (unnecessary) regsvr32 /s corpol.dll regsvr32 /s dxtmsft.dll regsvr32 /s dxtrans.dll REM simple HTML Mail API regsvr32 /s "%ProgramFiles(x86)%\internet explorer\hmmapi.dll" REM group policy snap-in regsvr32 /s ieaksie.dll REM smart screen regsvr32 /s ieapfltr.dll REM ieak branding regsvr32 /s iedkcs32.dll REM dev tools regsvr32 /s "%ProgramFiles(x86)%\internet explorer\iedvtool.dll" regsvr32 /s iepeers.dll REM Symptom: IE8 closes immediately on launch, missing from IE7 regsvr32 /s "%ProgramFiles(x86)%\internet explorer\ieproxy.dll" REM no install point anymore REM regsvr32 /s /i iesetup.dll REM no reg point anymore REM regsvr32 /s imgutil.dll regsvr32 /s /i /n inetcpl.cpl REM no install point anymore REM regsvr32 /s /i inseng.dll regsvr32 /s jscript.dll REM license manager regsvr32 /s licmgr10.dll REM regsvr32 /s msapsspc.dll REM regsvr32 /s mshta.exe REM VS debugger regsvr32 /s msdbg2.dll REM no install point anymore REM regsvr32 /s /i mshtml.dll regsvr32 /s mshtmled.dll regsvr32 /s msident.dll REM no reg point anymore REM regsvr32 /s msrating.dll REM multimedia timer regsvr32 /s mstime.dll REM no install point anymore REM regsvr32 /s /i occache.dll REM process debug manager regsvr32 /s "%ProgramFiles(x86)%\internet explorer\pdm.dll" REM no reg point anymore REM regsvr32 /s pngfilt.dll REM regsvr32 /s /i setupwbv.dll (not there anymore!) regsvr32 /s tdc.ocx regsvr32 /s /i urlmon.dll REM regsvr32 /s /i urlmon.dll,NI,HKLM regsvr32 /s vbscript.dll REM VML renderer regsvr32 /s "%CommonProgramFiles%\microsoft shared\vgx\vgx.dll" REM no install point anymore REM regsvr32 /s /i webcheck.dll regsvr32 /s /i /n wininet.dll REM ****************************** echo registering system files REM additional system dlls known to be used by IE REM added 11.05.2006 Symptom: Add-Ons-Manager menu entry is present but nothing happens regsvr32 /s extmgr.dll REM added 12.05.2006 Symptom: Javascript links don't work (Robin Walker) .NET hub file regsvr32 /s mscoree.dll REM added 23.03.2009 Symptom: Find on this page is blank regsvr32 /s oleacc.dll REM added 24.03.2009 Symptom: Printing problems, open in new window regsvr32 /s ole32.dll REM mscorier.dll REM mscories.dll REM Symptom: open in new tab/window not working regsvr32 /s actxprxy.dll regsvr32 /s asctrls.ocx regsvr32 /s cdfview.dll regsvr32 /s comcat.dll regsvr32 /s /i /n comctl32.dll regsvr32 /s cryptdlg.dll regsvr32 /s /i /n digest.dll regsvr32 /s dispex.dll regsvr32 /s hlink.dll regsvr32 /s mlang.dll regsvr32 /s mobsync.dll regsvr32 /s /i msieftp.dll REM regsvr32 /s msnsspc.dll #no entry point regsvr32 /s msr2c.dll regsvr32 /s msxml.dll regsvr32 /s oleaut32.dll REM regsvr32 /s plugin.ocx #no entry point regsvr32 /s proctexe.ocx REM plus DllRegisterServerEx ExA ExW ... ? 
regsvr32 /s /i scrobj.dll REM shdocvw.dll hasn't been updated for IE7 and IE8, it still registers itself for the Windows Internet Controls regsvr32 /s /i shdocvw.dll regsvr32 /s sendmail.dll REM ****************************** REM PKI/crypto functionality REM initpki can take very long to run and is rarely a problem REM if there are problems with crypto, SSL, certificates REM remove the three following REMs from the lines REM echo We are almost done except one crypto file REM echo but this will take very long, be patient! REM regsvr32 /s /i:A initpki.dll REM ****************************** REM tabbed browser, do at the end, why originally with /n ? regsvr32 /s /i ieframe.dll REM ****************************** echo correcting bugs in the registry REM do some corrective work REM Symptom: new tabs page cannot display content because it cannot access the controls (added 27. 3.2009) REM This is a result of a bug in shdocvw.dll (see above), probably only on Windows XP reg add "HKCR\TypeLib\{EAB22AC0-30C1-11CF-A7EB-0000C05BAE0B}\1.1\0\win32" /ve /t REG_SZ /d %systemroot%\system32\ieframe.dll /f REM ****************************** echo all tasks have been finished echo. pause 2 - Close all your IE windows and processes. 3 - Save your document on your Desktop by example, with the .bat extension. Right-click on it, and select "Run as administrator". 4 - Test if this tip resolved your issue by openning IE. If you use a Windows 32 bits version, please replace %ProgramFiles(x86)% by %ProgramFiles% in your .bat file.
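
    A note on scripting the same repair: the batch file above boils down to running regsvr32 /s over a list of DLLs and prefixing a few of them with the right Internet Explorer folder for 32-bit versus 64-bit Windows. The sketch below illustrates that technique in Python under a few assumptions (it covers only a small subset of the DLL list, and it must be run from an elevated prompt); it is not a replacement for the full script.

    ```python
    import os
    import subprocess

    # Small, illustrative subset of the DLLs the batch file above registers.
    IE_DLLS = ["browseui.dll", "corpol.dll", "dxtmsft.dll", "dxtrans.dll",
               "iepeers.dll", "jscript.dll", "vbscript.dll", "urlmon.dll"]

    def ie_program_files():
        # On 64-bit Windows the 32-bit IE lives under %ProgramFiles(x86)%;
        # on 32-bit Windows that variable is absent, so fall back to %ProgramFiles%.
        return os.environ.get("ProgramFiles(x86)", os.environ["ProgramFiles"])

    def register(dll):
        # /s = silent; regsvr32 returns a non-zero exit code if registration fails.
        result = subprocess.run(["regsvr32", "/s", dll])
        print(f"{dll}: {'ok' if result.returncode == 0 else 'failed'}")

    if __name__ == "__main__":
        for dll in IE_DLLS:
            register(dll)
        # Files that live in the IE folder need their full path, as in the batch file.
        ie_dir = os.path.join(ie_program_files(), "Internet Explorer")
        register(os.path.join(ie_dir, "ieproxy.dll"))
    ```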

    Read the article

  • Monitor System Resources from the Windows 7 Taskbar

    - by Asian Angel
    The problem with most system monitoring apps is that they get covered up with all of your open windows, but you can solve that problem by adding monitoring apps to the Taskbar. Setting Up & Using SuperbarMonitor All of the individual monitors and the .dll files necessary to run them come in a single zip file for your convenience. Simply unzip the contents, add them to an appropriate “Program Files Folder”, and create shortcuts for the monitors that you would like to use on your system. For our example we created shortcuts for all five monitors and set the shortcuts up in their own “Start Menu Folder”. You can see what the five monitors (Battery, CPU, Disk, Memory, & Volume) look like when running…they are visual in appearance without text to clutter up the looks. The monitors use colors (red, green, & yellow) to indicate the amount of resources being used for a particular category. Note: Our system is desktop-based but the “Battery Monitor” was shown for the purposes of demonstration…thus the red color seen here. Hovering the mouse over the “Battery, CPU, Disk, & Memory Monitors” on our system displayed a small blank thumbnail. Note: The “Battery Monitor” may or may not display more when used on your laptop. Going one step further and hovering the mouse over the thumbnails displayed a small blank window. There really is nothing that you will need to worry with outside of watching the color for each individual monitor. Nice and simple! The one monitor with extra features on the thumbnail was the “Volume Monitor”. You can turn the volume down, up, on, or off from here…definitely useful if you have been wanting to hide the “Volume Icon” in the “System Tray”. You can also pin the monitors to your “Taskbar” if desired. Keep in mind that if you do close any of the monitors they will “temporarily” disappear from the “Taskbar” until the next time they are started. Note: If you want the monitors to start with your system each time you will need to add the appropriate shortcuts to the “Startup Sub-menu” in your “Start Menu”. Conclusion If you have been wanting a nice visual way to monitor your system’s resources then SuperbarMonitor is definitely worth trying out. Links Download SuperbarMonitor
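
    The note about adding shortcuts to the Startup sub-menu can also be scripted. The following is only a rough sketch under stated assumptions: it requires the pywin32 package, and the install folder and executable names used here are hypothetical placeholders for wherever you actually unzipped the monitors.

    ```python
    import os
    import win32com.client  # requires the pywin32 package

    # Hypothetical install folder and executable names -- adjust to your setup.
    MONITOR_DIR = r"C:\Program Files\SuperbarMonitor"
    MONITORS = ["CpuMonitor.exe", "DiskMonitor.exe", "MemoryMonitor.exe", "VolumeMonitor.exe"]

    # Per-user Startup folder on Windows 7.
    STARTUP = os.path.join(os.environ["APPDATA"],
                           r"Microsoft\Windows\Start Menu\Programs\Startup")

    shell = win32com.client.Dispatch("WScript.Shell")
    for exe in MONITORS:
        # Create a .lnk in the Startup folder pointing at the monitor executable.
        shortcut = shell.CreateShortCut(os.path.join(STARTUP, os.path.splitext(exe)[0] + ".lnk"))
        shortcut.TargetPath = os.path.join(MONITOR_DIR, exe)
        shortcut.WorkingDirectory = MONITOR_DIR
        shortcut.Save()
    ```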

    Read the article

  • Improve Playback Using Enhancements in Windows Media Player 12

    - by DigitalGeekery
    Are you looking for ways to improve the playback of your media in Windows Media Player 12? We’ll show you how to do that by using the enhancements in WMP 12. If you are in Library mode, you’ll need to click the icon at the lower right to switch to Now Playing mode. Right-click anywhere in Media Player while in Now Playing mode, select Enhancements, and select any of the available options. You can switch between the individual enhancements by clicking the right and left buttons at the top left. Crossfading and Auto Volume Leveling The Auto Volume Leveling setting is a simple on/off toggle that applies if your MP3 or WMA files have volume leveling information values. You can automatically add volume leveling information values to all files you add to your library by switching to Library view, going to Tools > Options, and selecting Add volume leveling information values for new files on the Library tab. Click OK when finished. Crossfading will gradually decrease the volume of the song that is ending (fade out) and increase the volume of the song that is beginning. Click Turn on Crossfading and then click and drag the slider left or right to change the amount of overlap between tracks. Graphic Equalizer The graphic equalizer is toggled on and off by clicking Turn on / Turn off at the top left. You can select pre-defined equalizer settings by music genre by clicking the Default list. The radio buttons on the left allow you to move the sliders individually, in a loose group, or in a tight group. You can always return to the default settings by clicking Reset. Play Speed Settings Choose a pre-defined setting by clicking Slow, Normal, or Fast. Uncheck Snap slider to common speeds, then move the slider right or left to your desired speed. If nothing else, these settings provide a little fun and amusement. Quiet Mode Quiet mode will level out any sharp volume highs and lows within a single track. Simply toggle the setting on or off and select whether you prefer Medium difference or Little difference by selecting one of the radio buttons. SRS WOW effects SRS WOW effects enhance low-frequency and stereo sound performance. Click Turn on to enable the TruBass and WOW Effect sliders. You can also optimize for your speaker type: click to switch between Regular, Large, and Headphones. Video Settings Video Settings allow you to adjust the Hue, Brightness, Saturation, and Contrast. You can also adjust the zoom settings by clicking Select video zoom settings. Dolby Digital Settings Choose between Normal, Night, and Theater settings to adjust the audio for Dolby Digital content. This setting will only affect media with Dolby Digital sound. Looking for more ways to improve your media experience in WMP 12? Check out how to update metadata and cover art and how to share media with other Windows 7 computers on your home network.

    Read the article

  • MEB: Taking Incremental Backup using last successful backup

    - by Sagar Jauhari
    Introduction In MySQL Enterprise Backup v3.7.0 (MEB 3.7.0) a new option '–incremental-base' was introduced. Using this option a user can take in incremental backup without specifying the '–start-lsn' option. Description of this option can be found here. Instead of '–start-lsn' the user can provide the location of the last full backup or incremental backup using the 'dir:' prefix. MEB would extract the end LSN of this backup from the mysql.backup_history table as well as the backup_variables.txt file (for verification) to use it as the start LSN of the incremental backup. Because of popular demand, in MEB 3.7.1 the option '-incremental-base' has been extended further. The idea is to allow the user to take an incremental backup as easily as possible using the '–incremental-base' option. With the new option MEB queries the backup_history table for the last successful backup and uses its end LSN as the start LSN for the new incremental backup. It should be noted that the last successful backup is used irrespective of the location of the backup. Details A new prefix 'history:' has been introduced for the –incremental-base option and currently the only permissible value is the string "last_backup". So using the new option an incremental backup can be taken with the following command: $ mysqlbackup --incremental --incremental-backup-dir=/media/mysqlbackup-repo/ --incremental-base=history:last_backup backup When MEB attempts to extract the end LSN of the last successful backup from the mysql.backup_history table, it also scans the corresponding backup destination for the old backup and tries to read the meta files at this backup destination. If a valid backup still exists at the backup destination and the meta files can be read, MEB compares the end LSN found in the mysql.backup_history table with the end LSN found in the backup meta files of the old backup. Assuming that the host MySQL server is alive and mysql.backup_history can be accessed by MEB, the behaviour of MEB with respect to verification of the old end LSN can be summarized as follows: If 'BD' is the backup destination of the last successful backup in mysql.backup_history table and 'BHT' is the mysql.backup_history table if can_read_files_at_BD:     if end_lsn_found_at_BD == end_lsn_of_last_backup_in_BHT:         continue_with_backup()     else         return_with_error() else     continue_with_backup() Advantages Apart from ease of usability an important advantage of this option is that the user can do repeated incremental backups without changing the command line. This is possible using the '–with-timestamp' option along with this new option. For example, the following command $ mysqlbackup --with-timestamp --incremental --incremental-backup-dir=/media/mysqlbackup-repo/ --incremental-base=history:last_backup backup  can be used to perform successive incremental backups in the directory /media/mysqlbackup-repo . Limitations The option '--incremental-base=history:last_backup' should not be used when the user takes different kinds of concurrent backups on the same MySQL server (say different partial backups at multiple locations). should not be used after any temporary or experimental backups performed on the server (which where successful!). needs to be used with precaution since any intermediate successful backup without the –no-connection will be used as the base backup for the next incremental backup.  
will give an error if a valid backup exists at the location of the last successful backup but its end LSN differs from that of the last successful backup recorded in the backup_history table.
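
    Since the command line for --incremental-base=history:last_backup never changes, it drops neatly into a scheduled job. Below is a minimal sketch of such a wrapper, assuming mysqlbackup is on the PATH and reusing the /media/mysqlbackup-repo directory from the examples above; it is an illustration, not part of MEB.

    ```python
    import subprocess
    import sys

    # The same command shown above: a repeatable incremental backup that always
    # bases itself on the last successful backup recorded in mysql.backup_history.
    CMD = [
        "mysqlbackup",
        "--with-timestamp",                        # new sub-directory per run
        "--incremental",
        "--incremental-backup-dir=/media/mysqlbackup-repo/",
        "--incremental-base=history:last_backup",  # start LSN taken from backup_history
        "backup",
    ]

    def run_incremental_backup():
        result = subprocess.run(CMD)
        if result.returncode != 0:
            print("mysqlbackup failed with exit code", result.returncode, file=sys.stderr)
        return result.returncode

    if __name__ == "__main__":
        sys.exit(run_incremental_backup())
    ```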

    Read the article

  • How do I get FEATURE_LEVEL_9_3 to work with shaders in Direct3D11?

    - by Dominic
    Currently I'm going through some tutorials and learning DX11 on a DX10 machine (though I just ordered a new DX11 compatible computer) by means of setting the D3D_FEATURE_LEVEL_ setting to 10_0 and switching the vertex and pixel shader versions in D3DX11CompileFromFile to "vs_4_0" and "ps_4_0" respectively. This works fine as I'm not using any DX11-only features yet. I'd like to make it compatible with DX9.0c, which naively I thought I could do by changing the feature level setting to 9_3 or something and taking the vertex/pixel shader versions down to 3 or 2. However, no matter what I change the vertex/pixel shader versions to, it always fails when I try to call D3DX11CompileFromFile to compile the vertex/pixel shader files when I have D3D_FEATURE_LEVEL_9_3 enabled. Maybe this is due to the the vertex/pixel shader files themselves being incompatible for the lower vertex/pixel shader versions, but I'm not expert enough to say. My shader files are listed below: Vertex shader: cbuffer MatrixBuffer { matrix worldMatrix; matrix viewMatrix; matrix projectionMatrix; }; struct VertexInputType { float4 position : POSITION; float2 tex : TEXCOORD0; float3 normal : NORMAL; }; struct PixelInputType { float4 position : SV_POSITION; float2 tex : TEXCOORD0; float3 normal : NORMAL; }; PixelInputType LightVertexShader(VertexInputType input) { PixelInputType output; // Change the position vector to be 4 units for proper matrix calculations. input.position.w = 1.0f; // Calculate the position of the vertex against the world, view, and projection matrices. output.position = mul(input.position, worldMatrix); output.position = mul(output.position, viewMatrix); output.position = mul(output.position, projectionMatrix); // Store the texture coordinates for the pixel shader. output.tex = input.tex; // Calculate the normal vector against the world matrix only. output.normal = mul(input.normal, (float3x3)worldMatrix); // Normalize the normal vector. output.normal = normalize(output.normal); return output; } Pixel Shader: Texture2D shaderTexture; SamplerState SampleType; cbuffer LightBuffer { float4 ambientColor; float4 diffuseColor; float3 lightDirection; float padding; }; struct PixelInputType { float4 position : SV_POSITION; float2 tex : TEXCOORD0; float3 normal : NORMAL; }; float4 LightPixelShader(PixelInputType input) : SV_TARGET { float4 textureColor; float3 lightDir; float lightIntensity; float4 color; // Sample the pixel color from the texture using the sampler at this texture coordinate location. textureColor = shaderTexture.Sample(SampleType, input.tex); // Set the default output color to the ambient light value for all pixels. color = ambientColor; // Invert the light direction for calculations. lightDir = -lightDirection; // Calculate the amount of light on this pixel. lightIntensity = saturate(dot(input.normal, lightDir)); if(lightIntensity > 0.0f) { // Determine the final diffuse color based on the diffuse color and the amount of light intensity. color += (diffuseColor * lightIntensity); } // Saturate the final light color. color = saturate(color); // Multiply the texture pixel and the final diffuse color to get the final pixel color result. color = color * textureColor; return color; }

    Read the article

  • General Availability of Oracle E-Business Suite Plug-in 12.1.0.1.0

    - by user810030
    We are pleased to announce the General Availability of Oracle E-Business Suite Plug-in 12.1.0.1.0, an integral part of Application Management Suite for Oracle E-Business Suite. The combination of Enterprise Manager 12c Cloud Control and the Application Management Suite combines functionality that was available in the standalone Application Management Pack for Oracle E-Business Suite and Application Change Management Pack for Oracle E-Business Suite with Oracle’s Real User Experience Insight product and the Configuration & Compliance capabilities to provide the most complete solution for managing Oracle E-Business Suite applications. The features that were available in the standalone management packs are now packaged into the Oracle E-Business Suite Plug-in, which is now fully certified with Oracle Enterprise Manager 12c Cloud Control. This latest plug-in extends Cloud Control with E-Business Suite specific system management capabilities and features enhanced change management support. This new release offers the following key enhancements: General: Oracle Enterprise Manager 12c Base Platform uptake: All components of the management suite are certified with Oracle Enterprise Manager 12c Cloud Control. Security: Privilege Delegation: The Oracle E-Business Suite Plug-in now extends Enterprise Manager’s privilege delegation through Sudo and PowerBroker to Oracle E-Business Suite Plug-in host targets.  Privileges and Roles for Managing Oracle E-Business Suite: This release includes new ready-to-use target and resource privileges to monitor, manage, and perform Change Management functionality.  Cloning: Named Credentials Uptake in Cloning: The Clone module transactions now let users leverage the Named Credential feature introduced in Enterprise Manager 12c, thereby passing all the benefits of Named Credentials features in Enterprise Manager to the Oracle E-Business Suite Plug-in users.  Smart Clone improvements: The new and improved Smart Clone UI supports the adding of "pre and post" custom steps to a copy of the ready-to-use cloning deployment procedure. Now a user can pass parameters to the custom steps through the interview screen of the UI as well as pass ready-to-use parameters to the custom steps.  Change Management Enhancements Application Management Suite for Oracle E-Business Suite provides a centralized view to monitor and orchestrate changes (both functional and technical) across multiple Oracle E-Business Suite systems. In this latest release, it provides even more control and flexibility in managing Oracle E-Business Suite changes. Customization Manager: Support for longer file names: Customization Manager now handles file names up to thirty characters in length.  Patch Manager: Queuing of Patch Manager Runs: This feature allows patch runs to queue up if Patch Manager detects a specific target is in a blackout state.  Multi-node system patching: The patch run interview has been enhanced to allow Enterprise Manager Administrator to choose which nodes adpatch will run on.  New AD Administration Options: The patch run interview has been extended to include AD Administration Options "Relink Application Programs", "Generate Product Jars Files", "Generate Report Files", and "Generate Form Files".  Release Technical Details Product documentation for the plug-in is available on My Oracle Support as note 1434392.1.  
The Oracle E-Business Suite Plug-in can be accessed in one of the following ways:  Fresh install  Enterprise Manager Store  Oracle Software Delivery Cloud Upgrades  Oracle Technology Network Please refer to the Application Management Pack for Oracle E-Business Suite Guide for further details.  Related Software Component Oracle Real User Experience Insight 12.1.0.0.1  Product documentation is available on Oracle Technology Network in the "Oracle Enterprise Manager 12c Release 1 (12.1) Documentation" set under the "Associated Document" tab. (http://docs.oracle.com/cd/E26370_01/index.htm)  Product may be downloaded individually from Oracle Technology Network software download page for Oracle Enterprise Manager under "Additional Enterprise Manager Downloads." (http://www.oracle.com/technetwork/oem/grid-control/downloads/index.html)  Product may also be downloaded individually from the Oracle Software Delivery Cloud. Select "Oracle Enterprise Manager" product pack, "Oracle Real User Experience Insight 12c Release 1 Media Pack for x8  Collateral Can be accessed on the Application Management Page on Oracle Technology Network

    Read the article

  • How can I best manage making open source code releases from my company's confidential research code?

    - by DeveloperDon
    My company (let's call them Acme Technology) has a library of approximately one thousand source files that originally came from its Acme Labs research group, incubated in a development group for a couple years, and has more recently been provided to a handful of customers under non-disclosure. Acme is getting ready to release perhaps 75% of the code to the open source community. The other 25% would be released later, but for now, is either not ready for customer use or contains code related to future innovations they need to keep out of the hands of competitors. The code is presently formatted with #ifdefs that permit the same code base to work with the pre-production platforms that will be available to university researchers and a much wider range of commercial customers once it goes to open source, while at the same time being available for experimentation and prototyping and forward compatibility testing with the future platform. Keeping a single code base is considered essential for the economics (and sanity) of my group who would have a tough time maintaining two copies in parallel. Files in our current base look something like this: > // Copyright 2012 (C) Acme Technology, All Rights Reserved. > // Very large, often varied and restrictive copyright license in English and French, > // sometimes also embedded in make files and shell scripts with varied > // comment styles. > > > ... Usual header stuff... > > void initTechnologyLibrary() { > nuiInterface(on); > #ifdef UNDER_RESEARCH > holographicVisualization(on); > #endif > } And we would like to convert them to something like: > // GPL Copyright (C) Acme Technology Labs 2012, Some rights reserved. > // Acme appreciates your interest in its technology, please contact [email protected] > // for technical support, and www.acme.com/emergingTech for updates and RSS feed. > > ... Usual header stuff... > > void initTechnologyLibrary() { > nuiInterface(on); > } Is there a tool, parse library, or popular script that can replace the copyright and strip out not just #ifdefs, but variations like #if defined(UNDER_RESEARCH), etc.? The code is presently in Git and would likely be hosted somewhere that uses Git. Would there be a way to safely link repositories together so we can efficiently reintegrate our improvements with the open source versions? Advice about other pitfalls is welcome.
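
    On the tooling question: utilities such as unifdef (for example, unifdef -UUNDER_RESEARCH) are commonly suggested for stripping conditional blocks, and the copyright swap is usually a small script. The sketch below is one rough way to do both passes in Python, under clear assumptions: it only handles un-nested #ifdef UNDER_RESEARCH / #if defined(UNDER_RESEARCH) ... #endif blocks with no #else branch, and the replacement header text is a placeholder.

    ```python
    import re
    import sys
    from pathlib import Path

    NEW_HEADER = """\
    // GPL Copyright (C) Acme Technology Labs 2012, Some rights reserved.
    // Placeholder header -- replace with the real open source notice.
    """

    # Matches an "#ifdef UNDER_RESEARCH" or "#if defined(UNDER_RESEARCH)" block up
    # to the next "#endif".  Deliberately naive: no nesting, no #else handling,
    # which is exactly what dedicated tools such as unifdef are for.
    IFDEF_BLOCK = re.compile(
        r"^[ \t]*#[ \t]*(?:ifdef[ \t]+UNDER_RESEARCH\b|if[ \t]+defined\s*\(\s*UNDER_RESEARCH\s*\))"
        r".*?^[ \t]*#[ \t]*endif[^\n]*\n",
        re.MULTILINE | re.DOTALL,
    )

    # A leading run of '//' comment lines at the top of the file (the old copyright).
    OLD_HEADER = re.compile(r"\A(?://[^\n]*\n)+")

    def convert(path: Path) -> None:
        text = path.read_text()
        text = IFDEF_BLOCK.sub("", text)          # drop research-only blocks
        text = OLD_HEADER.sub(NEW_HEADER, text)   # swap the copyright header
        path.write_text(text)

    if __name__ == "__main__":
        for name in sys.argv[1:]:
            convert(Path(name))
            print("converted", name)
    ```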

    Read the article

  • How does the Trash Can work, and where can I find official documentation, reference, or specification for it?

    - by MestreLion
    When trying to manage trash can from mounted NTFS volumes, I ended up reading FreeDesktop.org's reference on it. Poking around and doing some tests, I realized Ubuntu/Gnome does not follow the specs 100%. Here's why: For non-/ partitions, it always uses <driveroot>/.Trash-<uid>, It never used <driveroot>/.Trash/<uid>, even when i created it in advance. While this works, it's annoying: if I have 15 users, I end up with 15 /.Trash-xxx folders in my drive, while the other approach would still give a single folder (with 15 sub-folders). That "pollution" in my drives is very unpleasant. And specs say "If an $topdir/.Trash directory is absent, an $topdir/.Trash-$uid directory is to be used". Well, it IS present, so why does it never use it? root trash does not work, at least not out of the box. Open nautilus as root and click on trash; it gives an error. Try to delete any file, it says "it can't move to trash". Ok, I know this can be fixed by creating /root/.local/share. But specs says "A “home trash” directory SHOULD be automatically created for any new user. If this directory is needed for a trashing operation but does not exist, the implementation SHOULD automatically create it, without any warnings or delays.". Why the error then? Bug? Why must I change /etc/fstab entries for mounted volumes, adding options like uid and guid, if the volumes are already mounted as RW for everyone? These are just some examples of deviation from the standard. So, the question is: "If Ubuntu does not adhere 100% to the spec, HOW exactly does the trash work? WHERE can i find a technical reference for Ubuntu's implementation of the trash?" By the way: if Ubuntu does happen to follow specs, please tell me what I am doing wrong, especially regarding the /.Trash-<uid> vs /.Trash/<uid> issue. Thanks! EDIT: Some more info: If a given fs has no support for the sticky bit (VFAT, NTFS), it probably doesn't have for permissions either (at least VFAT surely doesn't). So what prevents one user from purging / restoring other users' ./Trash-xxx ? If one can read/write his own Trash, one can do the same for the whole drive, including other's trashes, correct? Or does Gnome have some kind of "extra" protection on ./Trash-xxx folders on VFAT/NTFS fs? If Linux can "emulate" file permissions on NTFS mounting by editing /fstab uid and gid options, can it also "emulate" the sticky bit? I would really prefer to use /.Trash/xxx format... For the root issue: for the / partition, I can use trash as root, and it goes to /root/.local/shate/Trash. But if I click on Nautilus "Trash" (as root), I get an error. Don't you? So files are correctly trashed, but I can't access it. All I can do is manually "purge" them (by deleting files on /root/.local/shate/Trash), but restoring would be very tricky (opening info files and manually moving, etc.). For non-/ partitions (or at least for VFAT/NTFS), I can not even use trash as root: it does not create a ./Trash-0 folder, it simply says "Cannot trash, want to permanently delete?" Why? About fstab: i use it for a permanent mount for my NTFS partitions. I have several, and if not "pre-mounted" they really clutter the desktop and/or Nautilus. I'd rather have it pre-mounted, integrated in my fs, in mounts like /data , /windows/xp , /windows/vista , and so on, and leave /media and its "mount/unmount" flexibility just for truly removable drives. 
So, if Ubuntu/Gnome truly follows the spec, is there any way to fix the root issues and to "emulate" the sticky bit for (at least) my fstab'ed NTFS fixed partitions?
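
    For reference, the directory-selection rule the FreeDesktop.org specification describes (and which the question says Ubuntu appears to skip) is small enough to write down directly. The sketch below is an illustration of that rule as quoted above, not of what Nautilus actually implements.

    ```python
    import os
    import stat

    def trash_dir_for(topdir: str, uid: int) -> str:
        """Pick the trash directory for a mounted volume per the FreeDesktop spec:
        use $topdir/.Trash/$uid only if $topdir/.Trash exists, is not a symlink and
        has the sticky bit set; otherwise fall back to $topdir/.Trash-$uid."""
        shared = os.path.join(topdir, ".Trash")
        if (os.path.isdir(shared)
                and not os.path.islink(shared)
                and os.stat(shared).st_mode & stat.S_ISVTX):  # sticky bit
            return os.path.join(shared, str(uid))
        return os.path.join(topdir, f".Trash-{uid}")

    if __name__ == "__main__":
        print(trash_dir_for("/media/data", os.getuid()))
    ```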

    Read the article


  • Can't boot into Ubuntu after Windows upgrade

    - by VanceAnce
    After my lastest update for Ubuntu and Windows XP, I got a Grub error on booting the next day. ls lists the following (without () ): sd0 sd1, msdos sd2 sd5 sd6 When I tried to get into one with (sd0,xy)/ it doesn't detect system or unknown file system error. I tried to boot to a live session with a Knoppix live CD and found out that all data exists. I also tried to recover with TestDisk and it finds all systems. Here is the test disk result: Start End Size in sectors 1 * HPFS - NTFS 0 1 1 7079 254 63 113740137 2 E extended LBA 7080 0 1 12161 254 63 81642330 5 L HPFS - NTFS 7080 1 1 10266 254 63 51199092 [Schule] X extended 12031 30 1 12161 254 63 2102625 6 L Linux Swap 12031 31 33 12161 254 63 2102530 I've 1 winxp-home, 1x Ubuntu (ext3+swap) and 1 winxp prof and then I wrote on mbr with TestDisk but I always get the same errors with Grub. What should I do? I need both XP and Ubuntu. Help me please. more infos in answers below - sry for thos confusing style but im working on diff live system and browsers and have to reboot always the boot info script output is also down below maybe an advanced user can correct my fail posting - after i can solve my issuse i will register here thanx and pls help me with those weired issues ! as i still cant just comment my own answer or those on top i again has to put it here as a sepperate answer..... (or even edit - maybe an browser failur using the live cds ... cause this posti can edit) here the bootinfo script output - but the result is the same as with TestDisk ... but it looks worse - cause it also doesnt detect my old ubuntu ... but there wasnt a eares process or overwrite process visibile ending the last working session output: Boot Info Script 0.61 [1 April 2012] ============================= Boot Info Summary: =============================== = Syslinux MBR (4.04 and higher) is installed in the MBR of /dev/sda. sda1: __________________________________________ File system: ntfs Boot sector type: Windows XP: NTFS Boot sector info: No errors found in the Boot Parameter Block. Operating System: Windows XP Boot files: /boot.ini /ntldr /NTDETECT.COM sda2: __________________________________________ File system: Extended Partition Boot sector type: - Boot sector info: sda5: __________________________________________ File system: ntfs Boot sector type: Windows XP: NTFS Boot sector info: According to the info in the boot sector, sda5 starts at sector 63. 
Operating System: Windows XP Boot files: sda6: __________________________________________ File system: swap Boot sector type: - Boot sector info: ============================ Drive/Partition Info: ============================= Drive: sda _______________________________________ Disk /dev/sda: 100.0 GB, 100030242816 bytes 255 heads, 63 sectors/track, 12161 cylinders, total 195371568 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes Partition Boot Start Sector End Sector # of Sectors Id System /dev/sda1 * 63 113,740,199 113,740,137 7 NTFS / exFAT / HPFS /dev/sda2 113,740,200 195,382,529 81,642,330 f W95 Extended (LBA) /dev/sda5 113,740,263 164,939,354 51,199,092 7 NTFS / exFAT / HPFS /dev/sda6 193,280,000 195,382,529 2,102,530 82 Linux swap / Solaris /dev/sda2 ends after the last sector of /dev/sda /dev/sda6 ends after the last sector of /dev/sda "blkid" output: ____________________________________ Device UUID TYPE LABEL /dev/loop0 squashfs /dev/sda1 6596D86768011128 ntfs /dev/sda5 1300D3B7744EC141 ntfs Schule /dev/sda6 5b95f2a1-4145-43a5-ac51-41d7dd32b213 swap ================================ Mount points: ================================= Device Mount_Point Type Options /dev/loop0 /rofs squashfs (ro,noatime) /dev/sr0 /cdrom iso9660 (ro,noatime) ================================ sda1/boot.ini: ================================ [boot loader] timeout=30 default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS [operating systems] multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Home Edition" /fastdetect /NoExecute=OptOut multi(0)disk(0)rdisk(0)partition(2)\WINDOWS="Microsoft Windows XP Professional" /fastdetect [spybotsd] timeout.old=30 the last part shows that i now use the windows boot loader so that i can acces at least one OS but shouldnt i also get acces to my ubuntu partitions with live-linux-cds ? or do i have to boot with grub to get to those files only ??

    Read the article

  • SSAS: Utility to export SQL code from your cube's Data Source View (DSV)

    - by DrJohn
    When you are working on a cube, particularly in a multi-person team, it is sometimes necessary to review what changes that have been done to the SQL queries in the cube's data source view (DSV). This can be a problem as the SQL editor in the DSV is not the best interface to review code. Now of course you can cut and paste the SQL into SSMS, but you have to do each query one-by-one. What is worse your DBA is unlikely to have BIDS installed, so you will have to manually export all the SQL yourself and send him the files. To make it easy to get hold of the SQL in a Data Source View, I developed a C# utility which connects to an OLAP database and uses Analysis Services Management Objects (AMO) to obtain and export all the SQL to a series of files. The added benefit of this approach is that these SQL files can be placed under source code control which means the DBA can easily compare one version with another. The Trick When I came to implement this utility, I quickly found that the AMO API does not give direct access to anything useful about the tables in the data source view. Iterating through the DSVs and tables is easy, but getting to the SQL proved to be much harder. My Google searches returned little of value, so I took a look at the idea of using the XmlDom to open the DSV’s XML and obtaining the SQL from that. This is when the breakthrough happened. Inspecting the DSV’s XML I saw the things I was interested in were called TableType DbTableName FriendlyName QueryDefinition Searching Google for FriendlyName returned this page: Programming AMO Fundamental Objects which hinted at the fact that I could use something called ExtendedProperties to obtain these XML attributes. This simplified my code tremendously to make the implementation almost trivial. So here is my code with appropriate comments. The full solution can be downloaded from here: ExportCubeDsvSQL.zip   using System;using System.Data;using System.IO;using Microsoft.AnalysisServices; ... class code removed for clarity// connect to the OLAP server Server olapServer = new Server();olapServer.Connect(config.olapServerName);if (olapServer != null){ // connected to server ok, so obtain reference to the OLAP databaseDatabase olapDatabase = olapServer.Databases.FindByName(config.olapDatabaseName);if (olapDatabase != null){ Console.WriteLine(string.Format("Succesfully connected to '{0}' on '{1}'",   config.olapDatabaseName,   config.olapServerName));// export SQL from each data source view (usually only one, but can be many!)foreach (DataSourceView dsv in olapDatabase.DataSourceViews){ Console.WriteLine(string.Format("Exporting SQL from DSV '{0}'", dsv.Name));// for each table in the DSV, export the SQL in a fileforeach (DataTable dt in dsv.Schema.Tables){ Console.WriteLine(string.Format("Exporting SQL from table '{0}'", dt.TableName)); // get name of the table in the DSV// use the FriendlyName as the user inputs this and therefore has control of itstring queryName = dt.ExtendedProperties["FriendlyName"].ToString().Replace(" ", "_");string sqlFilePath = Path.Combine(targetDir.FullName, queryName + ".sql"); // delete the sql file if it exists... 
file deletion code removed for clarity// write out the SQL to a fileif (dt.ExtendedProperties["TableType"].ToString() == "View"){ File.WriteAllText(sqlFilePath, dt.ExtendedProperties["QueryDefinition"].ToString());}if (dt.ExtendedProperties["TableType"].ToString() == "Table"){ File.WriteAllText(sqlFilePath, dt.ExtendedProperties["DbTableName"].ToString()); } } } Console.WriteLine(string.Format("Successfully written out SQL scripts to '{0}'", targetDir.FullName)); } }   Of course, if you are following industry best practice, you should be basing your cube on a series of views. This will mean that this utility will be of limited practical value unless of course you are inheriting a project and want to check if someone did the implementation correctly.

    Read the article

  • How to undo a changeset using tf.exe rollback

    - by Tarun Arora
    Technorati Tags: Team Foundation Server 2010,Team Foundation Utilities,TFS2010   Oh no! Did you just check in a changeset in to TFS and realized that you need to roll back the changeset because the changes were suppose to go in a different branch? Or did you just accidently merge a wrong changeset in your release branch? There are several ways to undo the damage, Manual: Yes, we all just hate this word but for the record you could manually rollback the changes. Get Specific version on the branch and chose the changeset prior to the one you checked in. After that check out all the files in the changeset and check them in. During the check in you will receive a conflict. At this point choose ‘Keep local changes’ in the conflict resolution window and check in the files. Automated: Yes, we just love it! TFS comes with a very powerful command line utility ‘tf.exe’ that gives you the ability to rollback the effects of one or more changesets to one or more version-controlled items. This command does not remove the changesets from an item's version history. Instead, this command creates in your workspace a set of pending changes that negate the effects of the changesets that you specify. Syntax tf rollback /toversion:VersionSpec ItemSpec [/recursive] [/lock:none|checkin|checkout] [/version:versionspec] [/keepmergehistory] [/login:username,[password]] [/noprompt] tf rollback /changeset:ChangesetFrom~ChangesetTo [ItemSpec] [/recursive] [/lock:none|checkin|checkout] [/version:VersionSpec] [/keepmergehistory] [/noprompt] [/login:username,[password]]   I’ll explain this with an example. Your workspace is at the location C:\myWorkspace You want to rollback changeset # 145621 C:\Workspace\MyBranch>tf.exe rollback /changeset:145621 /recursive How do i rollback/undo a series of changesets? You can also rollback a range of changesets by using the following C:\Workspace\MyBranch>tf.exe rollback /changeset:145601~145621 /recursive This will check out the files in the version control and you should be able to see them in the pending changes. Go on check them in to undo the specific changeset that you just rolled back. Do you completely want to get rid of the changeset from all future merges between the two branches? /KeepMergeHistory: This option has an effect only if one or more of the changesets that you are rolling back include a branch or merge change. Specify this option if you want future merges between the same source and the same target to exclude the changes that you are rolling back. Errors “If you get the message ‘Unable to determine the workspace.’ You may be able to correct this by running ‘tf worksapces /collection:TeamProjectCollectionUrl’” you are in the wrong directory. Make sure that you run the ‘tf rollback’ command from the directory of your workspace.   Status Exit Code Description 0 The operation rolled back all items successfully. 1 The operation rolled back at least one item successfully but could not roll back one or more items. 100 The operation could not roll back any items.   To use the command you must have the Read, Check Out, and Check In permissions set to Allow. So, have you been in a rollback undo situation before?   Share this post :
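
    Because the exit codes are documented above, the rollback is easy to drive from a script, for instance as part of a clean-up job. The following is a hedged sketch only: it assumes tf.exe is on the PATH and is run from inside the workspace directory, exactly as the error note above warns.

    ```python
    import subprocess

    # Meaning of the tf.exe rollback exit codes documented above.
    EXIT_CODES = {
        0: "rolled back all items successfully",
        1: "rolled back at least one item, but not all",
        100: "could not roll back any items",
    }

    def rollback_changeset(changeset: int, workspace_dir: str) -> int:
        cmd = ["tf.exe", "rollback", f"/changeset:{changeset}", "/recursive"]
        # Run from inside the workspace directory, otherwise tf.exe cannot
        # determine the workspace (the error discussed above).
        result = subprocess.run(cmd, cwd=workspace_dir)
        print(EXIT_CODES.get(result.returncode,
                             f"unexpected exit code {result.returncode}"))
        return result.returncode

    if __name__ == "__main__":
        rollback_changeset(145621, r"C:\Workspace\MyBranch")
    ```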

    Read the article

  • Open a File Browser From Your Current Command Prompt/Terminal Directory

    - by The Geek
    Ever been doing some work at the command line when you realized… it would be a lot easier if I could just use the mouse for this task? One command later, you’ll have a window open to the same place that you’re at. This same tip works in more than one operating system, so we’ll detail how to do it in every way we know how. Open a File Browser in Windows We’ve actually covered this before when we told you how to open an Explorer window from the command prompt’s current directory, but we’ll briefly review: Just type the following command into your command prompt: explorer . Note: You could actually just type “start .” instead. And you’ll then see a file browsing window set to the same directory you were previously at. And yes, this screenshot is from Vista, but it works the same in every version of Windows. If that wasn’t good enough, you should really read how you can navigate in the File Open/Save dialogs with just the keyboard—now that’s a Stupid Geek Trick! Open a File Browser in Linux For this exercise, we’re going to assume that you’re using Gnome under a Linux flavor like Ubuntu, because that’s the most common. From your terminal window, just type in the following command: nautilus . And the next thing you know, you’ll have a file browser window open at the current location. You’ll see some type of error message at the prompt, but you can pretty much ignore that. You can also use “gnome-open .” if you want. Open Finder in Mac OS X All the Mac computers in this office are running Linux, so we haven’t had a chance to verify, but you should be able to use the following command on OS X to open Finder in the current terminal location: open . Open Dolphin on Linux KDE4 dolphin . Got any extra tips to help out your fellow readers? How do you do the same thing in KDE3? What about OS X? Leave your savvy advice in the comments, and maybe we’ll update the article. Or not. Either way, it’ll help somebody!
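
    If you move between machines a lot, the same trick can be wrapped once and reused. Here is a small sketch that simply shells out to the commands listed above (explorer on Windows, open on OS X, then nautilus or dolphin on Linux); it assumes the relevant file manager is installed.

    ```python
    import platform
    import subprocess

    def open_file_browser(path="."):
        """Open the platform's file browser at the given directory."""
        system = platform.system()
        if system == "Windows":
            subprocess.Popen(["explorer", path])
        elif system == "Darwin":                    # Mac OS X
            subprocess.Popen(["open", path])
        else:                                       # assume a Linux desktop
            for cmd in ("nautilus", "dolphin"):     # GNOME first, then KDE4
                try:
                    subprocess.Popen([cmd, path])
                    break
                except FileNotFoundError:
                    continue

    if __name__ == "__main__":
        open_file_browser(".")
    ```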

    Read the article

  • Part 6: Extensions vs. Modifications

    - by volker.eckardt(at)oracle.com
    Customizations = Extensions + Modifications In the EBS terminology, a customization can be an extension or a modification. Extension means that you mainly create your own code from scratch. You may utilize existing views, packages and java classes, but your code is unique. Modifications are quite different, because here you take existing code and change or enhance certain areas to achieve a slightly different behavior. Important is that it doesn't matter if you place your code at the same or at another place – it is a modification. It is also not relevant if you leave the original code enabled or not! Why? Here is the answer: In case the original code piece you have taken as your base will get patched, you need to copy the source again and apply all your changes once more. If you don't do that, you may get different results or write different data compared to the standard – this causes a high risk! Here are some guidelines how to reduce the risk: Invest a bit longer when searching for objects to select data from. Rather choose a view than a table. In case Oracle development changes the underlying tables, the view will be more stable and is therefore a better choice. Choose rather public APIs over internal APIs. Same background as before: although internal structure might change, the public API is more stable. Use personalization and substitution rather than modification. Spend more time to check if the requirement can be covered with such techniques. Build a project code library, avoid that colleagues creating similar functionality multiple times. Otherwise you have to review lots of similar code to determine the need for correction. Use the technique of “flagged files”. Flagged files is a way to mark a standard deployment file. If you run the patch analyse (within Application Manager), the analyse result will list flagged standard files in case they will be patched. If you maintain a cross reference to your own CEMLIs, you can easily determine which CEMLIs have to be reviewed. Implement a code review process. This can be done by utilizing team internal or external persons. If you implement such a team internal process, your team members will come up with suggestions how to improve the code quality by themselves. Review heavy customizations regularly, to identify options to reduce complexity; let's say perform this every 6th month. You may not spend days for such a review, but a high level cross check if the customization can be reduced is suggested. De-install customizations which are no more required. Define a process for this. Add a section into the technical documentation how to uninstall and what are possible implications. Maintain a cross reference between CEMLIs and between CEMLIs, EBS modules and business processes. Keep this list up to date! Share this list! By following these guidelines, you are able to improve product stability. Although we might not be able to avoid modifications completely, we can give a much better advise to developers and to our test team. Summary: Extensions and Modifications have to be handled differently during their lifecycle. Modifications implicate a much higher risk and should therefore be reviewed more frequently. Good cross references allow you to give clear advise for the testing activities.

    Read the article

  • The gestures of Windows 8 (Consumer preview): part 2, More about Search

    - by Laurent Bugnion
    This is part 2 of a multipart blog post about the gestures and shortcuts in Windows 8 consumer preview. Part 1 can be found here! More about the Search charm In the first installment of this series, we talked about the charms and mentioned a few gestures to display the Search charm. Search is a very central and powerful feature in Windows 8, and allows you to search in Apps, Settings, Files and within Metro applications that support the Search contract. There are a few cool features around the Search, and especially the applications associated to it. I already mentioned the keyboard shortcuts you can use: Win-C shows the Charms bar (same as swiping from the right bevel towards the center of the screen). Win-Q open the Search fly out with Apps preselected. Win-W open the Search fly out with Settings preselected. Win-F open the Search fly out with Files preselected. Searching in Metro apps In addition to these three search domains, you can also search a Metro app, as long as it supports the Search contract (check this Build video to learn more about the Search contract). These apps show up in the Search flyout as shown here: Notice the list of apps below the Files button? That’s what we are talking about. First of all, the list order changes when you search in some applications. For instance, in the image above, I had used the Store with the Search charm. This is why the store shows up as the first app. I am not 100% what algorithm is used here (sorting according to number of searches is my guess), but try it out and try to figure it out Applications that have never been searched are sorted alphabetically. Does it mean we will see cool app names like ___AAA_MyCoolApp? I certainly hope not!! Pinning You can also pin often used apps to the Search flyout. To pin an app with the mouse, right click on it in the Search flyout and select Pin from the context menu. With the keyboard, use the arrow keys to go down to the selected app, and then open the context menu. With the finger, simply tap and hold until you see a semi transparent rectangle indicating that the context menu will be shown, then release. The context menu opens up and you can select Pin. Pin context menu Pinned apps Unpinning, Hiding Using the same technique as for pinning here above, you can also unpin a pinned application. Finally, you can also choose to hide an app from the Search flyout altogether. This is a convenient way to clean up and make it easy to find stuff. Note: At this point, I am not sure how to re-add a hidden app to the Search flyout. If anyone knows, please mention it in the comments, thanks! Reordering You can also reorder pinned apps. To do this, with the finger, tap, hold and pull the app to the side, then pull it vertically to reorder it. You can also reorder with the mouse, simply by clicking on an app and pulling it vertically to the place you want to put it. I don’t think there is a way to do that with the keyboard though. That’s it for now More gestures will follow in a next installment! Have fun with Windows 8   Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn

    Read the article


  • ODEE Green Field (Windows) Part 3 - SOA Suite

    - by AndyL-Oracle
    So you're still here, are you? I'm sure you're probably overjoyed at the prospect of continuing with our green field installation of ODEE. In my previous post, I covered the installation of WebLogic - you probably noticed, like I did, that it's a pretty quick install. I'm pretty certain this had everything to do with how quickly the next post made it to the internet! So let's dig in. Make sure you've followed the steps from the initial post to obtain the necessary software and prerequisites!

    1. Unpack the RCU (Repository Creation Utility). This ZIP file contains a directory (rcuHome) that should be extracted into your ORACLE_HOME.
    2. Run the RCU - execute rcuHome/bin/rcu.bat. Click Next.
    3. Select Create and click Next.
    4. Enter the database connection details and click Next - any connection failure will show in the Messages box. Click OK.
    5. Expand and select the SOA Infrastructure item. This will automatically select additional required components. You can change the prefix used, but DEV is recommended. If you are creating a sandbox that includes additional components like WebCenter Content and UMS, you may select those schemas as well, but they are not required for a basic ODEE installation. Click Next. Click OK.
    6. Specify the password for the schema(s), then click Next. Click Next. Click OK. Click OK.
    7. Click Create. Click Close.
    8. Unpack the SOA Suite installation files into a single directory, e.g. SOA.
    9. Run the installer - navigate to and execute SOA/Disk1/setup.exe. If you receive a JDK error, start the installer from a command line instead: open Start > Run > cmd, cd into the SOA\Disk1 directory, and run setup.exe -jreLoc <path to JRE>. Ensure you do not use a path with spaces - use the ~1 short-name notation as necessary (path elements longer than 8 characters are shortened, so "Program Files" becomes "Progra~1" and "Program Files (x86)" becomes "Progra~2" in this notation); see the command-line sketch at the end of this post.
    10. Click Next.
    11. Select Skip and click Next.
    12. Resolve any issues shown and click Next.
    13. Verify your Oracle home locations. Defaults are recommended. Click Next.
    14. Select your application server. If you've already installed WebLogic, this should be automatically selected for you. Click Next.
    15. Click Install. Allow the installation to progress... Click Next. Click Finish. You can save the installation details if you want.

    That should keep you satisfied for the moment. Get ready, because the next posts are going to be meaty!
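    As a concrete illustration of the short-name workaround from step 9, here is a hedged example of launching the installer from cmd.exe; the install directory and JDK path are purely illustrative and will differ on your machine.

    REM Illustrative only - adjust the drive letter, install directory and JDK version to your system.
    cd /d C:\install\SOA\Disk1
    setup.exe -jreLoc C:\Progra~1\Java\jdk1.6.0_35\jre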

    Read the article

  • Library order is important

    - by Darryl Gove
    I've written quite extensively about link ordering issues, but I've not discussed the interaction between archive libraries and shared libraries. So let's take a simple program that calls a maths library function:

    #include <math.h>
    int main()
    {
      for (int i=0; i<10000000; i++)
      {
        sin(i);
      }
    }

    We compile and run it to get the following performance:

    bash-3.2$ cc -g -O fp.c -lm
    bash-3.2$ timex ./a.out
    real        6.06
    user        6.04
    sys         0.01

    Now most people will have heard of the optimised maths library which is added by the flag -xlibmopt. This contains optimised versions of key mathematical functions, and in this instance using the library doubles performance:

    bash-3.2$ cc -g -O -xlibmopt fp.c -lm
    bash-3.2$ timex ./a.out
    real        2.70
    user        2.69
    sys         0.00

    The optimised maths library is provided as an archive library (libmopt.a), and the driver adds it to the link line just before the maths library - this causes the linker to pick the definitions provided by the static library in preference to those provided by libm. We can see the processing by asking the compiler to print out the link line:

    bash-3.2$ cc -### -g -O -xlibmopt fp.c -lm
    /usr/ccs/bin/ld ... fp.o -lmopt -lm -o a.out...

    The flag to the linker is -lmopt, and this is placed before the -lm flag. So what happens when the -lm flag is in the wrong place on the command line?

    bash-3.2$ cc -g -O -xlibmopt -lm fp.c
    bash-3.2$ timex ./a.out
    real        6.02
    user        6.01
    sys         0.01

    If the -lm flag is before the source file (or object file for that matter), we get the slower performance from the system maths library. Why's that? If we look at the link line we can see the following ordering:

    /usr/ccs/bin/ld ... -lmopt -lm fp.o -o a.out

    So the optimised maths library is still placed before the system maths library, but the object file is placed afterwards. This would be OK if the optimised maths library were a shared library, but it is not - instead it's an archive library, and archive library processing is different, as described in the linker and library guide: "The link-editor searches an archive only to resolve undefined or tentative external references that have previously been encountered." An archive library can only be used to resolve symbols that are outstanding at that point in the link processing. When fp.o is placed before the libmopt.a archive library, the linker has an unresolved symbol from fp.o (the reference to sin), and it will search the archive library to resolve that symbol. If the archive library is placed before fp.o, there are no unresolved symbols at that point, and so the linker doesn't need to use the archive library. This is why libmopt needs to be placed after the object files on the link line. On the other hand, once the linker has observed any shared libraries, these are checked for unresolved symbols at every point from then on. The consequence is that once the linker "sees" libm it will resolve any symbols it can to that library, and it will not check the archive library to resolve them. This is why libmopt needs to be placed before libm on the link line. This leads to the following order for placing files on the link line:

    1. Object files
    2. Archive libraries
    3. Shared libraries

    If you use this order, then things will consistently get resolved to the archive libraries rather than to the shared libraries.

    Read the article

  • Essbase BSO Data Fragmentation

    - by Ann Donahue
    Essbase BSO Data Fragmentation

    Data fragmentation naturally occurs in Essbase Block Storage (BSO) databases where there are a lot of end-user data updates, incremental data loads, many lock-and-send operations, and/or many calculations executed. If an Essbase database starts to experience performance slow-downs, this is an indication that there may be too much fragmentation. See Chapter 54, Improving Essbase Performance, in the Essbase DBA Guide for more details on measuring and eliminating fragmentation: http://docs.oracle.com/cd/E17236_01/epm.1112/esb_dbag/daprcset.html

    Fragmentation is likely to occur in the following situations:
    - Read/write databases where users are constantly updating data
    - Databases that execute calculations around the clock
    - Databases that frequently update and recalculate dense members
    - Data loads that are poorly designed
    - Databases that contain a significant number of Dynamic Calc and Store members
    - Databases that use an isolation level of uncommitted access with commit block set to zero

    There are two types of data block fragmentation:
    - Free space tracking, which is measured using the Average Fragmentation Quotient statistic.
    - Block order on disk, which is measured using the Average Cluster Ratio statistic.

    Average Fragmentation Quotient

    The Average Fragmentation Quotient ratio measures free space in a given database. As you update and calculate data, empty spaces occur when a block can no longer fit in its original space and will either be appended at the end of the file or fit into another empty space that is large enough. These empty spaces take up space in the .PAG files. The higher the number, the more empty spaces you have; therefore, the bigger the .PAG file and the longer it takes to traverse the .PAG file to get to a particular record. An Average Fragmentation Quotient value of 3.174765 means the database is 3% fragmented with free space.

    Average Cluster Ratio

    The Average Cluster Ratio describes the order in which the blocks actually exist in the database. An Average Cluster Ratio of 1 means all the blocks are ordered in the correct sequence, in the order of the outline. As you load and calculate data blocks, the sequence can start to get out of order. This is because when you write to a block it may not be possible to place it back in the exact same spot in the database where it existed before. The lower this number, the more out of order the blocks become and the more it affects performance. An Average Cluster Ratio value of 1 means no fragmentation; any value lower than 1 (e.g. 0.01032828) means the data blocks are getting further out of order relative to the outline order.

    Eliminating Data Block Fragmentation

    Both types of data block fragmentation can be removed by doing a dense restructure or an export/clear/import of the data. There are two types of dense restructure:

    1. Implicit restructures. An implicit dense restructure happens when outline changes are made using the EAS Outline Editor or Dimension Build. Essbase creates new .PAG files, restructuring the data blocks in them. When Essbase restructures the data blocks, it regenerates the index automatically so that index entries point to the new data blocks. Empty blocks are NOT removed by implicit restructures.

    2. Explicit restructures. An explicit dense restructure happens when a database restructure is initiated manually. An explicit dense restructure is a full restructure, which comprises a dense restructure as outlined above plus the removal of empty blocks.

    Empty Blocks vs. Fragmentation

    The existence of empty blocks is not considered fragmentation. Empty blocks can be created through calc scripts or formulas. An empty block adds to the existing database block count and is included in the block counts of the database properties. There are no statistics for empty blocks. The only way to determine if empty blocks exist in an Essbase database is to record your current block count, export the entire database, clear the database, then import the exported data. If the block count decreased, the difference is the number of empty blocks that had existed in the database.

    Read the article

  • Advice on refactoring PHP Project

    - by b0x
    I have a small SaaS ERP that was written some years ago using PHP. At that time it didn't use any framework, but the code isn't a mess. Nowadays the project is growing and I'm now working with 3 more programmers. Often they ask me why we don't migrate to a framework such as Laravel. Although I'd love to try Laravel, I run a small business and I don't have the time or money to stop and spend a whole year rebuilding everything from scratch. I need to live and pay the bills. So I've read a lot about this matter, and I decided that refactoring is the best way to do it. Also, I'm not so sure that a framework will make things easier.

    Business goals are:
    - Make the code easier for newly hired programmers
    - Separate the "view", in order to: release different versions of this product (using the same code) under different brands and websites at minimum cost (just changing the view); and release different versions to fit mobile/tablet
    - Make different types of this product, selling packages as if they were plugins
    - Develop custom packages for some customers (like plugins/add-ons that they can buy to put on the main application)

    Code goals:
    - Introduce best practices and standards for everyone
    - Try to build my own MVC structure
    - Improve validation of data/forms (today it is mixed in both ajax and classes)
    - Create automated testing routines for quality assurance

    My current project structure:
    class\
    extra\
    hd\
    logs\
    public_html\
    public_html\includes\
    public_html\css|js|images\

    class\
    There are three types of classes. They are all "autoloaded" with something similar to PSR-0, but I don't use namespaces.
    1. class.Something.php - Connects to the database using specific methods, e.g. Costumer->list(). It uses "class.Db.php", which is an abstraction over MySQL used in every method.
    2. class.SomethingProc.php - Does things that "join" results coming from "class.Something.php": if/else logic, math operations.
    3. class.SomethingHTML.php - The classes with the "HTML" suffix implement only static methods and contain HTML code only.

    A real-life example: all the programmers need to use $cSomething ($c for class) and $arrSomething (for arrays).

    Costumer.php (view)
    <?php
    $cCostumer = new Costumer();
    $arrCostumer = $cCostumer->list();
    echo CostumerHTML::table($arrCostumer);
    ?>

    extra\ - Stores third-party projects/classes from others, such as mPDF, PHPMailer, etc.
    hd\ - Stores users' files outside the wwwroot dir.
    logs\ - Stores PHP logs and the system's own logs (we have a static Log::error() method that we call in every method of every class).
    public_html\ - Stores the files that people use.
    public_html\includes\ - Stores the main "config.php" file and all files that do "ajax things", e.g. ajax.Costumer.php.

    Help is needed ;) So, as you can see, we have some standards, also for database things. But I want to write a manual of our rules - something that I can give to any new programmer at my company so he can get going. This is not a total mess, but it could be better in light of newer practices. What could I do to separate this into MVC, to have multiple views? Could you give me some tips considering my goals? Keep in mind the different products/custom things for specific customers, without breaking the main application. URLs for tutorials, books, etc. would be nice.
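    Since the classes are already autoloaded with a PSR-0-like convention, one incremental step toward the stated code goals is to adopt namespaces plus a standard autoloader without rewriting class internals. Below is a minimal sketch of a PSR-4-style autoloader; the App\ prefix and src/ directory are illustrative assumptions, not part of the existing project.

    <?php
    // Minimal PSR-4-style autoloader sketch (illustrative names: "App\" prefix, src/ dir).
    spl_autoload_register(function ($class) {
        $prefix  = 'App\\';
        $baseDir = __DIR__ . '/src/';

        // Only handle classes under our namespace prefix.
        if (strncmp($prefix, $class, strlen($prefix)) !== 0) {
            return;
        }

        // App\Model\Costumer -> src/Model/Costumer.php
        $relative = substr($class, strlen($prefix));
        $file     = $baseDir . str_replace('\\', '/', $relative) . '.php';

        if (is_file($file)) {
            require $file;
        }
    });

    // Usage (illustrative): $cCostumer = new \App\Model\Costumer();
    // The old class.Costumer.php could be moved to src/Model/Costumer.php one class at a time,
    // which keeps the existing code running while the "view" classes are split out.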

    Read the article

  • I am not able to delete a corrupt NTFS partition on my pen drive. How can I force its deletion?

    - by yesuraj
    I formatted my 16GB pen drive with the NTFS file system in Windows Vista. After that I started copying some files. However, only a few files were copied to the pen drive before the copy operation hung, so I cancelled it. Now I am unable to use the pen drive. I DON'T REALLY NEED ANY FILES THAT I COPIED TO THE PENDRIVE. I JUST WANT TO USE THE PENDRIVE AGAIN.

    I have tried using Ubuntu to format the pen drive. But when I use fdisk to delete the partition, it looks like it is working fine, but in fact it does not delete the partition. I am also unable to format it with any other file system. When I tried to use GParted, it threw the following error:

    Error mounting: mount exited with exit code 14: The disk contains an unclean file system (0,0). The file system wasn't safely closed on Windows. Fixing ntfs_attr_pread_i: ntfs_pread failed: Input/output error Failed to read NTFS $Bitmap: Input/output error NTFS is either inconsistent, or there is a hardware fault, or it's a SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows then reboot into Windows twice. The usage of the /f parameter is very important! If the device is a SoftRAID/FakeRAID then first activate it and mount a different device under the /dev/mapper directory, (e.g. /dev/mapper/nvidia_eahaabcc1). Please see the dmraid documentation for more details.

    When I searched the Internet I found help on how to recover. But I don't want to recover, I want to format it again. When I pressed w after deleting the partition, it took more time than previously. After that I removed the pen drive and re-inserted it, but the partition I had deleted was still present. If I simply type the command fdisk /dev/sdb without removing the pen drive after the partition is deleted, it returns the error message "Unable to open /dev/sdb". Here are the steps that I followed:

    root@yesuraj-ubuntu:~# fdisk /dev/sdb
    Command (m for help): d
    Selected partition 1
    Command (m for help): w
    The partition table has been altered!
    Calling ioctl() to re-read partition table.
    Syncing disks

    The dmesg output is as follows:

    [ 6139.774753] usb 2-1.3: reset high speed USB device number 4 using ehci_hcd
    [ 6154.816941] usb 2-1.3: device descriptor read/64, error -110
    [ 6169.968908] usb 2-1.3: device descriptor read/64, error -110
    [ 6170.158427] usb 2-1.3: reset high speed USB device number 4 using ehci_hcd
    [ 6185.200638] usb 2-1.3: device descriptor read/64, error -110
    [ 6200.352572] usb 2-1.3: device descriptor read/64, error -110
    [ 6200.542093] usb 2-1.3: reset high speed USB device number 4 using ehci_hcd
    [ 6205.559460] usb 2-1.3: device descriptor read/8, error -110

    I used the dd command and it erased the partition table. But now when I connect the pen drive, dmesg contains this error message: [88143.437001] sdb: unknown partition table. I am not able to create a partition using fdisk /dev/sdb; the error message says that it is unable to find the node. Other messages from dmesg follow below.

    [87100.531596] usb 2-1.3: new high speed USB device number 39 using ehci_hcd
    [87130.915257] usb 2-1.3: new high speed USB device number 40 using ehci_hcd
    [87135.932647] usb 2-1.3: device descriptor read/8, error -110
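    For anyone hitting the same symptom: assuming the stick is not physically failing (the repeated "device descriptor read ... error -110" lines can also point to a dying drive, cable or USB port), a typical sequence to wipe the stale partition table and reformat is sketched below. /dev/sdX is a placeholder and must be double-checked with lsblk or dmesg before running anything destructive.

    # Sketch only - replace /dev/sdX with the pen drive's device and verify it first!
    sudo umount /dev/sdX1 2>/dev/null                # make sure nothing is mounted
    sudo dd if=/dev/zero of=/dev/sdX bs=1M count=8   # zero out the partition table area
    sudo partprobe /dev/sdX                          # ask the kernel to re-read the (now empty) table
    sudo fdisk /dev/sdX                              # create a new primary partition: n, p, 1, defaults, w
    sudo mkfs.vfat -F 32 -n PENDRIVE /dev/sdX1       # or mkfs.ntfs -f /dev/sdX1 for NTFS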

    Read the article

  • Where to place web.xml outside WAR file for secure redirect?

    - by Silverhalide
    I am running Tomcat 7 and am deploying a bunch of applications delivered to me by a third party as WAR files. I'd like to force some of those apps to always use SSL. (All the "SSL" apps are in one service; other apps outside this discussion are in another service.) I've figured out how to use conf\web.xml to redirect apps from HTTP to HTTPS, but that applies to all applications hosted by Tomcat. I've also figured out how to put web.xml in an unpacked app's WEB-INF directory; that does the trick for that specific app, but runs the risk of being overwritten if our vendor gives us a new WAR file to deploy. I've also tried placing the web.xml file in various places under conf\service\host, or under appBase, but none seem to work. Is it possible to redirect some apps to SSL without forcing all apps to redirect, or to put the web.xml file inside the extracted WAR file? Here's my server.xml:

    <Service name="secure">
      <Connector port="80" connectionTimeout="20000" redirectPort="443"
                 URIEncoding="UTF-8" enableLookups="false" compression="on"
                 protocol="org.apache.coyote.http11.Http11Protocol"
                 compressableMimeType="text/html,text/xml,text/plain,text/javascript,application/json,text/css"/>
      <Connector port="443" URIEncoding="UTF-8" enableLookups="false" compression="on"
                 protocol="org.apache.coyote.http11.Http11Protocol"
                 compressableMimeType="text/html,text/xml,text/plain,text/javascript,application/json,text/css"
                 scheme="https" secure="true" SSLEnabled="true" sslProtocol="TLS"
                 keystoreFile="..." keystorePass="..." keystoreType="PKCS12"
                 truststoreFile="..." truststorePass="..." truststoreType="JKS"
                 clientAuth="false"
                 ciphers="SSL_RSA_WITH_RC4_128_MD5,SSL_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_DSS_WITH_AES_128_CBC_SHA,SSL_RSA_WITH_AES_128_CBC_SHA"/>
      <Engine name="secure" defaultHost="localhost">
        <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/>
        <Host name="localhost" appBase="webapps" unpackWARs="false" autoDeploy="true"
              xmlValidation="false" xmlNamespaceAware="false">
        </Host>
      </Engine>
    </Service>
    <Service name="mutual-secure">
      ...
    </Service>

    The content of the web.xml files I'm playing with is:

    <web-app xmlns="http://java.sun.com/xml/ns/javaee"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
             version="3.0" metadata-complete="true">
      <security-constraint>
        <web-resource-collection>
          <web-resource-name>All applications</web-resource-name>
          <url-pattern>/*</url-pattern>
        </web-resource-collection>
        <user-data-constraint>
          <description>Redirect all requests to HTTPS</description>
          <transport-guarantee>CONFIDENTIAL</transport-guarantee>
        </user-data-constraint>
      </security-constraint>
    </web-app>

    (For conf\web.xml the security-constraint is added just before the end of the existing file, rather than creating a new file.) My webapps directory (currently) contains only the WAR files.

    Read the article

  • Eclipse Hell . . . Failed to read the project description file (.project)

    - by Kobojunkie
    I think Eclipse is trying to make me miserable. A couple of hours ago, my project was working and compiling well. Suddenly that all changed. Eclipse somehow wipes out all the changes I have made to my files (activity, manifest, etc.). I make sure to save often, but when I go to run the project, I get an error saying I have a build error. I checked and there was none, so I closed Eclipse to reopen it and see if the errors would go away. Instead, Eclipse wipes all my files clean and I end up with a project on disk full of blank code files. I try to run anyway, and I get the error message below.

    Failed to read the project description file (.project) for 'com.example.android.nfc.simulator.FakeTagsActivity.FakeTagsActivity'. The file has been changed on disk, and it now contains invalid information. The project will not function properly until the description file is restored to a valid state.

    Does anyone have an idea what in the world this is about and how I can rectify it?

    Read the article

  • using ILMerge with .NET 4 libraries

    - by Sarah Vessels
    I'm having trouble using ILMerge in my post-build step after upgrading from .NET 3.5/Visual Studio 2008 to .NET 4/Visual Studio 2010. I have a solution with several projects whose target framework is set to ".NET Framework 4". I use the following ILMerge command to merge the individual project DLLs into a single DLL:

    if not $(ConfigurationName) == Debug if exist "C:\Program Files (x86)\Microsoft\ILMerge\ILMerge.exe" "C:\Program Files (x86)\Microsoft\ILMerge\ILMerge.exe" /lib:"C:\Windows\Microsoft.NET\Framework64\v4.0.30319" /lib:"C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PublicAssemblies" /keyfile:"$(SolutionDir)$(SolutionName).snk" /targetplatform:v4 /out:"$(SolutionDir)bin\development\$(SolutionName).dll" "$(SolutionDir)Connection\$(OutDir)Connection.dll" ...other project DLLs... /xmldocs

    If I leave off specifying the location of the .NET 4 framework directory, I get an "Unresolved assembly reference not allowed: System" error from ILMerge. If I leave off specifying the location of the MSTest directory, I get an "Unresolved assembly reference not allowed: Microsoft.VisualStudio.QualityTools.UnitTestFramework" error. The ILMerge command above works and produces a DLL. When I reference that DLL in another .NET 4 C# project, however, and try to use code within it, I get the following warning:

    The primary reference "MyILMergedDLL" could not be resolved because it has an indirect dependency on the .NET Framework assembly "mscorlib, Version=4.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" which has a higher version "4.0.65535.65535" than the version "4.0.0.0" in the current target framework.

    If I then remove the /targetplatform:v4 flag and try to use MyILMergedDLL.dll, I get the following error:

    The type 'System.Xml.Serialization.IXmlSerializable' is defined in an assembly that is not referenced. You must add a reference to assembly 'System.Xml, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089'.

    It doesn't seem like I should have to do that. Whoever uses my MyILMergedDLL.dll API should not have to add references to whatever libraries it references. How can I get around this?
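    A commonly suggested workaround for the 4.0.65535.65535 retargeting symptom (a sketch, not verified against this exact solution) is to pass the v4 framework directory as part of /targetplatform, so ILMerge resolves mscorlib from that directory instead of guessing; the separate framework /lib switch then becomes unnecessary. The paths are taken from the command above and may differ on your machine.

    if not $(ConfigurationName) == Debug if exist "C:\Program Files (x86)\Microsoft\ILMerge\ILMerge.exe" "C:\Program Files (x86)\Microsoft\ILMerge\ILMerge.exe" /targetplatform:"v4,C:\Windows\Microsoft.NET\Framework64\v4.0.30319" /lib:"C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PublicAssemblies" /keyfile:"$(SolutionDir)$(SolutionName).snk" /out:"$(SolutionDir)bin\development\$(SolutionName).dll" "$(SolutionDir)Connection\$(OutDir)Connection.dll" ...other project DLLs... /xmldocs

    An alternative sometimes mentioned is adding an ILMerge.exe.config next to ILMerge.exe that declares a v4.0 supportedRuntime, but the /targetplatform form above is the more direct fix for the mscorlib version mismatch.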

    Read the article

  • Building a VS2010 solution from TFS2008

    - by slugster
    I have a TFS 2008 build agent that has been used to build .NET 3.5 applications. I now have a .NET 4.0 app which I want to compile on the same build agent. I have ensured that MSBuild 4.0 is installed on there and all the required componentry is also installed, but I am getting the following MSB4062 error when building:

    [Any CPU/Release] C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.targets(244,5): error MSB4062: The "Microsoft.WebApplication.Build.Tasks.GetSilverlightItemsFromProperty" task could not be loaded from the assembly C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.Build.Tasks.dll. Could not load file or assembly 'file:///C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications\Microsoft.WebApplication.Build.Tasks.dll' or one of its dependencies. This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded. Confirm that the declaration is correct, and that the assembly and all its dependencies are available.

    I presume I get this because TFSBuild.proj gets executed by MSBuild 3.5, which in turn means my solution is compiled with MSBuild 3.5. Am I correct in my diagnosis? Is there any way to ensure that TFS 2008 uses MSBuild 4.0 for my solution? Can it be done on a single team project so that it doesn't affect any other team projects being built on the same build agent? Note that I have checked the question "Build failing - VS2010 solution on TFS2008" and this is not a duplicate. Thanks :)
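    One approach that was commonly documented for this scenario (a sketch, not verified here): point the build agent's MSBuildPath setting at the .NET 4.0 MSBuild directory in TFSBuildService.exe.config and restart the Team Foundation Build service. Note this is an agent-wide setting, so it affects every build that agent runs rather than a single team project; the file location and framework path below are the usual defaults and may differ on your machine.

    <!-- %ProgramFiles%\Microsoft Visual Studio 9.0\Common7\IDE\PrivateAssemblies\TFSBuildService.exe.config -->
    <!-- Key name as used by the TFS 2008 build service; verify it exists in your copy of the file. -->
    <appSettings>
      ...
      <add key="MSBuildPath" value="C:\Windows\Microsoft.NET\Framework\v4.0.30319\" />
      ...
    </appSettings>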

    Read the article
