Search Results

Search found 7914 results on 317 pages for 'valid xhtml'.


  • Responsible BI for Excel, Even for Older Versions

    - by andrewbrust
    On Wednesday, I will have the honor of co-presenting, for both The Data Warehouse Institute (TDWI) and the New York Technology Council, on the subject of Excel and BI. My co-presenter will be none other than Bill Baker, who was a Microsoft Distinguished Engineer and, essentially, the father of BI at that company.  Details on the events are here and here. We'll be talking about PowerPivot, of course, but that's not all. Probably even more important than any one product will be our discussion of whether the usual characterization of Excel as the nemesis of IT, the guilty pleasure of business users and the antithesis of formal BI is really valid and/or hopelessly intractable. Without giving away our punchline, I'll tell you that we are much more optimistic than that. There are huge upsides to Excel, and while there are real dangers to using it in the BI space, there are standards and practices you can employ to ensure Excel is used responsibly. And when those practices are followed, Excel becomes quite powerful indeed.

    One of the keys to this is using Excel as a data consumer rather than a data storage mechanism. Caching data in Excel is OK, but only if that data is (a) not modified and (b) configured for automated periodic refresh. PowerPivot meets both criteria -- it stores a read-only copy of your data in the form of a model, and once a workbook containing a PowerPivot model is published to SharePoint, it can be configured for scheduled data refresh, on the server, requiring no user intervention whatsoever. Data refresh is a bit like hard drive backup: it will only happen reliably if it's automated, and super-easy to configure. PowerPivot hits a real home run here (as does Windows Home Server for PC backup, but I digress).

    The thing about PowerPivot is that it's an add-in for Excel 2010. What if you're not planning to go to that new version for quite a while? What if you’ve just deployed Office 2007 in your organization? What if you're still on Office 2003, or an even earlier version? What can you do immediately to share data responsibly and easily? As it turns out, there's a feature in Excel that's been around for quite a while that can help: Web Queries.  The Web Query feature was introduced, ostensibly, to allow Excel to pull data in from Internet Web pages…for example, data in a stock quote history table will come in nicely, as will any data in a Web page that is displayed in an HTML table.

    To use the feature in Excel 2007 or 2010, click the Data tab of the ribbon and click the “From Web” button towards the left; in older versions use the corresponding option in the menu or toolbars.  Next, paste a URL into the resulting dialog box and tap Enter or click the Go button.  A preview of the Web page will come up, and the dialog will allow you to select the specific table within the page whose data you’d like to import.  Here’s an example: Now just click the table, click the Import button, and the Import Data dialog appears.  You can simply click OK to bring in your data or you can first click the Properties… button and configure the data import to be refreshed at an interval in minutes that you select.  Now your data’s in the spreadsheet and ready to be worked with: Your data may be vulnerable to modification, but if you’ve set up the data refresh, any accidental or malicious changes will be corrected in time anyway. The thing about this feature is that it’s most useful not for public Web pages, but for pages behind the firewall.  
In effect, the Web Query feature provides an incredibly easy way to consume data in Excel that’s “published” from an application.  Users just need a URL.  They don’t need to know server and database names and since the data is read-only, providing credentials may be unnecessary, or can be handled using integrated security.  If that’s not good enough, the Web Query can be saved to a special .iqy file, which can be edited to provide POST parameter data. The only requirement is that the data must be provided in an HTML table, with the first row providing the column names.  From an ASP.NET project, it couldn’t be easier: a simple bound GridView control is totally compatible.  Use a data source control with it, and you don’t even have to write any code.  Users can link to pages that are part of an application’s UI, or developers can create pages that are specially designed for the purpose of providing an interface to the Web Query import feature.  And none of this is Microsoft- or .NET-specific.  You can create pages in any language you want (PHP comes to mind) that output the result set of a query in HTML table format, and then consume that data in a Web Query.  Then build PivotTables and charts on the data, and in Excel 2007 or 2010 you can use conditional formatting to create scorecards and dashboards. This strategy allows you to create pages that function quite similarly to the OData XML feeds rendered when .NET developers create an “Astoria” WCF Data Service.  And while it’s cool that PowerPivot and Excel 2010 can import such OData feeds, it’s good to know that older versions of Excel can function in a similar fashion, and can consume data produced by virtually any Web development platform. As a final matter, instead of just telling you that “older versions” of Excel support this feature, I’ll be more specific.  To discover what the first version of Excel was to support Web queries, go to http://bit.ly/OldSchoolXL.
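    As a rough illustration of the kind of page a Web Query can consume, here is a minimal PHP sketch (the script name, table, columns and connection details are all hypothetical) that renders a query result as a plain HTML table, with the column names in the first row:

        <?php
        // export.php -- hypothetical endpoint rendering a result set as an HTML table
        // so that Excel's Web Query feature can import it.
        $conn = mysqli_connect('localhost', 'report_user', 'secret', 'sales'); // assumed credentials
        $result = mysqli_query($conn, 'SELECT region, units, revenue FROM monthly_sales');

        echo '<table border="1">';
        echo '<tr><th>Region</th><th>Units</th><th>Revenue</th></tr>'; // first row = column names
        while ($row = mysqli_fetch_assoc($result)) {
            echo '<tr><td>' . htmlspecialchars($row['region']) . '</td>'
               . '<td>' . (int) $row['units'] . '</td>'
               . '<td>' . htmlspecialchars($row['revenue']) . '</td></tr>';
        }
        echo '</table>';
        mysqli_close($conn);
        ?>

    Point a Web Query at that page's URL and the table comes straight into the worksheet, refreshable on whatever interval you set in the import properties.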

    Read the article

  • Is Linear Tape File System (LTFS) Best For Transportable Storage?

    - by rickramsey
    Those of us in tape storage engineering take a lot of pride in what we do, but understand that tape is the right answer to a storage problem only some of the time. And, unfortunately for a storage medium with such a long history, it has built up a few preconceived notions that are no longer valid. When I hear customers debate whether to implement tape vs. disk, one of the common strikes against tape is its perceived lack of usability. If you could go back a few generations of corporate acquisitions, you would discover that StorageTek engineers recognized this problem and started developing a solution where a tape drive could look just like a memory stick to a user. The goal was to not have to care about where files were on the cartridge, but to simply see the list of files that were on the tape, and click on them to open them up. Eventually, our friends in tape over at IBM built upon our work at StorageTek and Sun Microsystems and released the Linear Tape File System (LTFS) feature for the current LTO5 generation of tape drives as an open specification. LTFS is really a wonderful feature and we’re proud to have taken part in its beginnings and, as you’ll soon read, its future.

    Today we offer LTFS-Open Edition, which is free for you to use in your Oracle Enterprise Linux 5.5 environment - not only on your LTO5 drives, but also on your Oracle StorageTek T10000C drives. You can download it free from Oracle and try it out. LTFS does exactly what its forefathers imagined. Now you can see immediately which files are on a cartridge. LTFS does this by splitting a cartridge into two partitions. The first holds all of the necessary metadata to create a directory structure for you to easily view the contents of the cartridge. The second partition holds all of the files themselves. When tape media is loaded onto a drive, a complete file system image is presented to the user. Adding files to a cartridge can be as simple as a drag-and-drop, just as you do today on your laptop when transferring files from your hard drive to a thumb drive, or with standard POSIX file operations.

    You may be thinking all of this sounds nice, but asking, “when will I actually use it?” As I mentioned at the beginning, tape is not the right solution all of the time. However, if you ever need to physically move data between locations, tape storage with LTFS should be your most cost-effective and reliable answer. I will give you a few examples of use cases where LTFS can be utilized. Media and Entertainment (M&E), Oil and Gas (O&G), and other industries have a strong need for their storage to be transportable. For example, an O&G company hunting for new oil deposits in remote locations takes very large underground seismic images which need to be shipped back to a central data center. M&E operations conduct similar activities when shooting video for productions. M&E companies also often transfer files to third parties for editing and other activities. These companies have three highly flawed options for transporting data: electronic transfer, disk storage transport, or tape storage transport. The first option, electronic transfer, is impractical because of the expense of the bandwidth required to transfer multi-terabyte files reliably and efficiently. If there’s one place that has bandwidth, it’s your local post office, so many companies revert to physically shipping storage media. Typically, M&E companies rely on transporting disk storage between sites even though it, too, is expensive. 
    Tape storage should be the preferred format because, as IDC points out, “Tape is more suitable for physical transportation of large amounts of data as it is less vulnerable to mechanical damage during transportation compared with disk” (see note 1, below). However, tape storage has not been used in the past because of the restrictions created by proprietary formats. A tape may only be readable if both the sender and receiver have the same proprietary application used to write the file. In addition, the workflows may be slowed by the need to read the entire tape cartridge during recall. LTFS solves both of these problems, clearing the way for tape to become the standard platform for transferring large files. LTFS is open and, as long as you’ve downloaded the free reader from our website or that of anyone in the LTO consortium, you can read the data. So if a movie studio ships a scene to a third-party partner to add, for example, sound effects or a music score, it doesn’t have to care what technology the third party has. If it’s written back to an LTFS-formatted tape cartridge, it can be read.

    Some tape vendors like to claim LTFS is a “standard,” but beauty is in the eye of the beholder. It’s a specification at this point, not a standard. That said, we’re already seeing application vendors create functionality to write in an LTFS format based on the specification. And it’s my belief that both customers and the tape storage industry will see the most benefit if we all follow the same path. As such, we have volunteered to lead the way in making LTFS a standard, first with the Storage Networking Industry Association (SNIA), and eventually through to standards bodies such as the American National Standards Institute (ANSI). Expect to hear good news soon about our efforts. So, if storage transportability is one of your requirements, I recommend giving LTFS a look. It makes tape much more user-friendly and it’s free, which allows tape to maintain all of its cost advantages over disk!

    Note 1 - IDC Report, April 2011, “IDC’s Archival Storage Solutions Taxonomy, 2011”. - Brian Zents
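    For anyone who wants to try this out, the typical command-line workflow with the LTFS tools looks roughly like the sketch below. The device path and mount point are assumptions for illustration only; check the documentation that ships with LTFS-Open Edition for the exact options on your platform.

        # Format the cartridge with the two LTFS partitions (index + data); device path is hypothetical
        mkltfs --device=/dev/sg3

        # Mount the cartridge so it appears as an ordinary file system
        mkdir -p /mnt/ltfs
        ltfs -o devname=/dev/sg3 /mnt/ltfs

        # Copy files with normal POSIX operations, then unmount before ejecting the cartridge
        cp seismic_survey_001.dat /mnt/ltfs/
        umount /mnt/ltfs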

    Read the article

  • How to restore your production database without needing additional storage

    - by David Atkinson
    Production databases can get very large. This in itself is to be expected, but when a copy of the database is needed the database must be restored, requiring additional and costly storage.  For example, if you want to give each developer a full copy of your production server, you'll need n times the storage cost for your n-developer team. The same is true for any test databases that are created during the course of your project lifecycle. If you've read my previous blog posts, you'll be aware that I've been focusing on the database continuous integration theme. In my CI setup I create a "production"-equivalent database directly from its source control representation, and use this to test my upgrade scripts. Despite this being a perfectly valid and practical thing to do as part of a CI setup, it's not the exact equivalent of running the upgrade script on a copy of the actual production database. So why shouldn't I instead simply restore the most recent production backup as part of my CI process? There are two reasons why this would be impractical.

    1. My CI environment isn't an exact copy of my production environment. Indeed, this would be the case in a perfect world, and it is strongly recommended as a good practice if you follow Jez Humble and David Farley's "Continuous Delivery" teachings, but in practical terms this might not always be possible, especially where storage is concerned. It may just not be possible to restore a huge production database on the environment you've been allotted.

    2. It's not just about the storage requirements, it's also the time it takes to do the restore. The whole point of continuous integration is that you are alerted as early as possible whether the build (yes, the database upgrade script counts!) is broken. If I have to run an hour-long restore each time I commit a change to source control I'm just not going to get the feedback quickly enough to react.

    So what's the solution? Red Gate has a technology, SQL Virtual Restore, that is able to restore a database without using up additional storage. Although this sounds too good to be true, the explanation is quite simple (although I'm sure the technical implementation details under the hood are quite complex!) Instead of restoring the backup in the conventional sense, SQL Virtual Restore will effectively mount the backup using its HyperBac technology. It creates a data file and a log file, .vmdf and .vldf, that become the delta between the .bak file and the virtual database. This means that both read and write operations are permitted on a virtual database, as from SQL Server's point of view it is no different from a conventional database. Instead of doubling the storage requirements upon a restore, there are no 'duplicate' storage requirements, other than the trivially small virtual log and data files (see illustration below). The benefit is magnified the more databases you mount to the same backup file. This technique could be used to provide a large development team a full development instance of a large production database. It is also incredibly easy to set up. Once SQL Virtual Restore is installed, you simply run a conventional RESTORE command to create the virtual database. This is what I have running as part of a nightly "release test" process triggered by my CI tool. 
    RESTORE DATABASE WidgetProduction_virtual
    FROM DISK=N'C:\WidgetWF\ProdBackup\WidgetProduction.bak'
    WITH
      MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_WidgetProduction_Virtual.vmdf',
      MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_log_WidgetProduction_Virtual.vldf',
      NORECOVERY, STATS=1, REPLACE
    GO
    RESTORE DATABASE mydatabase WITH RECOVERY

    Note the only change from what you would do normally is the naming of the .vmdf and .vldf files. SQL Virtual Restore intercepts this by monitoring the extension and applies its magic, ensuring the 'virtual' restore happens rather than the conventional storage-heavy restore. My automated release test then applies the upgrade scripts to the virtual production database and runs some validation tests, giving me confidence that were I to run this on production for real, all would go smoothly. For illustration, here is my 8Gb production database: And its corresponding backup file: Here are the .vldf and .vmdf files, which represent the only additional used storage for the new database following the virtual restore. The beauty of this product is its simplicity. Once it is installed, the interaction with the backup and virtual database is exactly the same as before, as the clever stuff is being done at a lower level. SQL Virtual Restore can be downloaded as a fully functional 14-day trial. Technorati Tags: SQL Server

    Read the article

  • Search like Google

    - by Rajanikant
    I have a task to make a search module in which i have database users and tablename userProfile and i want to search profile when i entered text in text box for ex. if i entered "I am looking for MBA in delhi" or 'mba information in delhi' it will displayed all user registered expertise as mba and city in delhi . this will be like job portal or any social networking portal my database is -- phpMyAdmin SQL Dump -- version 2.8.1 -- http://www.phpmyadmin.net -- Host: localhost -- Generation Time: May 01, 2010 at 10:58 AM -- Server version: 5.0.21 -- PHP Version: 5.1.4 -- Database: users -- -- Table structure for table userProfile CREATE TABLE userprofile ( id int(11) NOT NULL auto_increment, name varchar(50) collate latin1_general_ci NOT NULL, expertise varchar(50) collate latin1_general_ci NOT NULL, city varchar(50) collate latin1_general_ci NOT NULL, state varchar(50) collate latin1_general_ci NOT NULL, discription varchar(500) collate latin1_general_ci NOT NULL, PRIMARY KEY (id) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 COLLATE=latin1_general_ci AUTO_INCREMENT=3 ; -- -- Dumping data for table userProfile INSERT INTO userProfile VALUES (1, 'a', 'MBA HR', 'Delhi', 'Delhi', 'Fortune is top management college in Delhi, Best B-schools in India providing business studies and management training. FIIB is Delhi based most ranked ...'); INSERT INTO userProfile VALUES (2, 'b', 'MBA marketing', 'Delhi', 'Delhi', 'Fortune is top management college in Delhi, Best B-schools in India providing business studies and management training. FIIB is Delhi based most ranked ...'); and search.php page <?php include("config.php"); include("class.search.php"); $br=new search(); if($_POST['searchbutton']) { $str=$_POST['textfield']; $brstr=$br->breakkey($str); } ?> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Untitled Document</title> </head> <body> <table width="100%" border="0"> <form name="frmsearch" method="post"> <tr> <td width="367">&nbsp;</td> <td width="300"><label> <input name="textfield" type="text" id="textfield" size="50" /> </label></td> <td width="294"><label> <input type="submit" name="searchbutton" id="button" value="Search" /> </label></td> </tr></form> <tr> <td>&nbsp;</td> <td>&nbsp;</td> <td>&nbsp;</td> </tr> <tr> <td>&nbsp;</td> <td>&nbsp;</td> <td>&nbsp;</td> </tr> </table> </body> </html> and config.php is <?php error_reporting(E_ALL); $host="localhost"; $username="root"; $password=""; $dbname="users"; $con=mysql_connect($host,$username,$password) or die("could not connect database"); $db=mysql_select_db($dbname,$con) or die("could not select database"); ?> and class.search.php is <?php class search { function breakkey($key) { global $db; $words=explode(' ',$key); return $words; } function searchitem($perm) { global $db; foreach($perm as $k=>$v) { $sql="select * from users" } } } ?>
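    Not a complete job-portal answer, but one way to finish the searchitem() method is sketched below; it assumes the userprofile columns from the dump above and simply turns every keyword returned by breakkey() into a LIKE condition over the text columns. In a real portal you would also strip stop words ("I", "am", "for", "in", ...) or switch to MySQL full-text search so that filler words don't match everything.

        <?php
        // Hypothetical completion of searchitem() in class.search.php.
        // $words is the array returned by breakkey().
        function searchitem($words)
        {
            $conditions = array();
            foreach ($words as $w) {
                $w = mysql_real_escape_string(trim($w));
                if ($w == '') {
                    continue;
                }
                $conditions[] = "(expertise LIKE '%$w%' OR city LIKE '%$w%'
                                  OR state LIKE '%$w%' OR discription LIKE '%$w%')";
            }
            if (empty($conditions)) {
                return array();
            }
            $sql = "SELECT * FROM userprofile WHERE " . implode(' OR ', $conditions);
            $result = mysql_query($sql);
            $rows = array();
            while ($row = mysql_fetch_assoc($result)) {
                $rows[] = $row;
            }
            return $rows;
        }
        ?>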

    Read the article

  • Zend_Nav in Zend Framework: issue getting the menu to show up

    - by Kalle Johansson
    Hi ! First time of here, so please bare with me. I have to following problem, using zend framework, specifically zend_nav to create a reusable menu to be passed in via the layout/layout.phtml page. These are the code fragments in their respective files. first of in application/configs/navigation.xml, <configdata> <nav> <label>Home</label> <controller>index</controller> <action>index</action> <pages> <add> <label>Add</label> <controller>post</controller> <action>add</action> </add> <login> <label>Admin</label> <controller>login</controller> <action>login</action> </login> </pages> </nav> </configdata> this is then passed into an object in the Bootstrap.php file(only showing that specific method) protected function __initNavigation(){ $this->bootstrap('layout'); $layout = $this->getResource('layout'); $view = $layout->getView(); $config = new Zend_Config_Xml(APPLICATION .'/config/navigation.xml', 'nav'); $container = new Zend_Navigation($config); $view->navigation($container); } and then finally in the view layout.phtml, the object should return the menu <!-- application/layouts/scripts/layout.phtml --> <?php echo $this->doctype() ?> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Zend Blog !</title> <?php echo $this->headLink()->appendStylesheet('/css/global.css')?> </head> <body> <div id="header"> <div id="header-logo"> <b>Blog Me !</b> </div> </div> <div id="menu"> <?php echo $this->navigation()->menu() ?> </div> <div id="content"> <?php echo $this->layout()->content ?> </div> </body> </html> The menu does however not show up when i start up my app in the browser, any ideas as to might have gone wrong, is humbly received. Thanks in advance Kalle Johansson
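    Not an official answer, but two details in the bootstrap stand out: Zend Framework 1 only runs resource methods whose names begin with a single underscore (_init...), and the file is described as living in application/configs/ while the code reads from .../config/. A corrected sketch, assuming the standard APPLICATION_PATH constant is defined in index.php, might look like this:

        protected function _initNavigation()   // single underscore so the bootstrap actually calls it
        {
            $this->bootstrap('layout');
            $layout = $this->getResource('layout');
            $view = $layout->getView();

            // 'configs' (plural) to match application/configs/navigation.xml
            $config = new Zend_Config_Xml(APPLICATION_PATH . '/configs/navigation.xml', 'nav');

            $container = new Zend_Navigation($config);
            $view->navigation($container);
        }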

    Read the article

  • Saving images only available when logged in

    - by James
    Hi, I've been having some trouble getting images to download when logged into a website that requires you to be logged in. The images can only be viewed when you are logged in to the site, but you cannot seem to view them directly in the browser if you copy its location into a tab/new window (it redirects to an error page - so I guess the containing folder has be .htaccess-ed). Anyway, the code I have below allows me to log in and grab the HTML content, which works well - but I cannot grab the images ... this is where I need help! <? // curl.php class Curl { public $cookieJar = ""; public function __construct($cookieJarFile = 'cookies.txt') { $this->cookieJar = $cookieJarFile; } function setup() { $header = array(); $header[0] = "Accept: text/xml,application/xml,application/xhtml+xml,"; $header[0] .= "text/html;q=0.9,text/plain;q=0.8,image/gif;q=0.8,image/x-bitmap;q=0.8,image/jpeg;q=0.8,image/png,*/*;q=0.5"; $header[] = "Cache-Control: max-age=0"; $header[] = "Connection: keep-alive"; $header[] = "Keep-Alive: 300"; $header[] = "Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7"; $header[] = "Accept-Language: en-us,en;q=0.5"; $header[] = "Pragma: "; // browsers keep this blank. curl_setopt($this->curl, CURLOPT_USERAGENT, 'Mozilla/5.0 (Windows; U; Windows NT 5.2; en-US; rv:1.8.1.7) Gecko/20070914 Firefox/2.0.0.7'); curl_setopt($this->curl, CURLOPT_HTTPHEADER, $header); curl_setopt($this->curl, CURLOPT_COOKIEJAR, $this->cookieJar); curl_setopt($this->curl, CURLOPT_COOKIEFILE, $this->cookieJar); curl_setopt($this->curl, CURLOPT_AUTOREFERER, true); curl_setopt($this->curl, CURLOPT_FOLLOWLOCATION, true); curl_setopt($this->curl, CURLOPT_RETURNTRANSFER, true); } function get($url) { $this->curl = curl_init($url); $this->setup(); return $this->request(); } function getAll($reg, $str) { preg_match_all($reg, $str, $matches); return $matches[1]; } function postForm($url, $fields, $referer = '') { $this->curl = curl_init($url); $this->setup(); curl_setopt($this->curl, CURLOPT_URL, $url); curl_setopt($this->curl, CURLOPT_POST, 1); curl_setopt($this->curl, CURLOPT_REFERER, $referer); curl_setopt($this->curl, CURLOPT_POSTFIELDS, $fields); return $this->request(); } function getInfo($info) { $info = ($info == 'lasturl') ? curl_getinfo($this->curl, CURLINFO_EFFECTIVE_URL) : curl_getinfo($this->curl, $info); return $info; } function request() { return curl_exec($this->curl); } } ?> And below is the page that uses it. <? // data.php include('curl.php'); $curl = new Curl(); $url = "http://domain.com/login.php"; $newURL = "http://domain.com/go_here.php"; $username = "user"; $password = "pass"; $fields = "user=$username&pass=$password"; // Calling URL $referer = "http://domain.com/refering_page.php"; $html = $curl->postForm($url, $fields, $referer); $html = $curl->get($newURL); echo $html; ?> I've tried putting the direct URL for the image into $newURL but that doesn't get the image - it simply returns an error saying since that folder is not available to view directly. I've tried varying the above in different ways, but I haven't been successful in getting an image, though I have managed to get a screen through basically saying error 405 and/or 406 (but not the image I want). Any help would be great!
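    One possible approach, sketched below: because setup() already turns on CURLOPT_RETURNTRANSFER and reuses the same cookie jar, the logged-in session can fetch the image bytes directly and write them to disk instead of echoing them. The image URL here is hypothetical; normally you would scrape it out of $html first (for example with getAll() and an <img src> regex).

        <?
        // After the login in data.php has populated cookies.txt:
        $imageUrl = "http://domain.com/images/protected_photo.jpg"; // hypothetical protected image

        // get() returns the raw response body, which for an image is binary data
        $imageData = $curl->get($imageUrl);

        if ($imageData !== false && strlen($imageData) > 0) {
            // Write the bytes straight to a file instead of echoing them to the browser
            file_put_contents("protected_photo.jpg", $imageData);
        } else {
            echo "Image request failed - check the cookies, referer and image URL.";
        }
        ?>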

    Read the article

  • Subterranean IL: Constructor constraints

    - by Simon Cooper
    The constructor generic constraint is a slightly wierd one. The ECMA specification simply states that it: constrains [the type] to being a concrete reference type (i.e., not abstract) that has a public constructor taking no arguments (the default constructor), or to being a value type. There seems to be no reference within the spec to how you actually create an instance of a generic type with such a constraint. In non-generic methods, the normal way of creating an instance of a class is quite different to initializing an instance of a value type. For a reference type, you use newobj: newobj instance void IncrementableClass::.ctor() and for value types, you need to use initobj: .locals init ( valuetype IncrementableStruct s1 ) ldloca 0 initobj IncrementableStruct But, for a generic method, we need a consistent method that would work equally well for reference or value types. Activator.CreateInstance<T> To solve this problem the CLR designers could have chosen to create something similar to the constrained. prefix; if T is a value type, call initobj, and if it is a reference type, call newobj instance void !!0::.ctor(). However, this solution is much more heavyweight than constrained callvirt. The newobj call is encoded in the assembly using a simple reference to a row in a metadata table. This encoding is no longer valid for a call to !!0::.ctor(), as different constructor methods occupy different rows in the metadata tables. Furthermore, constructors aren't virtual, so we would have to somehow do a dynamic lookup to the correct method at runtime without using a MethodTable, something which is completely new to the CLR. Trying to do this in IL results in the following verification error: newobj instance void !!0::.ctor() [IL]: Error: Unable to resolve token. This is where Activator.CreateInstance<T> comes in. We can call this method to return us a new T, and make the whole issue Somebody Else's Problem. CreateInstance does all the dynamic method lookup for us, and returns us a new instance of the correct reference or value type (strangely enough, Activator.CreateInstance<T> does not itself have a .ctor constraint on its generic parameter): .method private static !!0 CreateInstance<.ctor T>() { call !!0 [mscorlib]System.Activator::CreateInstance<!!0>() ret } Going further: compiler enhancements Although this method works perfectly well for solving the problem, the C# compiler goes one step further. If you decompile the C# version of the CreateInstance method above: private static T CreateInstance() where T : new() { return new T(); } what you actually get is this (edited slightly for space & clarity): .method private static !!T CreateInstance<.ctor T>() { .locals init ( [0] !!T CS$0$0000, [1] !!T CS$0$0001 ) DetectValueType: ldloca.s 0 initobj !!T ldloc.0 box !!T brfalse.s CreateInstance CreateValueType: ldloca.s 1 initobj !!T ldloc.1 ret CreateInstance: call !!0 [mscorlib]System.Activator::CreateInstance<T>() ret } What on earth is going on here? Looking closer, it's actually quite a clever performance optimization around value types. So, lets dissect this code to see what it does. The CreateValueType and CreateInstance sections should be fairly self-explanatory; using initobj for value types, and Activator.CreateInstance for reference types. How does the DetectValueType section work? 
First, the stack transition for value types: ldloca.s 0 // &[!!T(uninitialized)] initobj !!T // ldloc.0 // !!T box !!T // O[!!T] brfalse.s // branch not taken When the brfalse.s is hit, the top stack entry is a non-null reference to a boxed !!T, so execution continues to to the CreateValueType section. What about when !!T is a reference type? Remember, the 'default' value of an object reference (type O) is zero, or null. ldloca.s 0 // &[!!T(null)] initobj !!T // ldloc.0 // null box !!T // null brfalse.s // branch taken Because box on a reference type is a no-op, the top of the stack at the brfalse.s is null, and so the branch to CreateInstance is taken. For reference types, Activator.CreateInstance is called which does the full dynamic lookup using reflection. For value types, a simple initobj is called, which is far faster, and also eliminates the unboxing that Activator.CreateInstance has to perform for value types. However, this is strictly a performance optimization; Activator.CreateInstance<T> works for value types as well as reference types. Next... That concludes the initial premise of the Subterranean IL series; to cover the details of generic methods and generic code in IL. I've got a few other ideas about where to go next; however, if anyone has any itching questions, suggestions, or things you've always wondered about IL, do let me know.

    Read the article

  • Why does a bash-zenity script have that title on the Unity Panel and that icon on the Unity Launcher?

    - by Sadi
    I have this small bash script which helps use Infinality font rendering options via a more user-friendly Zenity window. But whenever I launch it I have this "Color Picker" title on Unity Panel together with the icon assigned for "Color Picker" utility. I wonder why and how this is happening and how I can change it? #!/bin/bash # A simple script to provide a basic, zenity-based GUI to change Infinality Style. # v.1.2 # infinality_current=`cat /etc/profile.d/infinality-settings.sh | grep "USE_STYLE=" | awk -F'"' '{print $2}'` sudo_password="$( gksudo --print-pass --message 'Provide permission to make system changes: Enter your password to start or press Cancel to quit.' -- : 2>/dev/null )" # Check for null entry or cancellation. if [[ ${?} != 0 || -z ${sudo_password} ]] then # Add a zenity message here if you want. exit 4 fi # Check that the password is valid. if ! sudo -kSp '' [ 1 ] <<<"${sudo_password}" 2>/dev/null then # Add a zenity message here if you want. exit 4 fi # menu(){ im="zenity --width=500 --height=490 --list --radiolist --title=\"Change Infinality Style\" --text=\"Current <i>Infinality Style</i> is\: <b>$infinality_current</b>\n? To <i>change</i> it, select any other option below and press <b>OK</b>\n? To <i>quit without changing</i>, press <b>Cancel</b>\" " im=$im" --column=\" \" --column \"Options\" --column \"Description\" " im=$im"FALSE \"DEFAULT\" \"Use default settings - a compromise that should please most people\" " im=$im"FALSE \"OSX\" \"Simulate OSX rendering\" " im=$im"FALSE \"IPAD\" \"Simulate iPad rendering\" " im=$im"FALSE \"UBUNTU\" \"Simulate Ubuntu rendering\" " im=$im"FALSE \"LINUX\" \"Generic Linux style - no snapping or certain other tweaks\" " im=$im"FALSE \"WINDOWS\" \"Simulate Windows rendering\" " im=$im"FALSE \"WIN7\" \"Simulate Windows 7 rendering with normal glyphs\" " im=$im"FALSE \"WINLIGHT\" \"Simulate Windows 7 rendering with lighter glyphs\" " im=$im"FALSE \"VANILLA\" \"Just subpixel hinting\" " im=$im"FALSE \"CLASSIC\" \"Infinality rendering circa 2010 - No snapping.\" " im=$im"FALSE \"NUDGE\" \"Infinality - Classic with lightly stem snapping and tweaks\" " im=$im"FALSE \"PUSH\" \"Infinality - Classic with medium stem snapping and tweaks\" " im=$im"FALSE \"SHOVE\" \"Infinality - Full stem snapping and tweaks without sharpening\" " im=$im"FALSE \"SHARPENED\" \"Infinality - Full stem snapping, tweaks, and Windows-style sharpening\" " im=$im"FALSE \"INFINALITY\" \"Infinality - Standard\" " im=$im"FALSE \"DISABLED\" \"Act without extra infinality enhancements - just subpixel hinting\" " } # option(){ choice=`echo $im | sh -` # if echo $choice | grep "DEFAULT" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"DEFAULT\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "OSX" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"OSX\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "IPAD" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"IPAD\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "UBUNTU" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"UBUNTU\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "LINUX" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"LINUX\"/g" 
'/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "WINDOWS" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"WINDOWS\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "WIN7" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"WINDOWS7\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "WINLIGHT" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"WINDOWS7LIGHT\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "VANILLA" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"VANILLA\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "CLASSIC" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"CLASSIC\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "NUDGE" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"NUDGE\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "PUSH" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"PUSH\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "SHOVE" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"SHOVE\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "SHARPENED" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"SHARPENED\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "INFINALITY" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"INFINALITY\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # if echo $choice | grep "DISABLED" > /dev/null; then sudo -Sp '' sed -i "s/USE_STYLE=\"${infinality_current}\"/USE_STYLE=\"DISABLED\"/g" '/etc/profile.d/infinality-settings.sh' <<<"${sudo_password}" fi # } # menu option # if test ${#choice} -gt 0; then echo "Operation completed" fi # exit 0
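    I can't say for certain why Unity matches the window to the Color Picker entry, but as a workaround you can try forcing the window title and icon explicitly on the zenity call. --title is already set in the script above; older zenity builds also accept --window-icon (the icon path below is only an example, point it at whatever icon you want):

        # Hypothetical tweak to the zenity invocation in the script above
        im="zenity --width=500 --height=490 --list --radiolist \
            --title=\"Change Infinality Style\" \
            --window-icon=/usr/share/icons/hicolor/48x48/apps/preferences-desktop-font.png \
            --text=\"...\" "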

    Read the article

  • ISACA Webcast follow up: Managing High Risk Access and Compliance with a Platform Approach to Privileged Account Management

    - by Darin Pendergraft
    Last week we presented how Oracle Privileged Account Manager (OPAM) could be used to manage high risk, privileged accounts.  If you missed the webcast, here is a link to the replay: ISACA replay archive (NOTE: you will need to use Internet Explorer to view the archive) For those of you that did join us on the call, you will know that I only had a little bit of time for Q&A, and was only able to answer a few of the questions that came in.  So I wanted to devote this blog to answering the outstanding questions.  Here they are. 1. Can OPAM track admin or DBA activity details during a password check-out session? Oracle Audit Vault is monitoring these activities which can be correlated to check-out events. 2. How would OPAM handle simultaneous requests? OPAM can be configured to allow for shared passwords.  By default sharing is turned off. 3. How long are the passwords valid?  Are the admins required to manually check them in? Password expiration can be configured and set in the password policy according to your corporate standards.  You can specify if you want forced check-in or not. 4. Can 2-factor authentication be used with OPAM? Yes - 2-factor integration with OPAM is provided by integration with Oracle Access Manager, and Oracle Adaptive Access Manager. 5. How do you control access to OPAM to ensure that OPAM admins don't override the functionality to access privileged accounts? OPAM provides separation of duties by using Admin Roles to manage access to targets and privileged accounts and to control which operations admins can perform. 6. How and where are the passwords stored in OPAM? OPAM uses Oracle Platform Security Services (OPSS) Credential Store Framework (CSF) to securely store passwords.  This is the same system used by Oracle Applications. 7. Does OPAM support hierarchical/level based privileges?  Is the log maintained for independent review/audit? Yes. OPAM uses the Fusion Middleware (FMW) Audit Framework to store all OPAM related events in a dedicated audit database.  8. Does OPAM support emergency access in the case where approvers are not available until later? Yes.  OPAM can be configured to release a password under a "break-glass" emergency scenario. 9. Does OPAM work with AIX? Yes supported UNIX version are listed in the "certified component section" of the UNIX connector guide at:http://docs.oracle.com/cd/E22999_01/doc.111/e17694/intro.htm#autoId0 10. Does OPAM integrate with Sun Identity Manager? Yes.  OPAM can be integrated with SIM using the REST  APIs.  OPAM has direct integration with Oracle Identity Manager 11gR2. 11. Is OPAM available today and what does it cost? Yes.  OPAM is available now.  Ask your Oracle Account Manager for pricing. 12. Can OPAM be used in SAP environments? Yes, supported SAP version are listed in the "certified component section" of the SAP  connector guide here: http://docs.oracle.com/cd/E22999_01/doc.111/e25327/intro.htm#autoId0 13. How would this product integrate, if at all, with access to a particular field in the DB that need additional security such as SSN's? OPAM can work with DB Vault and DB Firewall to provide the fine grained access control for databases. 14. Is VM supported? As a deployment platform Oracle VM is supported. For further details about supported Virtualization Technologies see Oracle Fusion Middleware Supported System configurations here: http://www.oracle.com/technetwork/middleware/ias/downloads/fusion-certification-100350.html 15. Where did this (OPAM) technology come from? OPAM was built by Oracle Engineering. 
16. Are all Linux flavors supported?  How about BSD? BSD is not supported. For supported UNIX version see the "certified component section" of the UNIX connector guide http://docs.oracle.com/cd/E22999_01/doc.111/e17694/intro.htm#autoId0 17. What happens if users don't check passwords in at the end of a work task? In OPAM a time frame can be defined how long a password can be checked out. The security admin can force a check-in at any given time. 18. is MySQL supported? Yes, supported DB version are listed in the "certified component section" of the DB connector guide here: http://docs.oracle.com/cd/E22999_01/doc.111/e28315/intro.htm#BABGJJHA 19. What happens when OPAM crashes and you need to use the password? OPAM can be configured for high availability, but if required, OPAM data can be backed up/recovered.  See the OPAM admin guide. 20. Is OPAM Standalone product or does it leverage other components from IDM? OPAM can be run stand-alone, but will also leverage other IDM components

    Read the article

  • Update DataGridView using AJAX in ASP.NET without refreshing the page (display real-time data)

    - by kurt_jackson19
    I need to display a real time data from MS SQL 2005. I saw some blogs that recommend Ajax to solve my problem. Basically, right now I have my default.aspx page only just for a workaround I could able to display the data from my DB. But once I add data manually to my DB there's no updating made. Any suggestions guys to fix this problem? I need to update datagridview with out refreshing the page. Here's my code on Default.aspx.cs using System; using System.Data; using System.Configuration; using System.Web; using System.Web.Security; using System.Web.UI; using System.Web.UI.WebControls; using System.Web.UI.WebControls.WebParts; using System.Web.UI.HtmlControls; using System.Data.SqlClient; public partial class _Default : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { FillDataGridView(); } protected void up1_Load(object sender, EventArgs e) { FillDataGridView(); } protected void FillDataGridView() { DataSet objDs = new DataSet(); SqlConnection myConnection = new SqlConnection (ConfigurationManager.ConnectionStrings["MainConnStr"].ConnectionString); SqlDataAdapter myCommand; string select = "SELECT * FROM Categories"; myCommand = new SqlDataAdapter(select, myConnection); myCommand.SelectCommand.CommandType = CommandType.Text; myConnection.Open(); myCommand.Fill(objDs); GridView1.DataSource = objDs; GridView1.DataBind(); } } Code on my Default.aspx <html xmlns="http://www.w3.org/1999/xhtml" > <head runat="server"> <title>Ajax Sample</title> </head> <body> <form id="form1" runat="server"> <asp:ScriptManager ID="ScriptManager1" runat="server" EnablePageMethods="true"> <Scripts> <asp:ScriptReference Path="JScript.js" /> </Scripts> </asp:ScriptManager> <asp:UpdatePanel ID="UpdatePanel1" runat="server" OnLoad="up1_Load"> <ContentTemplate> <asp:GridView ID="GridView1" runat="server" Height="136px" Width="325px"/> </ContentTemplate> <Triggers> <asp:AsyncPostBackTrigger ControlID="GridView1" /> </Triggers> </asp:UpdatePanel> </form> </body> </html> My problem now is how to call or use the ajax.js and how to write a code to call the FillDataGridView() in my Default.aspx.cs page. Thank you guys, hope anyone can help me on this problem.
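    One common way to make the UpdatePanel poll for new rows without a full page refresh is to drive it from an asp:Timer instead of the GridView; a sketch of the changed markup (the 5-second interval is arbitrary) and the matching code-behind handler is below.

        <asp:UpdatePanel ID="UpdatePanel1" runat="server" OnLoad="up1_Load">
            <ContentTemplate>
                <!-- Posts back asynchronously every 5 seconds (interval is arbitrary) -->
                <asp:Timer ID="Timer1" runat="server" Interval="5000" OnTick="Timer1_Tick" />
                <asp:GridView ID="GridView1" runat="server" Height="136px" Width="325px" />
            </ContentTemplate>
            <Triggers>
                <asp:AsyncPostBackTrigger ControlID="Timer1" EventName="Tick" />
            </Triggers>
        </asp:UpdatePanel>

    And in Default.aspx.cs:

        protected void Timer1_Tick(object sender, EventArgs e)
        {
            // Re-query the database so newly inserted rows appear without refreshing the page
            FillDataGridView();
        }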

    Read the article

  • Design review for application facing memory issues

    - by Mr Moose
    I apologise in advance for the length of this post, but I want to paint an accurate picture of the problems my app is facing and then pose some questions below; I am trying to address some self inflicted design pain that is now leading to my application crashing due to out of memory errors. An abridged description of the problem domain is as follows; The application takes in a “dataset” that consists of numerous text files containing related data An individual text file within the dataset usually contains approx 20 “headers” that contain metadata about the data it contains. It also contains a large tab delimited section containing data that is related to data in one of the other text files contained within the dataset. The number of columns per file is very variable from 2 to 256+ columns. The original application was written to allow users to load a dataset, map certain columns of each of the files which basically indicating key information on the files to show how they are related as well as identify a few expected column names. Once this is done, a validation process takes place to enforce various rules and ensure that all the relationships between the files are valid. Once that is done, the data is imported into a SQL Server database. The database design is an EAV (Entity-Attribute-Value) model used to cater for the variable columns per file. I know EAV has its detractors, but in this case, I feel it was a reasonable choice given the disparate data and variable number of columns submitted in each dataset. The memory problem Given the fact the combined size of all text files was at most about 5 megs, and in an effort to reduce the database transaction time, it was decided to read ALL the data from files into memory and then perform the following; perform all the validation whilst the data was in memory relate it using an object model Start DB transaction and write the key columns row by row, noting the Id of the written row (all tables in the database utilise identity columns), then the Id of the newly written row is applied to all related data Once all related data had been updated with the key information to which it relates, these records are written using SqlBulkCopy. Due to our EAV model, we essentially have; x columns by y rows to write, where x can by 256+ and rows are often into the tens of thousands. Once all the data is written without error (can take several minutes for large datasets), Commit the transaction. The problem now comes from the fact we are now receiving individual files containing over 30 megs of data. In a dataset, we can receive any number of files. We’ve started seen datasets of around 100 megs coming in and I expect it is only going to get bigger from here on in. With files of this size, data can’t even be read into memory without the app falling over, let alone be validated and imported. I anticipate having to modify large chunks of the code to allow validation to occur by parsing files line by line and am not exactly decided on how to handle the import and transactions. Potential improvements I’ve wondered about using GUIDs to relate the data rather than relying on identity fields. This would allow data to be related prior to writing to the database. This would certainly increase the storage required though. Especially in an EAV design. Would you think this is a reasonable thing to try, or do I simply persist with identity fields (natural keys can’t be trusted to be unique across all submitters). 
    Use of staging tables to get data into the database and only performing the transaction to copy data from the staging area to the actual destination tables. Questions For systems like this that import large quantities of data, how do you go about keeping transactions small? I’ve kept them as small as possible in the current design, but they are still active for several minutes and write hundreds of thousands of records in one transaction. Is there a better solution? The tab delimited data section is read into a DataTable to be viewed in a grid. I don’t need the full functionality of a DataTable, so I suspect it is overkill. Is there any way to turn off various features of DataTables to make them more lightweight? Are there any other obvious things you would do in this situation to minimise the memory footprint of the application described above? Thanks for your kind attention.
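    On the memory side specifically, here is a rough sketch of the kind of change implied above: read and validate the tab-delimited section line by line with a StreamReader, so memory stays proportional to a single row rather than the whole file. The names used here (errors, ValidateRow) are illustrative placeholders, not your actual types.

        // Illustrative only: stream a data file row by row instead of loading it all into memory.
        using (var reader = new StreamReader(filePath))
        {
            string line;
            int lineNumber = 0;
            string[] headerColumns = null;

            while ((line = reader.ReadLine()) != null)
            {
                lineNumber++;
                var fields = line.Split('\t');

                if (headerColumns == null)
                {
                    headerColumns = fields;            // first row of the section holds the column names
                    continue;
                }

                if (fields.Length != headerColumns.Length)
                {
                    errors.Add(string.Format("Line {0}: expected {1} columns, found {2}",
                        lineNumber, headerColumns.Length, fields.Length));
                    continue;
                }

                ValidateRow(headerColumns, fields);    // hypothetical per-row validation hook
            }
        }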

    Read the article

  • SQL SERVER – Recover the Accidentally Renamed Table

    - by pinaldave
    I have no answer to the following question. I saw a desperate email marked as urgent delivered in my mailbox. “I accidentally renamed a table in my SSMS. I was scrolling very fast and I made mistakes. It was either because I double clicked or clicked on F2 (shortcut key for renaming). However, I have made the mistake and now I have no idea how to fix this. I am in big trouble. Help me get my original tablename.” I have seen many similar scenarios in my life and they give me a very good opportunity to preach wisdom, but when the house is burning, we cannot talk about how we should have conserved the water earlier. The goal at that point is to put out the fire as fast as we can. I decided to answer this email with my best knowledge. If you have renamed the table, I think you are pretty much out of luck. Here are a few things you can do which can give you an idea of what your table name might have been, if you are lucky.

    Method 1: (Not Recommended but try your luck) Check the naming convention of your system. I have often seen that many organizations name their indexes as IX_TableName_Colms or name their keys as FK_TableName1_TableName2_Cols. If your organization follows the same convention, you can get the name of your table back by referring to your keys. Again, note that it is quite possible that your table name was already renamed and your keys were not updated. This can easily lead you to select an incorrect name. Follow this only if you are confident, or move to the next method.

    Method 2: (Not Recommended but try your luck) This method is also based on your org's naming convention. If you use the name of the table in any column name (some organizations use the table name in their incremental identity column name), you can get that name from there.

    Method 3: (Not Recommended but try your luck) If you know where your table was used in your stored procedures, you can script your stored procedures and find the original name of the table there.

    Method 4: (Try your luck) All the best organizations first create a data model of the schema, and there is a good chance that this table is used there; you should take your chances and refer to the original document. If your organization is good at managing docs or source code, you will get the name of the table back for sure.

    Method 5: (It WORKS but try on a development server) There is no sure way to get back the name of the table which you accidentally renamed; however, there is one way which will work for sure. You need to take your latest full backup and restore it on your development server (remember: not on production or wherever you renamed this table). Now restore the latest differential file of the full backup. Now restore all the log files one by one, making sure that you stop before the point in time at which you renamed the table. Now go explore, and this will give you the name of the table which you have renamed. If you are confident that the same table existed with the same name when the last full backup was made, you do not have to go through all the steps. You can just get the name of the table directly from the last backup's restore. Read the article about Backup Timeline.

    Wisdom: How can I miss the opportunity to preach wisdom when I get the chance to do so? Here are a few points to remember. Use a different account to explore the production environment. Do not use the same account which has all the rights and permissions all the time. Use an account which has read-only permissions if no modifications are required. Use policy based management to prevent changes which are accidental. 
    If there were a policy of valid names in place, an accidental change of the table name would not have been possible unless it was an intentional, deliberate change. Have proper auditing of the system in place. You can use DDL triggers, but be careful with their usage (get it reviewed properly first). (Add your suggestion here) I guess Method 5 will work all the time (using point in time restore). Everything else is a matter of luck, and if your luck is bad, you will end up with a further incorrect name. Now go back and read the first line of this blog. Out of the five methods, four are just lucky guesses. Method 5 will work, but again it is a lengthy process if the size of the database is huge or if you do not have a full backup. Did I miss anything obvious? Please leave a comment and I will publish your answer with due credit. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
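    For Method 5, the point-in-time restore itself looks roughly like this in T-SQL. The database name, file paths and the STOPAT timestamp (a moment safely before the accidental rename) are all placeholders.

        -- Restore the full backup to a throwaway copy on a development server
        RESTORE DATABASE MyDB_Recovered
        FROM DISK = N'D:\Backups\MyDB_Full.bak'
        WITH MOVE N'MyDB' TO N'D:\Data\MyDB_Recovered.mdf',
             MOVE N'MyDB_log' TO N'D:\Data\MyDB_Recovered.ldf',
             NORECOVERY;

        -- Apply the latest differential backup, still without recovering
        RESTORE DATABASE MyDB_Recovered
        FROM DISK = N'D:\Backups\MyDB_Diff.bak'
        WITH NORECOVERY;

        -- Roll the log forward to just before the rename, then recover and look up the old table name
        RESTORE LOG MyDB_Recovered
        FROM DISK = N'D:\Backups\MyDB_Log1.trn'
        WITH STOPAT = '2012-06-25T09:45:00', RECOVERY;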

    Read the article

  • SPECIALIZATION TO DIFFERENTIATE YOURSELF AND BE RECOGNIZED

    - by michaela.seika(at)oracle.com
    SPECIALIZATION TO DIFFERENTIATE YOURSELF AND BE RECOGNIZED

    The market is asking us for more and more solutions and commitments. To build these solutions we rely on you, Oracle Partners. When it comes to commitments, Oracle must communicate to the market about the specialization of its partners and their skills, according to the projects that customers ask us to address. More than 50 specializations are available to date for Gold, Platinum and Diamond partners: • on technology products such as the Database, Database options, SOA, Business Intelligence, … • on application products such as ERP, CRM, … • on hardware products and operating systems. To help you specialize, and therefore get certified, our two value-added distributors, Altimate and Arrow ECS, assist you in this process. ALTIMATE invites you to take part in the Lunch & Spécialisation tour: take advantage of these programs, which have been put in place for you, to specialize and enjoy all the benefits that specialization gives you access to. ARROW ECS invites you to take part in L'Ecole de la spécialisation Oracle by Arrow: take advantage of these programs, which have been put in place for you, to specialize and enjoy all the benefits that specialization gives you access to. Oracle Solutions Tour: discover the Oracle solution during this tour of France. On the agenda: roadmaps, product and solution workshops, certifications.

    BENEFITS (learn more): • Oracle's commitment alongside its partners to address major deals • visibility with customers, to be identified as an Expert on an offering, recognized and validated by Oracle • support (free access to support) and Oracle University (vouchers to certify your Implementation Consultants free of charge) • marketing budgets (lead generation, creation of marketing campaigns, sponsorship of customer events).

    Differentiate yourself by specializing in your area of expertise and accelerate your success! Oracle and its Value-Added Distributors. Eric Fontaine, Director Alliances & Channel Technology Southern Europe, presents specialization and its benefits in a video.

    CONTACTS: ORACLE: Jean-Jacques Panissié, Oracle Partner Development A&C Technology, +33 157 60 28 52. ALTIMATE: Sophie Daval, +33 1 34 58 47 68. ARROW: [email protected], +33 1 49 97 59 63.

    Read the article

  • How to read Urdu from a URL and show it on your mobile using Java ME

    - by Basit
    HI, Hope you all will be fine. Actually I m facing a problem. Actually i am using Google translation API. What my application does it connect to CGI-script, i pass value to it using GET then the CGI script connect to Google API, Translate the mesage from english to Urdu and then i retreive it.Here is the code [Java] import java.io.*; import javax.microedition.io.*; import javax.microedition.lcdui.*; import javax.microedition.midlet.*; /** * An example MIDlet to invoke a CGI script (GET method). */ public class InvokeCgiMidlet1 extends MIDlet { private Display display; String url = "http://xxx.xxx.xx.xxx/cgi-bin/api/GT/GT_Send_Msg.cgi?message=my%20name%20is%20basit"; public InvokeCgiMidlet1() { display = Display.getDisplay(this); } /** * Initialization. Invoked when we activate the MIDlet. */ public void startApp() { try { getGrade(url); } catch (IOException e) { System.out.println("IOException " + e); e.printStackTrace(); } } /** * Pause, discontinue .... */ public void pauseApp() { } /** * Destroy must cleanup everything. */ public void destroyApp(boolean unconditional) { } /** * Retrieve a grade.... */ void getGrade(String url) throws IOException { HttpConnection c = null; InputStream is = null; OutputStream os = null; StringBuffer b = new StringBuffer(); TextBox t = null; String response; try { c = (HttpConnection)Connector.open(url, Connector.READ_WRITE); c.setRequestMethod(HttpConnection.GET); c.setRequestProperty("User-Agent","Profile/MIDP-1.0 Confirguration/CLDC-1.0"); c.setRequestProperty("Content-Type","application/x-www-form-urlencoded"); c.setRequestProperty("User-Agent","Profile/MIDP-2.0 Configuration/CLDC-1.1"); c.setRequestProperty("Accept-Charset","UTF-8;q=0.7,*;q=0.7"); c.setRequestProperty("Accept-Encoding","gzip, deflate"); c.setRequestProperty("Accept","text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5"); c.setRequestProperty("Content-Language", "en-EN"); os = c.openOutputStream(); Reader r = new InputStreamReader(c.openDataInputStream(), "UTF-8"); int ch; while ((ch = r.read()) != -1) { b.append((char) ch ); //System.out.println((char)ch + "->" + ch + "->" + ch); } t = new TextBox("Final Grades", b.reverse().toString(), 1024, 0); } finally { if(is!= null) { is.close(); } if(os != null) { os.close(); } if(c != null) { c.close(); } } display.setCurrent(t); } } [/Java] The problem is as i told you that the translated text is in Urdu. So when it appear on screen each character is separate like this. ? ?? ? ? ?? . Because i read character by character I want it to appear in proper form like this ??????? So how can i do this. Is there font rendering required. If yes then how can i do it or any other method please hep me. Thanks

    Read the article

  • Configure IPv6 on your Linux system (Ubuntu)

    After the presentation on IPv6 at the first event of the Emtel Knowledge Series, and some recent discussion on social media networks with other geeks and Linux-interested IT people here in Mauritius, I thought that I should (finally) give it a try and tweak my local network infrastructure. Honestly, I have been too busy with contractual project work and it never really occurred to me to set up IPv6 in my LAN. Well, the following paragraphs are going to shed some light on those aspects of modern computer and network technology. This is the first article in a series on IPv6 configuration:
    Configure IPv6 on your Linux system
    DHCPv6: Provide IPv6 information in your local network
    Enabling DNS for IPv6 infrastructure
    Accessing your web server via IPv6
    Piece of advice: this is based on my findings on the internet while reading other people's helpful articles and going through a couple of man pages on my local system.
    Let's embrace IPv6. The basic configuration on Linux is actually very simple, as the kernel, operating system and user-space programs support the protocol natively. If your system is ready to go for IP (aka IPv4), then you are good to go for anything else. At least, I didn't have to install any additional packages on my system(s). We are going to assign a static IPv6 address to the system. Hence, we have to modify the definition of interfaces and check whether we have an inet6 entry specified. Open your favourite text editor and check the following entries (it should be at least similar to this):
    $ sudo nano /etc/network/interfaces
    auto eth0
    # IPv4 configuration
    iface eth0 inet static
      address 192.168.1.2
      network 192.168.1.0
      netmask 255.255.255.0
      broadcast 192.168.1.255
    # IPv6 configuration
    iface eth0 inet6 static
      pre-up modprobe ipv6
      address 2001:db8:bad:a55::2
      netmask 64
    Of course, you might have to adjust your interface device (eth0), or you might want multiple directives for additional devices (eth1, eth2, etc.). The auto instruction takes care that your device is enabled and configured during the boot phase. The use of the pre-up directive depends on your kernel configuration, but in most scenarios it is an optional line. Anyway, it doesn't hurt to have it, just to be on the safe side. Next, either restart your network subsystem like so:
    $ sudo service networking restart
    Or you might prefer to do it manually with identical parameters, like so:
    $ sudo ifconfig eth0 inet6 add 2001:db8:bad:a55::2/64
    In case you are logged in to your PC remotely (i.e. via SSH), it is highly advisable to opt for the second choice and add the address manually. You can check your configuration afterwards with one of the following commands (depending on which is installed):
    $ sudo ifconfig eth0
    eth0      Link encap:Ethernet  HWaddr 00:21:5a:50:d7:94
              inet addr:192.168.160.2  Bcast:192.168.160.255  Mask:255.255.255.0
              inet6 addr: fe80::221:5aff:fe50:d794/64 Scope:Link
              inet6 addr: 2001:db8:bad:a55::2/64 Scope:Global
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
    $ sudo ip -6 address show eth0
    3: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qlen 1000
        inet6 2001:db8:bad:a55::2/64 scope global
           valid_lft forever preferred_lft forever
        inet6 fe80::221:5aff:fe50:d794/64 scope link
           valid_lft forever preferred_lft forever
    In both cases, the output confirms that our network device has been assigned a valid IPv6 address. That's it in general for your setup on one system.
But of course, you might be interested to enable more services for IPv6, especially if you're already running a couple of them in your IP network. More details are available on the official Ubuntu Wiki. Continue to configure your network to provide IPv6 address information automatically in your local infrastructure.
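    For completeness, here is a short sketch of the equivalent iproute2 commands plus a quick reachability test; the address and interface are the ones used above, and fe80::1 is only a placeholder for your router's link-local address:
    $ sudo ip -6 addr add 2001:db8:bad:a55::2/64 dev eth0    # same effect as the ifconfig line above
    $ ip -6 route show dev eth0                              # the connected /64 route should be listed
    $ ping6 -c 4 fe80::1%eth0                                # replace with a real neighbour or gateway address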

    Read the article

  • jQuery::Scrollable Div does not work

    - by Legend
    I am trying to create a scrollable DIV using the following: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html> <head> <title>Test</title> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/> <script type="text/javascript" src="lib/jquery/jquery-1.3.2.js"></script> <style type="text/css"> div.container { overflow:hidden; width:200px; height:200px; } div.content { position:relative; top:0; } </style> <script type="text/javascript"> $(function(){ $(".container a.up").bind("click", function(){ var topVal = $(this).parents(".container").find(".content").css("top"); $(this).parents(".container").find(".content").css("top", topVal-10); }); $(".container a.dn").bind("click", function(){ var topVal = $(this).parents(".container").find(".content").css("top"); $(this).parents(".container").find(".content").css("top", topVal+10); }); }); </script> </head> <body> <div class="container"> <p> <a href="#" class="up">Up</a> / <a href="#" class="dn">Down</a> </p> <div class="content"> <p>Hello World 1</p> <p>Hello World 2</p> <p>Hello World 3</p> <p>Hello World 4</p> <p>Hello World 5</p> <p>Hello World 6</p> <p>Hello World 7</p> <p>Hello World 8</p> <p>Hello World 9</p> <p>Hello World 10</p> <p>Hello World 10</p> <p>Hello World 11</p> <p>Hello World 12</p> <p>Hello World 13</p> <p>Hello World 14</p> <p>Hello World 15</p> <p>Hello World 16</p> <p>Hello World 17</p> <p>Hello World 18</p> <p>Hello World 19</p> <p>Hello World 20</p> <p>Hello World 21</p> <p>Hello World 22</p> <p>Hello World 23</p> <p>Hello World 24</p> <p>Hello World 25</p> <p>Hello World 26</p> <p>Hello World 27</p> </div> </div> </body> </html> I don't know where I am messing up, but it simply refuses to work. Any suggestions?
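    One likely culprit, offered as a hedged guess rather than a confirmed diagnosis: jQuery's .css("top") returns a string such as "0px", so topVal-10 evaluates to NaN and topVal+10 concatenates to "0px10", and the top property never receives a usable value. Below is a minimal sketch of the click handlers with the pixel value parsed first, using the same markup and classes as above:
    $(function () {
        function scrollContent(container, delta) {
            var content = container.find(".content");
            var top = parseInt(content.css("top"), 10) || 0;   // "0px" -> 0, "auto" -> 0
            content.css("top", (top + delta) + "px");
        }
        $(".container a.up").bind("click", function (e) {
            e.preventDefault();                                // keep href="#" from jumping to the top
            scrollContent($(this).parents(".container"), -10);
        });
        $(".container a.dn").bind("click", function (e) {
            e.preventDefault();
            scrollContent($(this).parents(".container"), 10);
        });
    });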

    Read the article

  • SQL SERVER – Not Possible – Delete From Multiple Table – Update Multiple Table in Single Statement

    - by pinaldave
    There are two questions which I get every single day, multiple times. In my Gmail, I have created a standard canned reply for them. Let us see the questions here: I want to delete from multiple tables in a single statement, how will I do it? I want to update multiple tables in a single statement, how will I do it? The answer is: no, you cannot and you should not. SQL Server does not support deleting or updating two tables in a single statement. If you want to delete or update two different tables, you may want to write two different delete or update statements for it. Trying to do it in one statement raises many issues, from the consistency of the data to the SQL syntax itself. Now here is the real reason for this blog post: yesterday I was asked this question again and I replied with my canned answer saying it is not possible and should not be attempted anyway. In response to my reply I was pointed to my own blog posts, where the user suggested that I had previously said this is possible, complete with a demo example. Let us go over my conversation; you may find it interesting. Let us call the user DJ.
    DJ: Pinal, can we delete from multiple tables in a single statement, or with a single delete statement?
    Pinal: No, you cannot and you should not.
    DJ: Oh okay, if that is the case, why do you suggest doing it?
    Pinal: (baffled) I am not suggesting that. I am rather suggesting that it is not possible and it should not be possible.
    DJ: Hmm… but in that case why did you blog about it earlier?
    Pinal: (What?) No, I did not. I am pretty confident.
    DJ: Well, I am confident as well. You did.
    Pinal: In that case, it is my word against yours, isn't it?
    DJ: I have proof. Do you want to see it, that you suggest it is possible?
    Pinal: Yes, I will be delighted to.
    (After 10 minutes)
    DJ: Here are not one but two of your blog posts which talk about it - SQL SERVER – Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – Part 1 of 2, and SQL SERVER – Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – T-SQL Example – Part 2 of 2.
    Pinal: Oh!
    DJ: I know I was correct.
    Pinal: Well, oh man, I did not mean there what you mean here.
    DJ: I did not understand; can you explain it further?
    Pinal: Here we go. The examples in those posts are examples of cascading deletes and cascading updates. I think you may want to understand the concept of foreign keys and cascading updates/deletes. The concept of cascading exists to maintain data integrity. If the primary key rows get deleted or updated, the delete or update is reflected in the foreign key table to maintain key integrity and data consistency. SQL Server follows ANSI Entry SQL with regard to referential integrity between primary key and foreign key columns, which requires the inserting, updating, and deleting of data in related tables to be restricted to values that preserve the integrity. This is an altogether different concept from deleting rows from multiple tables in a single statement. When I hear that someone wants to delete or update multiple tables in a single statement, what I assume is something very similar to the following: DELETE/UPDATE Table1 (cols) Table2 (cols) VALUES … which is not a valid statement or syntax, nor is it part of the ANSI standard.
    I guess, after this discussion with DJ, I realize I need to do a blog post so I can add the link to this blog post in my canned answer. Well, it was a fun conversation with DJ and I hope the message is very clear now.
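    To make the canned answer concrete, here is a minimal sketch of the supported pattern: two separate DELETE statements wrapped in one transaction so they either both succeed or both roll back. The table and column names are purely illustrative.
    BEGIN TRANSACTION;
        -- delete the child rows first so no foreign key reference is left dangling
        DELETE FROM dbo.OrderDetails WHERE OrderID = 42;
        -- then delete the parent row in its own statement
        DELETE FROM dbo.Orders WHERE OrderID = 42;
    COMMIT TRANSACTION;
    The same pattern applies to UPDATE: one statement per table, grouped in a transaction when the changes must stay consistent.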
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Joins, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • How do I effectively fake a div's background color using an image in the body element?

    - by janoChen
    I want to get something like the following: The dark grey is the sidebar but I want to apply that color into the body element as an image which repeats itself vertically but at the same time doesn't cover the footer (light gray). (this is the easiest way I found to stretch the color (dark gray) until the bottom.) Part of my CSS: body { color: #888; font-family: Arial, "MS Trebuchet", sans-serif; font-size: 75% } .container { margin: 0 auto; overflow: hidden; padding: 0 15px; width: 960px; } /* header */ #header { background: #444; } /* banner */ #header-top { overflow: hidden; height: 77px; width: 960px; /* ie6 hack */ } #lang { float: right; padding: 50px 0 0 0; } /* work */ #content { background: #EEE; } #content a { border-bottom: 0; } #mainbar { overflow: hidden; margin: 0 10px 0 0; width: 644; float: left; } #sidebar { background: #DDD; color: #777; overflow: hidden; margin: 20px 0 10px 0; padding: 15px; width: 240px; float: right; } #sidebar h3 { color: #888; } #about { margin: 0 0 20px; } /* footer */ #footer { color: #777; background: #DDD; clear: both; } /* contact */ #footer-top { line-height: 160%; overflow: hidden; padding: 30px 0; width: 960px; /* ie6 hack */ } #footer-bottom { font-size: 10px; margin: 15px auto; } Part of my HTML: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd"> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> <meta http-equiv="X-UA-Compatible" content="IE=EmulateIE7"/> <title>Alex Chen - Web Development, Graphic Design, and Translation</title> <link rel="stylesheet" type="text/css" href="styles/global.css" /> </head> <body id="home"> <div id="header"> <div class="container"> <div id="header-top"> </div> </div><!-- .container --> </div><!-- #header --> <div id="content"> <div class="container"> <div id="mainbar"> </div> <!-- #mainbar--> <div id=sidebar> </div> <!-- #sidebar --> </div><!-- .container --> </div><!-- #content --> <div id="footer"> <div class="container"> <div id="footer-top"> </div><!-- #footer-top --> <div id="footer-bottom"> </div> </div><!-- .container --> </div><!-- #footer --> </body> </html>
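    One common way to get that effect is the classic "faux columns" trick; the following is only a sketch, and the image file name is an assumption rather than something from the original markup. Make a 1px-tall image as wide as the 960px layout, with the dark grey band only where the sidebar sits, and repeat it vertically on #content, the wrapper that runs from the header down to the footer. Because #header and #footer paint their own opaque backgrounds, the stripe appears to stop exactly at the footer.
    /* sketch: sidebar-bg.png is an assumed 960px-wide, 1px-tall strip (#EEE with a dark band under the sidebar) */
    #content {
        background: #EEE url(images/sidebar-bg.png) repeat-y center top;
    }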

    Read the article

  • Reference Data Management and Master Data: Are They Related?

    - by Mala Narasimharajan
    Submitted By: Rahul Kamath. Oracle Data Relationship Management (DRM) has always been extremely powerful as an Enterprise Master Data Management (MDM) solution that can help manage changes to master data in a way that influences enterprise structure, whether it be mastering the chart of accounts to enable financial transformation, or revamping organization structures to drive business transformation and operational efficiencies, or restructuring sales territories to enable equitable distribution of leads to sales teams following the acquisition of new products, or adding additional cost centers to enable fine-grained control over expenses. Increasingly, DRM is also being utilized by Oracle customers for reference data management, an emerging solution space that deserves some explanation. What is reference data? How does it relate to master data? Reference data is a close cousin of master data. While master data is challenged with problems of unique identification, may be more rapidly changing, requires consensus building across stakeholders and lends structure to business transactions, reference data is simpler, more slowly changing, but has semantic content that is used to categorize or group other information assets, including master data, and gives them contextual value. In fact, the creation of a new master data element may require new reference data to be created. For example, when a European company acquires a US business, chances are that they will now need to adapt their product line taxonomy to include a new category to describe the newly acquired US product line. Further, the cross-border transaction will also result in a revised geo hierarchy. The addition of new products represents changes to master data, while changes to product categories and geo hierarchy are examples of reference data changes.1 The following list contains illustrative examples of reference data by type. Reference data types may include types and codes, business taxonomies, complex relationships and cross-domain mappings, or standards.
    Types & Codes: Transaction Codes; Lookup Tables (e.g., Gender, Marital Status, etc.); Status Codes; Role Codes; Domain Values.
    Taxonomies: Industry Classification Categories and Codes, e.g., North American Industry Classification System (NAICS); Product Categories; Sales Territories (e.g., Geo, Industry Verticals, Named Accounts, Federal/State/Local/Defense); Market Segments; Universal Standard Products and Services Classification (UNSPSC), eCl@ss.
    Relationships / Mappings: Product / Segment; Product / Geo; City → State → Postal Codes; Customer / Market Segment; Business Unit / Channel; Country Codes / Currency Codes / Financial Accounts; International Classification of Diseases (ICD), e.g., ICD-9 → ICD-10 mappings.
    Standards: Calendars (e.g., Gregorian, Fiscal, Manufacturing, Retail, ISO 8601); Currency Codes (e.g., ISO); Country Codes (e.g., ISO 3166, UN); Date/Time, Time Zones (e.g., ISO 8601); Tax Rates.
    Why manage reference data? Reference data carries contextual value and meaning, and therefore its use can drive business logic that helps execute a business process, create a desired application behavior or provide meaningful segmentation to analyze transaction data. Further, mapping reference data often requires human judgment. Sample Use Cases of Reference Data Management. Healthcare: Diagnostic Codes. The reference data challenges in the healthcare industry offer a case in point. 
Part of being HIPAA compliant requires medical practitioners to transition diagnosis codes from ICD-9 to ICD-10, a medical coding scheme used to classify diseases, signs and symptoms, causes, etc. The transition to ICD-10 has a significant impact on business processes, procedures, contracts, and IT systems. Since both code sets ICD-9 and ICD-10 offer diagnosis codes of very different levels of granularity, human judgment is required to map ICD-9 codes to ICD-10. The process requires collaboration and consensus building among stakeholders much in the same way as does master data management. Moreover, to build reports to understand utilization, frequency and quality of diagnoses, medical practitioners may need to “cross-walk” mappings -- either forward to ICD-10 or backwards to ICD-9 depending upon the reporting time horizon. Spend Management: Product, Service & Supplier Codes Similarly, as an enterprise looks to rationalize suppliers and leverage their spend, conforming supplier codes, as well as product and service codes requires supporting multiple classification schemes that may include industry standards (e.g., UNSPSC, eCl@ss) or enterprise taxonomies. Aberdeen Group estimates that 90% of companies rely on spreadsheets and manual reviews to aggregate, classify and analyze spend data, and that data management activities account for 12-15% of the sourcing cycle and consume 30-50% of a commodity manager’s time. Creating a common map across the extended enterprise to rationalize codes across procurement, accounts payable, general ledger, credit card, procurement card (P-card) as well as ACH and bank systems can cut sourcing costs, improve compliance, lower inventory stock, and free up talent to focus on value added tasks. Change Management: Point of Sales Transaction Codes and Product Codes In the specialty finance industry, enterprises are confronted with usury laws – governed at the state and local level – that regulate financial product innovation as it relates to consumer loans, check cashing and pawn lending. To comply, it is important to demonstrate that transactions booked at the point of sale are posted against valid product codes that were on offer at the time of booking the sale. Since new products are being released at a steady stream, it is important to ensure timely and accurate mapping of point-of-sale transaction codes with the appropriate product and GL codes to comply with the changing regulations. Multi-National Companies: Industry Classification Schemes As companies grow and expand across geographies, a typical challenge they encounter with reference data represents reconciling various versions of industry classification schemes in use across nations. While the United States, Mexico and Canada conform to the North American Industry Classification System (NAICS) standard, European Union countries choose different variants of the NACE industry classification scheme. Multi-national companies must manage the individual national NACE schemes and reconcile the differences across countries. Enterprises must invest in a reference data change management application to address the challenge of distributing reference data changes to downstream applications and assess which applications were impacted by a given change. References 1 Master Data versus Reference Data, Malcolm Chisholm, April 1, 2006.

    Read the article

  • WIF, ADFS 2 and WCF – Part 5: Service Client (more Flexibility with WSTrustChannelFactory)

    - by Your DisplayName here!
    See the previous posts first. WIF includes an API to manually request tokens from a token service. This gives you more control over the request and more flexibility since you can use your own token caching scheme instead of being bound to the channel object lifetime. The API is straightforward. You first request a token from the STS and then use that token to create a channel to the relying party service. I’d recommend using the WS-Trust bindings that ship with WIF to talk to ADFS 2 – they are pre-configured to match the binding configuration of the ADFS 2 endpoints. The following code requests a token for a WCF service from ADFS 2: private static SecurityToken GetToken() {     // Windows authentication over transport security     var factory = new WSTrustChannelFactory(         new WindowsWSTrustBinding(SecurityMode.Transport),         stsEndpoint);     factory.TrustVersion = TrustVersion.WSTrust13;       var rst = new RequestSecurityToken     {         RequestType = RequestTypes.Issue,         AppliesTo = new EndpointAddress(svcEndpoint),         KeyType = KeyTypes.Symmetric     };       var channel = factory.CreateChannel();     return channel.Issue(rst); } Afterwards, the returned token can be used to create a channel to the service. Again WIF has some helper methods here that make this very easy: private static void CallService(SecurityToken token) {     // create binding and turn off sessions     var binding = new WS2007FederationHttpBinding(         WSFederationHttpSecurityMode.TransportWithMessageCredential);     binding.Security.Message.EstablishSecurityContext = false;       // create factory and enable WIF plumbing     var factory = new ChannelFactory<IService>(binding, new EndpointAddress(svcEndpoint));     factory.ConfigureChannelFactory<IService>();       // turn off CardSpace - we already have the token     factory.Credentials.SupportInteractive = false;       var channel = factory.CreateChannelWithIssuedToken<IService>(token);       channel.GetClaims().ForEach(c =>         Console.WriteLine("{0}\n {1}\n  {2} ({3})\n",             c.ClaimType,             c.Value,             c.Issuer,             c.OriginalIssuer)); } Why is this approach more flexible? Well – some don’t like the configuration voodoo. That’s a valid reason for using the manual approach. You also get more control over the token request itself since you have full control over the RST message that gets send to the STS. One common parameter that you may want to set yourself is the appliesTo value. When you use the automatic token support in the WCF federation binding, the appliesTo is always the physical service address. This means in turn that this address will be used as the audience URI value in the SAML token. Well – this in turn means that when you have an application that consists of multiple services, you always have to configure all physical endpoint URLs in ADFS 2 and in the WIF configuration of the service(s). Having control over the appliesTo allows you to use more symbolic realm names, e.g. the base address or a completely logical name. Since the URL is never de-referenced you have some degree of freedom here. In the next post we will look at the necessary code to request multiple tokens in a call chain. This is a common scenario when you first have to acquire a token from an identity provider and have to send that on to a federation gateway or Resource STS. Stay tuned.
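    As a small illustration of that last point about symbolic realms, here is a sketch of the token request with a logical appliesTo value. The URN below is an arbitrary example, not something ADFS 2 provides by default, so whatever value you choose has to be registered as the relying party identifier in ADFS 2 and as the audience URI in the service's WIF configuration:
    var rst = new RequestSecurityToken
    {
        RequestType = RequestTypes.Issue,
        // logical realm instead of the physical service address (illustrative value)
        AppliesTo = new EndpointAddress("urn:myapplication"),
        KeyType = KeyTypes.Symmetric
    };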

    Read the article

  • IE / Facebook Issue: Why does the Facebook Like box not display in Internet Explorer 6 - IE8?

    - by Vaibhav Bhalke
    Why does the Facebook Like box not display in Internet Explorer 6 - IE8? The Facebook Like box displays through my web application in every browser except IE6 - IE8. The final Application.html file contains:
    <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/DTD/strict.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml" xmlns:fb="http://www.facebook.com/2008/fbml">
    <head>
    <meta http-equiv="X-UA-Compatible" content="IE=EmulateIE7" />
    </head>
    <body>
    <script type="text/javascript" language="javascript" src="http://static.ak.connect.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php"></script>
    <script type="text/javascript"> FB_RequireFeatures(["Connect"], function(){ var x=1; } ); </script>
    <script src="http://static.ak.connect.facebook.com/connect.php/en_US" type="text/javascript"></script>
    </body>
    </html>
    My Java code for the Like box is as follows. FBPageFanWidget.java:
    class FBPageFanWidget extends Composite {
        public FBPageFanWidget() {
            VerticalPanel mainPanel = new VerticalPanel();
            mainPanel.getElement().setInnerHTML(
                "<script type='text/javascript' src='http://static.ak.connect.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php/en_US'></script><script type='text/javascript'>FB.init('');</script><fb:fan profile_id=\"113106068709539\" stream=\"0\" connections=\"10\" logobar=\"0\" width=\"244\" height=\"240\" css='http://127.0.01:8080/webapplicationname/facebook.css?1'></fb:fan>");
            initWidget(mainPanel);
        }
    }
    We used the proper Facebook API_KEY and PAGE_ID. It is very important for us to show the Facebook Like box in our web application because we have many IE users. If we add FBPageFanWidget.java to our web application, our home page does not display in IE because of the added Like box, so we changed FBPageFanWidget.java as follows:
    class FBPageFanWidget extends Composite {
        public FBPageFanWidget() {
            VerticalPanel mainPanel = new VerticalPanel();
            if (!isIE()) {
                mainPanel.getElement().setInnerHTML("<script type='text/javascript' src='http://static.ak.connect.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php/en_US'></script><script type='text/javascript'>FB.init('');</script><fb:fan profile_id=\"113106068709539\" stream=\"0\" connections=\"10\" logobar=\"0\" width=\"244\" height=\"240\" css='http://127.0.01:8080/webapplicationname/facebook.css?1'></fb:fan>");
            }
            initWidget(mainPanel);
        }
        public native String getUserAgent() /*-{ return navigator.userAgent; }-*/;
        private boolean isIE() {
            return (getUserAgent().indexOf("MSIE") > -1);
        }
    }
    With these changes, the Facebook Like box displays in every browser except IE6 - IE8, and our home page now displays in IE8, just without the Like box. Does that mean there is a problem in IE, or what changes do I need to make in my HTML or Java file to show the Facebook Like box properly while still displaying our home page? It is very important for us because we have many IE users. Hoping for your best cooperation!

    Read the article

  • CodePlex Daily Summary for Sunday, August 12, 2012

    CodePlex Daily Summary for Sunday, August 12, 2012Popular ReleasesThisismyusername's codeplex page.: Run! Thunderstorm! Classic Multiplatform: The classic edition now on any device, you don't have to get the touch edition for touch screens!SharePoint Developers & Admins: SPTaxCleaner Tool - Cleaner Taxonomy Data: For more information on the tool and how to use it: SPTaxCleaner Run the tool at your own risk!BugNET Issue Tracker: BugNET 1.1: This release includes bug fixes from the 1.0 release for email notifications, RSS feeds, and several other issues. Please see the change log for a full list of changes. http://support.bugnetproject.com/Projects/ReleaseNotes.aspx?pid=1&m=76 Upgrade Notes The following changes to the web.config in the profile section have occurred: Removed <add name="NotificationTypes" type="String" defaultValue="Email" customProviderData="NotificationTypes;nvarchar;255" />Added <add name="ReceiveEmailNotifi...Virtual Keyboard: Virtual Keyboard v2.0 Source Code: This release has a few added keys that were missing in the earlier versions.????: ????2.0.5: 1、?????????????。RiP-Ripper & PG-Ripper: PG-Ripper 1.4.01: changes NEW: Added Support for Clipboard Function in Mono Version NEW: Added Support for "ImgBox.com" links FIXED: "PixHub.eu" links FIXED: "ImgChili.com" links FIXED: Kitty-Kats Forum loginPlayer Framework by Microsoft: Player Framework for Windows 8 (Preview 5): Support for Smooth Streaming SDK beta 2 Support for live playback New bitrate meter and SD/HD indicators Auto smooth streaming track restriction for snapped mode to conserve bandwidth New "Go Live" button and SeekToLive API Support for offset start times Support for Live position unique from end time Support for multiple audio streams (smooth and progressive content) Improved intellisense in JS version Support for Windows 8 RTM ADDITIONAL DOWNLOADSSmooth Streaming Client SD...Media Companion: Media Companion 3.506b: This release includes an update to the XBMC scrapers, for those who prefer to use this method. There were a number of behind-the-scene tweaks to make Media Companion compatible with the new TMDb-V3 API, so it was considered important to get it out to the users to try it out. Please report back any important issues you might find. For this reason, unless you use the XBMC scrapers, there probably isn't any real necessity to download this one! The only other minor change was one to allow the mc...Avian Mortality Detection Entry Application: Detection Entry: The most recent and up-to-date version of the Detection Entry application. Please click on the CLICKONCE link above to install.NVorbis: NVorbis v0.3: Fix Ogg page reader to correctly handle "split" packets Fix "zero-energy" packet handling Fix packet reader to always merge packets when needed Add statistics properties to VorbisReader.Stats Add multi-stream API (for Ogg files containing multiple Vorbis streams)The Ftp Library - Most simple, full-featured .NET Ftp Library: TheFtpLibrary v1.0: First version. Please report any bug and discuss how to improve this library in 'Discussion' section.VisioAutomation: Visio Power Tools 2010 (Beta): Visio Power Tools 2010 An Add-in for Visio 2010. In Beta but should be very stable. 
Some Cool Features Import Colors - From Kuler, ColourLovers, or manually enter RGB values Create a Document containing images of all masters in multiple stencils Toggle text case Copy All text from all shapes Export document to XHTML (with embedded SVG) Export document to XAML Updates2012-08-10 - Fixed path handling when generating the HTML stencil catalogXBMC for LCDSmartie: v 0.7.1: Changes: Version 0.7.1 - Removed unused configuration options: You can remove the user and pass section from your LCDSmartie.exe.config Version 0.7.0 - Changed JSON-Protocol to TCP: This means the plugin will run much faster and chances that LCDSmartie will stutter are minimized Note: You will have to change some settings in your LCDSmartie.exe.config, as TCP uses a different port than HTTP (9090 is default in XBMC)WPF RSS Feed Reader: WPF RSS Feed Reader 4.1: WPF RSS Feed Reader 4.1.x This application has been developed using Visual Studio 2010 with .NET 4.0 and Microsoft Expression Blend. It makes use of the MVVM Light Toolkit, more information can be found here Fixed bug when first installed - handle null reference on _proxySetting Fixed bug if trying to open error log but error log doesn't exist yet Added strings to resources Moved AsssemblyInfo centrally and linked to other projectsScutex: Scutex 4th Preview Release 0.4: This is the fourth preview release for the new Scutex 0.4 Beta. This preview release is unstable and contains some UI issues that are being worked on due to an installation of a brand new GUI. The stable version of the 0.4 Beta will be out shortly as soon as this release passes QA. The following were fixed in this release *Fixed an issue with the Upload Product dialog that would cause it to throw an exception. Known Issues *The download service logs grid does not display properlySystem.Net.FtpClient: System.Net.FtpClient 2012.08.08: 2012.08.08 Release. Several changes, see commit notes in source code section. CHM help as well as source for this release are included in the download. Remember that Windows 7 by default (and possibly older versions) will block you from opening the CHM by default due to trust settings. To get around the problem, right click on the CHM, choose properties and click the Un-block button. Please note that this will be the last release tested to compile with the .net 2.0 framework. I will be remov...Isis2 Cloud Computing Library: Isis2 Alpha V1.1.967: This is an alpha pre-release of the August 2012 version of the system. I've been testing and fixing many problems and have also added a new group "lock" API (g.Lock("lock name")/g.Unlock/g.Holder/g.SetLockPolicy); with this, Isis2 can mimic Chubby, Google's locking service. I wouldn't say that the system is entirely stable yet, and I haven't rechecked every single problem I had seen in May/June, but I think it might be good to get some additional use of this release. 
By now it does seem to...JSON C# Class Generator: JSON CSharp Class Generator 1.3: Support for native JSON.net serializer/deserializer (POCO) New classes layout option: nested classes Better handling of secondary classesAxiom 3D Rendering Engine: v0.8.3376.12322: Changes Since v0.8.3102.12095 ===================================================================== Updated ndoc3 binaries to fix bug Added uninstall.ps1 to nuspec packages fixed revision component in version numbering Fixed sln referencing VS 11 Updated OpenTK Assemblies Added CultureInvarient to numeric parsing Added First Visual Studio 2010 Project Template (DirectX9) Updated SharpInputSystem Assemblies Backported fix for OpenGL Auto-created window not responding to input Fixed freeInterna...DotSpatial: DotSpatial 1.3: This is a Minor Release. See the changes in the issue tracker. Minimal -- includes DotSpatial core and essential extensions Extended -- includes debugging symbols and additional extensions Tutorials are available. Just want to run the software? End user (non-programmer) version available branded as MapWindow Want to add your own feature? Develop a plugin, using the template and contribute to the extension feed (you can also write extensions that you distribute in other ways). Components ...New ProjectsAppFabric Cache Admin Tool: Appfabric Admin tool is a GUI tool provided for Cache Administration. This tool can be used for both Appfabric 1.0 and 1.1 versions.Bootstrap XML Navigation: Bootstrap XML Navigation is an ASP.NET Server Control which provide the ability to off-load your Bootstrap navigation structure to external XML files.Classic MVC: Classic MVC is an MVC framework written entirely in JScript for Microsoft's Classic ASP. Separation of concerns, controllers, partial views and more.Component Spectrum: Component Spectrum is an E-Commerce Application project.Developments in Java: Free Develop JavaDongGPrj: DongGPrjEsShop: ???????,????。。。F# Indent Visual Studio addon: The addon to show how to use SmartIndent and change the indent behavior in F# content.FizzBuzzAssignment: This is HeadSpring assignment.hotfighter: a act game!!!iSoilHotel: It is a hotel management system that using Microsoft N-Layer Architecture as the primary thoughts.KVBL (K's VB6 library) VB6????,????????.: KVBL??VB6??????、???????,????????,??????,????VB????,??????????。????:??????、???、??、????、??、????、??、??、?????、?????、???????,???????……Meteorite: c# game 3d wp7 space ship meteoriteMetro App Helpers: Helper classes for building Metro style app on Windows 8.MobileShop: Mobile shop for Iphone & AndroidMomentum: this is q project pqrticipqtion between meir and david concerning student university project on physics.OneLine: dfgfdgPreflight: Summary HereResilient FileSystemWatcher for monitoring network folders: Drop in replacement for FileSystemWatcher that works for network folders and compensate for network outages, server restarts ect.SkyPoint: A Windows Phone 7 app aimed at novice astronomers.test20120811: summary of this projectVS Unbind Source Control: Remove Source Control bindings from Visual Studio Solution and Project FilesWcf SWA (SFTI) implementation: First releasewebstart: new web tech testwinform_framework: winform_frameworkwtopenapi: wtopen sina weibo sdk txweibosdk qzone sdk

    Read the article

  • What Keeps You from Changing Your Public IP Address and Wreaking Havoc on the Internet?

    - by Jason Fitzpatrick
    What exactly is preventing you (or anyone else) from changing their IP address and causing all sorts of headaches for ISPs and other Internet users? Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites. The Question SuperUser reader Whitemage is curious about what’s preventing him from wantonly changing his IP address and causing trouble: An interesting question was asked of me and I did not know what to answer. So I’ll ask here. Let’s say I subscribed to an ISP and I’m using cable internet access. The ISP gives me a public IP address of 60.61.62.63. What keeps me from changing this IP address to, let’s say, 60.61.62.75, and messing with another consumer’s internet access? For the sake of this argument, let’s say that this other IP address is also owned by the same ISP. Also, let’s assume that it’s possible for me to go into the cable modem settings and manually change the IP address. Under a business contract where you are allocated static addresses, you are also assigned a default gateway, a network address and a broadcast address. So that’s 3 addresses the ISP “loses” to you. That seems very wasteful for dynamically assigned IP addresses, which the majority of customers are. Could they simply be using static arps? ACLs? Other simple mechanisms? Two things to investigate here, why can’t we just go around changing our addresses, and is the assignment process as wasteful as it seems? The Answer SuperUser contributor Moses offers some insight: Cable modems aren’t like your home router (ie. they don’t have a web interface with simple point-and-click buttons that any kid can “hack” into). Cable modems are “looked up” and located by their MAC address by the ISP, and are typically accessed by technicians using proprietary software that only they have access to, that only runs on their servers, and therefore can’t really be stolen. Cable modems also authenticate and cross-check settings with the ISPs servers. The server has to tell the modem whether it’s settings (and location on the cable network) are valid, and simply sets it to what the ISP has it set it for (bandwidth, DHCP allocations, etc). For instance, when you tell your ISP “I would like a static IP, please.”, they allocate one to the modem through their servers, and the modem allows you to use that IP. Same with bandwidth changes, for instance. To do what you are suggesting, you would likely have to break into the servers at the ISP and change what it has set up for your modem. Could they simply be using static arps? ACLs? Other simple mechanisms? Every ISP is different, both in practice and how close they are with the larger network that is providing service to them. Depending on those factors, they could be using a combination of ACL and static ARP. It also depends on the technology in the cable network itself. The ISP I worked for used some form of ACL, but that knowledge was a little beyond my paygrade. I only got to work with the technician’s interface and do routine maintenance and service changes. What keeps me from changing this IP address to, let’s say, 60.61.62.75 and mess with another consumer’s internet access? Given the above, what keeps you from changing your IP to one that your ISP hasn’t specifically given to you is a server that is instructing your modem what it can and can’t do. 
Even if you somehow broke into the modem, if 60.61.62.75 is already allocated to another customer, then the server will simply tell your modem that it can’t have it. David Schwartz offers some additional insight with a link to a white paper for the really curious: Most modern ISPs (last 13 years or so) will not accept traffic from a customer connection with a source IP address they would not route to that customer were it the destination IP address. This is called “reverse path forwarding”. See BCP 38. Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • Database Owner Conundrum

    - by Johnm
    Have you ever restored a database from a production environment on Server A into a development environment on Server B and had some items, such as Service Broker, mysteriously cease functioning? You might want to consider reviewing the database owner property of the database. The Scenario Recently, I was developing some messaging functionality that utilized the Service Broker feature of SQL Server in a development environment. Within the instance of the development environment resided two databases: One was a restored version of a production database that we will call "RestoreDB". The second database was a brand new database that has yet to exist in the production environment that we will call "DevDB". The goal is to setup a communication path between RestoreDB and DevDB that will later be implemented into the production database. After implementing all of the Service Broker objects that are required to communicate within a database as well as between two databases on the same instance I found myself a bit confounded. My testing was showing that the communication was successful when it was occurring internally within DevDB; but the communication between RestoreDB and DevDB did not appear to be working. Profiler to the rescue After carefully reviewing my code for any misspellings, missing commas or any other minor items that might be a syntactical cause of failure, I decided to launch Profiler to aid in the troubleshooting. After simulating the cross database messaging, I noticed the following error appearing in Profiler: An exception occurred while enqueueing a message in the target queue. Error: 33009, State: 2. The database owner SID recorded in the master database differs from the database owner SID recorded in database '[Database Name Here]'. You should correct this situation by resetting the owner of database '[Database Name Here]' using the ALTER AUTHORIZATION statement. Now, this error message is a helpful one. Not only does it identify the issue in plain language, it also provides a potential solution. An execution of the following query that utilizes the catalog view sys.transmission_queue revealed the same error message for each communication attempt: SELECT     * FROM        sys.transmission_queue; Seeing the situation as a learning opportunity I dove a bit deeper. Reviewing the database properties  The owner of a specific database can be easily viewed by right-clicking the database in SQL Server Management Studio and selecting the "properties" option. The owner is listed on the "General" page of the properties screen. In my scenario, the database in the production server was created by Frank the DBA; therefore his server login appeared as the owner: "ServerName\Frank". While this is interesting information, it certainly doesn't tell me much in regard to the SID (security identifier) and its existence, or lack thereof, in the master database as the error suggested. I pulled together the following query to gather more interesting information: SELECT     a.name     , a.owner_sid     , b.sid     , b.name     , b.type_desc FROM        master.sys.databases a     LEFT OUTER JOIN master.sys.server_principals b         ON a.owner_sid = b.sid WHERE     a.name not in ('master','tempdb','model','msdb'); This query also helped identify how many other user databases in the instance were experiencing the same issue. In this scenario, I saw that there were no matching SIDs in server_principals to the owner SID for my database. 
What login should be used as the database owner instead of Frank's? The system stored procedure sp_helplogins will provide a list of the valid logins that can be used. Here is an example of its use, revealing all available logins: EXEC sp_helplogins;  Fixing a hole The error message stated that the recommended solution was to execute the ALTER AUTHORIZATION statement. The full statement for this scenario would appear as follows: ALTER AUTHORIZATION ON DATABASE:: [Database Name Here] TO [Login Name]; Another option is to execute the following statement using the sp_changedbowner system stored procedure; but please keep in mind that this stored procedure has been deprecated and will likely disappear in future versions of SQL Server: EXEC dbo.sp_changedbowner @loginname = [Login Name]; .And They Lived Happily Ever After Upon changing the database owner to an existing login and simulating the inner and cross database messaging the errors have ceased. More importantly, all messages sent through this feature now successfully complete their journey. I have added the ownership change to my restoration script for the development environment.
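    For instance, here is a quick sketch with illustrative names (RestoreDB is the restored database from the scenario above; pick any login that sp_helplogins lists as valid on the development instance):
    ALTER AUTHORIZATION ON DATABASE::RestoreDB TO [sa];
    -- re-run the earlier check; owner_sid should now match a row in sys.server_principals
    SELECT a.name, b.name AS owner_login
    FROM master.sys.databases a
    LEFT OUTER JOIN master.sys.server_principals b ON a.owner_sid = b.sid
    WHERE a.name = 'RestoreDB';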

    Read the article

  • Swap partition not recognized (The disk drive with UUID=... is not ready yet or not present)

    - by ladaghini
    I think I had an encrypted swap partition, because I chose to encrypt my home directory during the installation. I believe that's what the line with /dev/mapper/cryptswap1 ... in my /etc/fstab is all about. I did something to bork my swap because on the next boot, I got a message (paraphrased): The disk drive for /dev/mapper/cryptswap1 is not ready yet or not present. Wait to continue. Press S to skip or M to manually recover. (As a side note, pressing S or M seemed to do nothing different from just waiting.) Here's what I've tried: This tutorial on how to fix the swap partition not mounting. However, in the above, the mkswap command fails because the device is busy. So I booted from a live USB, ran GParted to reformat the swap partition (which showed up as an unknown fs type), and chrooted into the broken system to try that tutorial again. I also adjusted /etc/initramfs-tools/conf.d/resume and /etc/fstab to reflect the new UUID generated from formatting the partition as a swap. That still didn't work; instead of /dev/mapper/cryptswap1 not present, "The disk drive with UUID=[swap partition's UUID] is not ready yet or not present..." So I decided to start afresh as though I never had created a swap partition in the first place. From the Live USB, I deleted the swap partition altogether (which, again showed up in GParted as an unknown fs type), removed the swap and cryptswap entries in /etc/fstab as well as removed /etc/initramfs-tools/conf.d/resume and /etc/crypttab. At this point the main system shouldn't be considered broken because there is no swap partition and no instructions to mount one, right? I didn't get any errors during startup. I followed the same instructions to create and encrypt the swap partition, starting with creating a partition for the swap, though I think fdisk said a reboot was necessary to see changes. I was confident the 3rd process above would work, but the problem yet persists. Some relevant info (/dev/sda8 is the swap partition): /etc/fstab file: # /etc/fstab: static file system information. # # Use 'blkid' to print the universally unique identifier for a # device; this may be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5). 
# # <file system> <mount point> <type> <options> <dump> <pass> proc /proc proc nodev,noexec,nosuid 0 0 # / was on /dev/sda6 during installation UUID=4c11e82c-5fe9-49d5-92d9-cdaa6865c991 / ext4 errors=remount-ro 0 1 # /boot was on /dev/sda5 during installation UUID=4031413e-e89f-49a9-b54c-e887286bb15e /boot ext4 defaults 0 2 # /home was on /dev/sda7 during installation UUID=d5bbfc6f-482a-464e-9f26-fd213230ae84 /home ext4 defaults 0 2 # swap was on /dev/sda8 during installation UUID=5da2c720-8787-4332-9317-7d96cf1e9b80 none swap sw 0 0 /dev/mapper/cryptswap1 none swap sw 0 0 output of sudo mount: /dev/sda6 on / type ext4 (rw,errors=remount-ro) proc on /proc type proc (rw,noexec,nosuid,nodev) sysfs on /sys type sysfs (rw,noexec,nosuid,nodev) none on /sys/fs/fuse/connections type fusectl (rw) none on /sys/kernel/debug type debugfs (rw) none on /sys/kernel/security type securityfs (rw) udev on /dev type devtmpfs (rw,mode=0755) devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620) tmpfs on /run type tmpfs (rw,noexec,nosuid,size=10%,mode=0755) none on /run/lock type tmpfs (rw,noexec,nosuid,nodev,size=5242880) none on /run/shm type tmpfs (rw,nosuid,nodev) /dev/sda5 on /boot type ext4 (rw) /dev/sda7 on /home type ext4 (rw) /home/undisclosed/.Private on /home/undisclosed type ecryptfs (ecryptfs_check_dev_ruid,ecryptfs_cipher=aes,ecryptfs_key_bytes=16,ecryptfs_unlink_sigs,ecryptfs_sig=cbae1771abd34009,ecryptfs_fnek_sig=7cefe2f59aab8e58) gvfs-fuse-daemon on /home/undisclosed/.gvfs type fuse.gvfs-fuse-daemon (rw,nosuid,nodev,user=undisclosed) output of sudo blkid (note that /dev/sda8 is missing): /dev/sda1: LABEL="SYSTEM" UUID="960490E80490CC9D" TYPE="ntfs" /dev/sda2: UUID="D4043140043126C0" TYPE="ntfs" /dev/sda3: LABEL="Shared" UUID="80F613E1F613D5EE" TYPE="ntfs" /dev/sda5: UUID="4031413e-e89f-49a9-b54c-e887286bb15e" TYPE="ext4" /dev/sda6: UUID="4c11e82c-5fe9-49d5-92d9-cdaa6865c991" TYPE="ext4" /dev/sda7: UUID="d5bbfc6f-482a-464e-9f26-fd213230ae84" TYPE="ext4" /dev/mapper/cryptswap1: UUID="41fa147a-3e2c-4e61-b29b-3f240fffbba0" TYPE="swap" output of sudo fdisk -l: Disk /dev/mapper/cryptswap1 doesn't contain a valid partition table Disk /dev/sda: 320.1 GB, 320072933376 bytes 255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xdec3fed2 Device Boot Start End Blocks Id System /dev/sda1 * 2048 409599 203776 7 HPFS/NTFS/exFAT /dev/sda2 409600 210135039 104862720 7 HPFS/NTFS/exFAT /dev/sda3 210135040 415422463 102643712 7 HPFS/NTFS/exFAT /dev/sda4 415424510 625141759 104858625 5 Extended /dev/sda5 415424512 415922175 248832 83 Linux /dev/sda6 415924224 515921919 49998848 83 Linux /dev/sda7 515923968 621389823 52732928 83 Linux /dev/sda8 621391872 625141759 1874944 82 Linux swap / Solaris Disk /dev/mapper/cryptswap1: 1919 MB, 1919942656 bytes 255 heads, 63 sectors/track, 233 cylinders, total 3749888 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xaf5321b5 /etc/initramfs-tools/conf.d/resume file: RESUME=UUID=5da2c720-8787-4332-9317-7d96cf1e9b80 /etc/crypttab file: cryptswap1 /dev/sda8 /dev/urandom swap,cipher=aes-cbc-essiv:sha256 output of sudo swapon -as: Filename Type Size Used Priority /dev/mapper/cryptswap1 partition 1874940 0 -1 output of sudo free -m: total used free shared buffers cached Mem: 
1476 1296 179 0 35 671 -/+ buffers/cache: 590 886 Swap: 1830 0 1830 So, how can this be fixed?
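    One plausible reading of the outputs above, offered as an assumption rather than a confirmed diagnosis: /etc/crypttab re-encrypts /dev/sda8 with a random key on every boot, so the plain swap signature (and its UUID 5da2c720-...) is wiped, which is why blkid no longer lists /dev/sda8 and why boot stalls waiting for that UUID, while the mapper device itself works, as swapon -s shows. The usual cleanup is to drop every reference to the stale UUID and keep only the mapper entry:
    # /etc/fstab: keep only this swap line and delete the "UUID=5da2c720-... none swap sw 0 0" line
    /dev/mapper/cryptswap1 none swap sw 0 0
    # hibernation cannot resume from randomly keyed swap, so remove the stale resume hint and rebuild the initramfs
    $ sudo rm /etc/initramfs-tools/conf.d/resume
    $ sudo update-initramfs -u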

    Read the article

< Previous Page | 280 281 282 283 284 285 286 287 288 289 290 291  | Next Page >