Search Results


  • DXVA and ATI.... what am I missing?

    - by Shiki
    ATI Radeon HD3650; ffdshow (both the build that comes with CCCP 2010-10 and the latest from the Free-codecs site); Catalyst 10.12; latest DirectX; Windows 7 HP x64.

    If I try to play it simply, the player (MPC-HC) won't use the acceleration. If I disable the checks, I get artifacts. Back then it worked perfectly with both my Intel and ATI (Lenovo ThinkPad T500, switchable graphics). This only appeared recently. What should I do? What am I missing? If possible, I don't really want to downgrade the drivers (waaa, ATI drivers s.ck... sorry, but... honestly), because the older one has a buggy DisplayPort driver, so my external display gets a totally wrong resolution.

    Info about the media file:

        Video
        Format : AVC
        Format/Info : Advanced Video Codec
        Format profile : [email protected]
        Codec ID : V_MPEG4/ISO/AVC
        Writing library : x264 core 88 r1462 7fcffde
        Encoding settings : cabac=1 / ref=8 / deblock=1:0:0 / analyse=0x3:0x133 / me=umh / subme=8 / psy=1 / psy_rd=0.80:0.20 / mixed_ref=1 / me_range=24 / chroma_me=1 / trellis=2 / 8x8dct=1 / cqm=0 / deadzone=21,11 / fast_pskip=1 / chroma_qp_offset=-3 / threads=3 / sliced_threads=0 / nr=0 / decimate=1 / mbaff=0 / constrained_intra=0 / bframes=8 / b_pyramid=2 / b_adapt=2 / b_bias=0 / direct=3 / wpredb=1 / wpredp=2 / keyint=250 / keyint_min=25 / scenecut=40 / intra_refresh=0 / rc_lookahead=40 / rc=crf / mbtree=1 / crf=16.0 / qcomp=0.60 / qpmin=10 / qpmax=51 / qpstep=4 / ip_ratio=1.40 / aq=1:1.00

    The sample file I'm testing at the moment: [Coalgirls]_Kiddy_Girl-and_01_(1920x1080_Blu-Ray_FLAC)_[91428679].mkv (I have tried many x264 files, to no avail.)

    (Screenshot: artifacts)

  • Code to update HyperV Export file

    - by Andy Schneider
    I am using the HyperV module from CodePlex to do a "config only" export from a 2008 R2 Hyper-V server. In order to import the configuration on another Hyper-V server, I need to edit the value of CopyVMStorage in the EXP file, which is an XML file. I wrote the following PowerShell to do the update for me; the variable $existing holds the path to the existing .exp file:

        $xml = [xml](Get-Content $existing)
        $xpath = '//PROPERTY[@NAME ="CopyVmStorage"]'
        foreach ($node in $xml.SelectNodes($xpath)) { $node.Value = 'TRUE' }
        $xml.Save($existing)

    This code makes the correct changes to the XML. However, when I go to import the file on the Hyper-V server, I get an error that says the file format is incorrect. I am wondering if the encoding of the file is incorrect or if there is something else going on. If I edit the file manually in WordPad, it imports without an issue. The filename is a GUID with a .exp extension, and it appears the filename is too long for Notepad to open; Notepad throws an error trying to open the file, which is why I went with WordPad. I have noticed that the file updated with PowerShell comes out pretty-printed, whereas the raw file is XML all bunched together with no whitespace. Any ideas on what "file format" means in this Hyper-V error message, and how I might use my code to automate this change in the XML without changing the file format?
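    Since the only visible difference is the whitespace, here is a hedged sketch of one thing to try (assuming, without confirmation, that the importer is sensitive to layout or encoding): load the document with PreserveWhitespace enabled and save it back through an XmlWriter with an explicit encoding. The UTF-8 choice below is a guess; match whatever the raw .exp file actually uses.

        # Sketch: keep the original whitespace and write with an explicit encoding.
        $xml = New-Object System.Xml.XmlDocument
        $xml.PreserveWhitespace = $true
        $xml.Load($existing)

        $xpath = '//PROPERTY[@NAME ="CopyVmStorage"]'
        foreach ($node in $xml.SelectNodes($xpath)) { $node.Value = 'TRUE' }

        $settings = New-Object System.Xml.XmlWriterSettings
        $settings.Encoding = [System.Text.Encoding]::UTF8   # assumption: verify against the raw file
        $writer = [System.Xml.XmlWriter]::Create($existing, $settings)
        $xml.Save($writer)
        $writer.Close()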

  • How to convert Markdown files to Dokuwiki, on a PC

    - by Clare Macrae
    I'm looking for a tool or script to convert Markdown files to Dokuwiki format, that will run on a PC. This is so that I can use MarkdownPad on a PC to create initial drafts of documents, and then convert them to Dokuwiki format, to upload to a Dokuwiki installation that I have no control over. (This means that the Markdown plugin is no use to me.) I could spend time writing a Python script to do the conversion myself, but I'd like to avoid spending time on this if such a thing already exists. The Markdown tags I'd like to have supported/converted are:

    - Heading levels 1 - 5
    - Bold, italic, underline, fixed-width font
    - Numbered and unnumbered lists
    - Hyperlinks
    - Horizontal rules

    Does such a tool exist, or is there a good starting point available? Things I've found and considered:

    - txt2tags: I initially thought this would be helpful, but although it can write both Markdown and Dokuwiki, it is very tied to its own specific input format.
    - Markdown2Dokuwiki: I'd certainly be willing to use a sed script, even on a PC, but this only supports a tiny, tiny part of Markdown's syntax.
    - python-markdown2: also sounded promising, but it only writes out HTML.
    - pandoc: doesn't support Dokuwiki output.
    - MultiMarkdown: does not appear to support Dokuwiki output.
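    Absent a ready-made tool, a minimal sketch of the write-it-yourself route (in PowerShell rather than Python, purely because it runs on a stock PC; the mappings assume standard Dokuwiki syntax and cover only headings and italics, so this is a starting point, not a converter):

        # Sketch: convert Markdown headings and italics to Dokuwiki syntax.
        # Dokuwiki headings run from ====== text ====== (H1) down to == text == (H5);
        # bold (**text**) is already the same in both formats.
        $text = Get-Content -Raw 'draft.md'   # placeholder filename
        for ($level = 5; $level -ge 1; $level--) {
            $hashes = '#' * $level
            $bars = '=' * (7 - $level)
            $text = $text -replace "(?m)^$hashes(?!#)\s*(.+?)\s*$", "$bars `$1 $bars"
        }
        # *italic* -> //italic// (leaves **bold** alone)
        $text = $text -replace '(?<!\*)\*([^*]+)\*(?!\*)', '//$1//'
        Set-Content 'draft.dokuwiki.txt' $text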

  • How to import a text file into powershell and email it, formatted as HTML

    - by Don
    I'm trying to get a list of all Exchange accounts, format them in descending order from largest mailbox, and put that data into an email in HTML format to send to myself. So far I can get the data, push it to a text file, and create and send an email to myself; I just can't seem to put it all together. I've been trying to use ConvertTo-Html, but it just seems to return data via email like "pageFooterEntry" and "Microsoft.PowerShell.Commands.Internal.Format.AutosizeInfo" versus the actual data. I can get it to send me the right data if I don't tell it to ConvertTo-Html and just have it pipe the data to a text file and pull from it, but then it's all run together with no formatting. I don't need to save the file; I'd just like to run the command, get the data, put it in HTML, and mail it to myself. Here's what I have currently:

        # Connects to database and returns information on all users, organized by Total Item Size, User
        $body = Get-MailboxStatistics -Database "Mailbox Database 0846468905" |
            where { $_.ObjectClass -eq "Mailbox" } |
            Sort-Object TotalItemSize -Descending |
            ft @{label="User";expression={$_.DisplayName}},@{label="Total Size (MB)";expression={$_.TotalItemSize.Value.ToMB()}} -auto |
            ConvertTo-Html

        # Pause for 5 seconds for Exchange
        Write-Host -ForegroundColor Green "Pausing for 5 seconds for Exchange"
        Start-Sleep -s 5

        $toemail = "[email protected]"    # Emails report to this address.
        $fromemail = "[email protected]"  # Emails from this address.
        $server = "Exchange.company.com"  # Exchange server - SMTP.

        # Email the report.
        $email = New-Object System.Net.Mail.MailMessage
        $email.IsBodyHtml = $True
        $email.To.Add($toemail)
        $email.From = $fromemail
        $email.Subject = "Exchange Mailbox Sizes"
        $email.Body = $body
        $client = New-Object System.Net.Mail.SmtpClient $server
        $client.UseDefaultCredentials = $true
        $client.Send($email)

    Any thoughts would be helpful, thanks!
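    For what it's worth, output like "Microsoft.PowerShell.Commands.Internal.Format.AutosizeInfo" is the classic sign of piping a Format-* cmdlet (ft here) into ConvertTo-Html: format cmdlets emit rendering instructions, not data objects. A hedged sketch of the usual fix, swapping ft for Select-Object and joining the HTML into a single string (everything else as in the original script):

        # Sketch: select calculated properties instead of formatting them with ft.
        $body = Get-MailboxStatistics -Database "Mailbox Database 0846468905" |
            Where-Object { $_.ObjectClass -eq "Mailbox" } |
            Sort-Object TotalItemSize -Descending |
            Select-Object @{ Label = "User"; Expression = { $_.DisplayName } },
                          @{ Label = "Total Size (MB)"; Expression = { $_.TotalItemSize.Value.ToMB() } } |
            ConvertTo-Html | Out-String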

  • Windows based development environment: HyperV, VMWare, or VirtualBox on development machine?

    - by bleepzter
    I am a software engineer with a little bit of an informal "support" function... I am trying to figure out the best possible approach to employing virtualization technologies in our development process. Since the code we develop is server-centric, testing it often requires a VM with specific software requirements. I used to use VMware Player (the free version) to run my VMs until both of my laptops started exhibiting issues with corrupted Windows 7 services and dying hard drives. All leads pointed to VMware, which by the way seems to be a solid product if you pay for the Workstation edition ($300). On a side note, I have always been a fan of the Windows Server product line. I think it makes for one of the best development environments out there - it is highly scalable, highly reliable, and very efficient. So, to be fair, I replaced the drives of the laptops and installed Windows Server 2008 R2, VS2010 Ultimate SP1, SQL Server 2008 R2, TFS Server 2010, and all the other tools and APIs needed to do my work properly. So now I am stuck with a bunch of VMware VMs. I don't want a repeat of what happened before, and I certainly don't want to bog down my machine with an inefficient hypervisor or services that are not needed. Furthermore, the VMDK hard-disk format used by VMware is not compatible with the VHD format of Hyper-V. It is my understanding that converting from one format to the other can only be done by Microsoft System Center Virtual Machine Manager, which I have downloaded from MSDN and am ready to install. I guess the questions at this point are:

    - Does SCVMM run as another service in Windows? Is it a memory hog?
    - Which is the better virtualization technology - Hyper-V or VirtualBox - in terms of efficiency, ease of use, and most importantly memory footprint? (Keep in mind the development environment already has a ton of services running, such as TFS Server, SQL Server, IIS, etc...)
    - How would you advise to proceed at this point so that the VMs are still used in the test process?

    Thanks Martin
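    On the VMDK-to-VHD point: SCVMM is not the only route. As a hedged alternative sketch, the free third-party qemu-img utility can convert between the two container formats ("vpc" is qemu-img's name for the VHD/VirtualPC format; the paths below are placeholders):

        # Sketch: convert a VMware disk image to a Hyper-V-compatible VHD.
        & 'C:\tools\qemu-img.exe' convert -O vpc 'C:\vms\dev.vmdk' 'C:\vms\dev.vhd'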

  • Dump Trac DB on Windows/XAMPP

    - by Whiteknight
    I have a Trac instance running on a Windows XP machine with XAMPP. I am trying to migrate the Trac instance to a newer Linux-based machine, but I'm having a hard time getting the database to cooperate. I try to dump the db with this command:

        sqlite3 C:\tracroot\db\trac.db ".dump" >> mysqldump.sql

    But the generated file is mostly empty:

        BEGIN TRANSACTION;
        COMMIT;

    So that's not right. For the record, my Trac instance is running now and appears to have full access to all the contents of the DB. But sqlite3 (located in C:\xampp\apache\bin) can't seem to get any information from the file. The DB file itself has the header "SQLite format 3", so that seems to be correct. I need to know one of two things:

    - How to get this dump working, OR
    - An alternate way to migrate the Trac database to the new machine.

    Update: When I try to open the .db file in sqlite3, I get the error "Error: unsupported file format". What format is it in, and why is it unsupported?
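    A hedged sketch of two things worth trying (paths and the service name are guesses for this XAMPP layout, not confirmed values). "Unsupported file format" from the sqlite3 shell often means the binary is older than the library that wrote the database; and since SQLite database files are portable across operating systems, a plain file copy is frequently all a migration needs:

        # 1. Check the bundled binary's version; if it is old, try a current sqlite3.exe.
        & C:\xampp\apache\bin\sqlite3.exe -version

        # 2. Stop Apache so Trac closes the database cleanly, then copy the file as-is.
        net stop Apache2.2   # service name varies by XAMPP version
        Copy-Item C:\tracroot\db\trac.db \\newserver\migration\trac.db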

  • How to merge Windows registry hives directly without converting them to an intermediate text based file?

    - by Registrar
    Help! I'm going to get fired if I can't figure out how to do this by tomorrow. Microsoft Windows stores its registry databases (known as "registry hives" - there's actually a backstory to the origin of this name, but I digress) in a proprietary binary format. Answer this correctly or you lose your job:

    Let H-sub-A be the registry hive of Computer A, and let H-sub-B be the registry hive of Computer B. Create a registry hive H-sub-A-prime (in the native binary format) that contains all of the registry keys and values in both H-sub-A and H-sub-B. If there is overlap, let the value from H-sub-B overwrite the value in H-sub-A.

    Sure, you can import a text-based patch file (e.g., "FOO.REG") to modify the registry, but can you merge two registry hives in their native binary format? Answers that involve exporting the registry to a text file (e.g., "FOO.REG") will receive no credit. You may only use software included with Microsoft Windows (any version) and/or third-party tools that are free of charge.
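    One hedged sketch that stays inside the binary format, using only the built-in reg.exe (mount names and paths are placeholders; work on copies, because unloading writes changes back into the hive file):

        # Sketch: merge hive B into hive A without any .REG text export.
        # Run from an elevated prompt.
        reg load HKLM\HiveA C:\hives\A.dat
        reg load HKLM\HiveB C:\hives\B.dat
        # /s copies all subkeys and values; /f overwrites overlapping values,
        # so B's values win wherever the two hives collide.
        reg copy HKLM\HiveB HKLM\HiveA /s /f
        reg unload HKLM\HiveB
        # Unloading HiveA persists the merged contents into A.dat.
        reg unload HKLM\HiveA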

  • Profile System: Users share the same ID

    - by Malcolm Frexner
    I have a strange effect on my site when it is under heavy load. I randomly get the properties of other users settings. I have my own implementation of the profile system so I guess I can not blame the profile system itself. I just need a point to start debugging from. I guess there is a cookie-value that maps to an Profile entry somewhere. Is there any chance to see how this mapping works? Here is my profile provider: using System; using System.Text; using System.Configuration; using System.Web; using System.Web.Profile; using System.Collections; using System.Collections.Specialized; using B2CShop.Model; using log4net; using System.Collections.Generic; using System.Diagnostics; using B2CShop.DAL; using B2CShop.Model.RepositoryInterfaces; [assembly: log4net.Config.XmlConfigurator()] namespace B2CShop.Profile { public class B2CShopProfileProvider : ProfileProvider { private static readonly ILog _log = LogManager.GetLogger(typeof(B2CShopProfileProvider)); // Get an instance of the Profile DAL using the ProfileDALFactory private static readonly B2CShop.DAL.UserRepository dal = new B2CShop.DAL.UserRepository(); // Private members private const string ERR_INVALID_PARAMETER = "Invalid Profile parameter:"; private const string PROFILE_USER = "User"; private static string applicationName = B2CShop.Model.Configuration.ApplicationConfiguration.MembershipApplicationName; /// <summary> /// The name of the application using the custom profile provider. /// </summary> public override string ApplicationName { get { return applicationName; } set { applicationName = value; } } /// <summary> /// Initializes the provider. /// </summary> /// <param name="name">The friendly name of the provider.</param> /// <param name="config">A collection of the name/value pairs representing the provider-specific attributes specified in the configuration for this provider.</param> public override void Initialize(string name, NameValueCollection config) { if (config == null) throw new ArgumentNullException("config"); if (string.IsNullOrEmpty(config["description"])) { config.Remove("description"); config.Add("description", "B2C Shop Custom Provider"); } if (string.IsNullOrEmpty(name)) name = "b2c_shop"; if (config["applicationName"] != null && !string.IsNullOrEmpty(config["applicationName"].Trim())) applicationName = config["applicationName"]; base.Initialize(name, config); } /// <summary> /// Returns the collection of settings property values for the specified application instance and settings property group. 
/// </summary> /// <param name="context">A System.Configuration.SettingsContext describing the current application use.</param> /// <param name="collection">A System.Configuration.SettingsPropertyCollection containing the settings property group whose values are to be retrieved.</param> /// <returns>A System.Configuration.SettingsPropertyValueCollection containing the values for the specified settings property group.</returns> public override SettingsPropertyValueCollection GetPropertyValues(SettingsContext context, SettingsPropertyCollection collection) { string username = (string)context["UserName"]; bool isAuthenticated = (bool)context["IsAuthenticated"]; //if (!isAuthenticated) return null; int uniqueID = dal.GetUniqueID(username, isAuthenticated, false, ApplicationName); SettingsPropertyValueCollection svc = new SettingsPropertyValueCollection(); foreach (SettingsProperty prop in collection) { SettingsPropertyValue pv = new SettingsPropertyValue(prop); switch (pv.Property.Name) { case PROFILE_USER: if (!String.IsNullOrEmpty(username)) { pv.PropertyValue = GetUser(uniqueID); } break; default: throw new ApplicationException(ERR_INVALID_PARAMETER + " name."); } svc.Add(pv); } return svc; } /// <summary> /// Sets the values of the specified group of property settings. /// </summary> /// <param name="context">A System.Configuration.SettingsContext describing the current application usage.</param> /// <param name="collection">A System.Configuration.SettingsPropertyValueCollection representing the group of property settings to set.</param> public override void SetPropertyValues(SettingsContext context, SettingsPropertyValueCollection collection) { string username = (string)context["UserName"]; CheckUserName(username); bool isAuthenticated = (bool)context["IsAuthenticated"]; int uniqueID = dal.GetUniqueID(username, isAuthenticated, false, ApplicationName); if (uniqueID == 0) { uniqueID = dal.CreateProfileForUser(username, isAuthenticated, ApplicationName); } foreach (SettingsPropertyValue pv in collection) { if (pv.PropertyValue != null) { switch (pv.Property.Name) { case PROFILE_USER: SetUser(uniqueID, (UserInfo)pv.PropertyValue); break; default: throw new ApplicationException(ERR_INVALID_PARAMETER + " name."); } } } UpdateActivityDates(username, false); } // Profile gettters // Retrieve UserInfo private static UserInfo GetUser(int userID) { return dal.GetUser(userID); } // Update account info private static void SetUser(int uniqueID, UserInfo user) { user.UserID = uniqueID; dal.SetUser(user); } // UpdateActivityDates // Updates the LastActivityDate and LastUpdatedDate values // when profile properties are accessed by the // GetPropertyValues and SetPropertyValues methods. // Passing true as the activityOnly parameter will update // only the LastActivityDate. private static void UpdateActivityDates(string username, bool activityOnly) { dal.UpdateActivityDates(username, activityOnly, applicationName); } /// <summary> /// Deletes profile properties and information for the supplied list of profiles. 
/// </summary> /// <param name="profiles">A System.Web.Profile.ProfileInfoCollection of information about profiles that are to be deleted.</param> /// <returns>The number of profiles deleted from the data source.</returns> public override int DeleteProfiles(ProfileInfoCollection profiles) { int deleteCount = 0; foreach (ProfileInfo p in profiles) if (DeleteProfile(p.UserName)) deleteCount++; return deleteCount; } /// <summary> /// Deletes profile properties and information for profiles that match the supplied list of user names. /// </summary> /// <param name="usernames">A string array of user names for profiles to be deleted.</param> /// <returns>The number of profiles deleted from the data source.</returns> public override int DeleteProfiles(string[] usernames) { int deleteCount = 0; foreach (string user in usernames) if (DeleteProfile(user)) deleteCount++; return deleteCount; } // DeleteProfile // Deletes profile data from the database for the specified user name. private static bool DeleteProfile(string username) { CheckUserName(username); return dal.DeleteAnonymousProfile(username, applicationName); } // Verifies user name for sise and comma private static void CheckUserName(string userName) { if (string.IsNullOrEmpty(userName) || userName.Length > 256 || userName.IndexOf(",") > 0) throw new ApplicationException(ERR_INVALID_PARAMETER + " user name."); } /// <summary> /// Deletes all user-profile data for profiles in which the last activity date occurred before the specified date. /// </summary> /// <param name="authenticationOption">One of the System.Web.Profile.ProfileAuthenticationOption values, specifying whether anonymous, authenticated, or both types of profiles are deleted.</param> /// <param name="userInactiveSinceDate">A System.DateTime that identifies which user profiles are considered inactive. If the System.Web.Profile.ProfileInfo.LastActivityDate value of a user profile occurs on or before this date and time, the profile is considered inactive.</param> /// <returns>The number of profiles deleted from the data source.</returns> public override int DeleteInactiveProfiles(ProfileAuthenticationOption authenticationOption, DateTime userInactiveSinceDate) { string[] userArray = new string[0]; dal.GetInactiveProfiles((int)authenticationOption, userInactiveSinceDate, ApplicationName).CopyTo(userArray, 0); return DeleteProfiles(userArray); } /// <summary> /// Retrieves profile information for profiles in which the user name matches the specified user names. 
/// </summary> /// <param name="authenticationOption">One of the System.Web.Profile.ProfileAuthenticationOption values, specifying whether anonymous, authenticated, or both types of profiles are returned.</param> /// <param name="usernameToMatch">The user name to search for.</param> /// <param name="pageIndex">The index of the page of results to return.</param> /// <param name="pageSize">The size of the page of results to return.</param> /// <param name="totalRecords">When this method returns, contains the total number of profiles.</param> /// <returns>A System.Web.Profile.ProfileInfoCollection containing user-profile information // for profiles where the user name matches the supplied usernameToMatch parameter.</returns> public override ProfileInfoCollection FindProfilesByUserName(ProfileAuthenticationOption authenticationOption, string usernameToMatch, int pageIndex, int pageSize, out int totalRecords) { CheckParameters(pageIndex, pageSize); return GetProfileInfo(authenticationOption, usernameToMatch, null, pageIndex, pageSize, out totalRecords); } /// <summary> /// Retrieves profile information for profiles in which the last activity date occurred on or before the specified date and the user name matches the specified user name. /// </summary> /// <param name="authenticationOption">One of the System.Web.Profile.ProfileAuthenticationOption values, specifying whether anonymous, authenticated, or both types of profiles are returned.</param> /// <param name="usernameToMatch">The user name to search for.</param> /// <param name="userInactiveSinceDate">A System.DateTime that identifies which user profiles are considered inactive. If the System.Web.Profile.ProfileInfo.LastActivityDate value of a user profile occurs on or before this date and time, the profile is considered inactive.</param> /// <param name="pageIndex">The index of the page of results to return.</param> /// <param name="pageSize">The size of the page of results to return.</param> /// <param name="totalRecords">When this method returns, contains the total number of profiles.</param> /// <returns>A System.Web.Profile.ProfileInfoCollection containing user profile information for inactive profiles where the user name matches the supplied usernameToMatch parameter.</returns> public override ProfileInfoCollection FindInactiveProfilesByUserName(ProfileAuthenticationOption authenticationOption, string usernameToMatch, DateTime userInactiveSinceDate, int pageIndex, int pageSize, out int totalRecords) { CheckParameters(pageIndex, pageSize); return GetProfileInfo(authenticationOption, usernameToMatch, userInactiveSinceDate, pageIndex, pageSize, out totalRecords); } /// <summary> /// Retrieves user profile data for all profiles in the data source. 
/// </summary> /// <param name="authenticationOption">One of the System.Web.Profile.ProfileAuthenticationOption values, specifying whether anonymous, authenticated, or both types of profiles are returned.</param> /// <param name="pageIndex">The index of the page of results to return.</param> /// <param name="pageSize">The size of the page of results to return.</param> /// <param name="totalRecords">When this method returns, contains the total number of profiles.</param> /// <returns>A System.Web.Profile.ProfileInfoCollection containing user-profile information for all profiles in the data source.</returns> public override ProfileInfoCollection GetAllProfiles(ProfileAuthenticationOption authenticationOption, int pageIndex, int pageSize, out int totalRecords) { CheckParameters(pageIndex, pageSize); return GetProfileInfo(authenticationOption, null, null, pageIndex, pageSize, out totalRecords); } /// <summary> /// Retrieves user-profile data from the data source for profiles in which the last activity date occurred on or before the specified date. /// </summary> /// <param name="authenticationOption">One of the System.Web.Profile.ProfileAuthenticationOption values, specifying whether anonymous, authenticated, or both types of profiles are returned.</param> /// <param name="userInactiveSinceDate">A System.DateTime that identifies which user profiles are considered inactive. If the System.Web.Profile.ProfileInfo.LastActivityDate of a user profile occurs on or before this date and time, the profile is considered inactive.</param> /// <param name="pageIndex">The index of the page of results to return.</param> /// <param name="pageSize">The size of the page of results to return.</param> /// <param name="totalRecords">When this method returns, contains the total number of profiles.</param> /// <returns>A System.Web.Profile.ProfileInfoCollection containing user-profile information about the inactive profiles.</returns> public override ProfileInfoCollection GetAllInactiveProfiles(ProfileAuthenticationOption authenticationOption, DateTime userInactiveSinceDate, int pageIndex, int pageSize, out int totalRecords) { CheckParameters(pageIndex, pageSize); return GetProfileInfo(authenticationOption, null, userInactiveSinceDate, pageIndex, pageSize, out totalRecords); } /// <summary> /// Returns the number of profiles in which the last activity date occurred on or before the specified date. /// </summary> /// <param name="authenticationOption">One of the System.Web.Profile.ProfileAuthenticationOption values, specifying whether anonymous, authenticated, or both types of profiles are returned.</param> /// <param name="userInactiveSinceDate">A System.DateTime that identifies which user profiles are considered inactive. If the System.Web.Profile.ProfileInfo.LastActivityDate of a user profile occurs on or before this date and time, the profile is considered inactive.</param> /// <returns>The number of profiles in which the last activity date occurred on or before the specified date.</returns> public override int GetNumberOfInactiveProfiles(ProfileAuthenticationOption authenticationOption, DateTime userInactiveSinceDate) { int inactiveProfiles = 0; ProfileInfoCollection profiles = GetProfileInfo(authenticationOption, null, userInactiveSinceDate, 0, 0, out inactiveProfiles); return inactiveProfiles; } //Verifies input parameters for page size and page index. 
private static void CheckParameters(int pageIndex, int pageSize) { if (pageIndex < 1 || pageSize < 1) throw new ApplicationException(ERR_INVALID_PARAMETER + " page index."); } //GetProfileInfo //Retrieves a count of profiles and creates a //ProfileInfoCollection from the profile data in the //database. Called by GetAllProfiles, GetAllInactiveProfiles, //FindProfilesByUserName, FindInactiveProfilesByUserName, //and GetNumberOfInactiveProfiles. //Specifying a pageIndex of 0 retrieves a count of the results only. private static ProfileInfoCollection GetProfileInfo(ProfileAuthenticationOption authenticationOption, string usernameToMatch, object userInactiveSinceDate, int pageIndex, int pageSize, out int totalRecords) { ProfileInfoCollection profiles = new ProfileInfoCollection(); totalRecords = 0; // Count profiles only. if (pageSize == 0) return profiles; int counter = 0; int startIndex = pageSize * (pageIndex - 1); int endIndex = startIndex + pageSize - 1; DateTime dt = new DateTime(1900, 1, 1); if (userInactiveSinceDate != null) dt = (DateTime)userInactiveSinceDate; /* foreach(CustomProfileInfo profile in dal.GetProfileInfo((int)authenticationOption, usernameToMatch, dt, applicationName, out totalRecords)) { if(counter >= startIndex) { ProfileInfo p = new ProfileInfo(profile.UserName, profile.IsAnonymous, profile.LastActivityDate, profile.LastUpdatedDate, 0); profiles.Add(p); } if(counter >= endIndex) { break; } counter++; } */ return profiles; } } } This is how I use it in the controller: public ActionResult AddTyreToCart(CartViewModel model) { string profile = Request.IsAuthenticated ? Request.AnonymousID : User.Identity.Name; } I would like to debug: How can 2 users who provide different cookies get the same profileid? EDIT Here is the code for getuniqueid public int GetUniqueID(string userName, bool isAuthenticated, bool ignoreAuthenticationType, string appName) { SqlParameter[] parms = { new SqlParameter("@Username", SqlDbType.VarChar, 256), new SqlParameter("@ApplicationName", SqlDbType.VarChar, 256)}; parms[0].Value = userName; parms[1].Value = appName; if (!ignoreAuthenticationType) { Array.Resize(ref parms, parms.Length + 1); parms[2] = new SqlParameter("@IsAnonymous", SqlDbType.Bit) { Value = !isAuthenticated }; } int userID; object retVal = null; retVal = SqlHelper.ExecuteScalar(ConfigurationManager.ConnectionStrings["SQLOrderB2CConnString"].ConnectionString, CommandType.StoredProcedure, "getProfileUniqueID", parms); if (retVal == null) userID = CreateProfileForUser(userName, isAuthenticated, appName); else userID = Convert.ToInt32(retVal); return userID; } And this is the SP: CREATE PROCEDURE [dbo].[getProfileUniqueID] @Username VarChar( 256), @ApplicationName VarChar( 256), @IsAnonymous bit = null AS BEGIN SET NOCOUNT ON; /* [getProfileUniqueID] created 08.07.2009 mf Retrive unique id for current user */ SELECT UniqueID FROM dbo.Profiles WHERE Username = @Username AND ApplicationName = @ApplicationName AND IsAnonymous = @IsAnonymous or @IsAnonymous = null END

  • NullPointerException & abrupt IOStream closure with inheritance and subclasses

    - by user1401652
    A brief background before so we can communicate on the same wave length. I've had about 8-10 university courses on programming from data structure, to one on all languages, to specific ones such as java & c++. I'm a bit rusty because i usually take 2-3 month breaks from coding. This is a personal project that I started thinking of two years back. Okay down to the details, and a specific question, I'm having problems with my mutator functions. It seems to be that I am trying to access a private variable incorrectly. The question is, am I nesting my classes too much and trying to mutate a base class variable the incorrect way. If so point me in the way of the correct literature, or confirm this is my problem so I can restudy this information. Thanks package GroceryReceiptProgram; import java.io.*; import java.util.Vector; public class Date { private int hour, minute, day, month, year; Date() { try { BufferedReader keyboard = new BufferedReader(new InputStreamReader(System.in)); System.out.println("What's the hour? (Use 1-24 military notation"); hour = Integer.parseInt(keyboard.readLine()); System.out.println("what's the minute? "); minute = Integer.parseInt(keyboard.readLine()); System.out.println("What's the day of the month?"); day = Integer.parseInt(keyboard.readLine()); System.out.println("Which month of the year is it, use an integer"); month = Integer.parseInt(keyboard.readLine()); System.out.println("What year is it?"); year = Integer.parseInt(keyboard.readLine()); keyboard.close(); } catch (IOException e) { System.out.println("Yo houston we have a problem"); } } public void setHour(int hour) { this.hour = hour; } public void setHour() { try { BufferedReader keyboard = new BufferedReader(new InputStreamReader(System.in)); System.out.println("What hour, use military notation?"); this.hour = Integer.parseInt(keyboard.readLine()); keyboard.close(); } catch (NumberFormatException e) { System.out.println(e.toString() + ":doesnt seem to be a number"); } catch (IOException e) { System.out.println(e.toString()); } } public int getHour() { return hour; } public void setMinute(int minute) { this.minute = minute; } public void setMinute() { try (BufferedReader keyboard = new BufferedReader(new InputStreamReader(System.in))) { System.out.println("What minute?"); this.minute = Integer.parseInt(keyboard.readLine()); } catch (NumberFormatException e) { System.out.println(e.toString() + ": doesnt seem to be a number"); } catch (IOException e) { System.out.println(e.toString() + ": minute shall not cooperate"); } catch (NullPointerException e) { System.out.println(e.toString() + ": in the setMinute function of the Date class"); } } public int getMinute() { return minute; } public void setDay(int day) { this.day = day; } public void setDay() { try { BufferedReader keyboard = new BufferedReader(new InputStreamReader(System.in)); System.out.println("What day 0-6?"); this.day = Integer.parseInt(keyboard.readLine()); keyboard.close(); } catch (NumberFormatException e) { System.out.println(e.toString() + ":doesnt seem to be a number"); } catch (IOException e) { System.out.println(e.toString()); } } public int getDay() { return day; } public void setMonth(int month) { this.month = month; } public void setMonth() { try { BufferedReader keyboard = new BufferedReader(new InputStreamReader(System.in)); System.out.println("What month 0-11?"); this.month = Integer.parseInt(keyboard.readLine()); keyboard.close(); } catch (NumberFormatException e) { System.out.println(e.toString() + ":doesnt seem to be a 
number"); } catch (IOException e) { System.out.println(e.toString()); } } public int getMonth() { return month; } public void setYear(int year) { this.year = year; } public void setYear() { try { BufferedReader keyboard = new BufferedReader(new InputStreamReader(System.in)); System.out.println("What year?"); this.year = Integer.parseInt(keyboard.readLine()); keyboard.close(); } catch (NumberFormatException e) { System.out.println(e.toString() + ":doesnt seem to be a number"); } catch (IOException e) { System.out.println(e.toString()); } } public int getYear() { return year; } public void set() { setMinute(); setHour(); setDay(); setMonth(); setYear(); } public Vector<Integer> get() { Vector<Integer> holder = new Vector<Integer>(5); holder.add(hour); holder.add(minute); holder.add(month); holder.add(day); holder.add(year); return holder; } }; That is the Date class obviously, next is the other base class Location. package GroceryReceiptProgram; import java.io.*; import java.util.Vector; public class Location { String streetName, state, city, country; int zipCode, address; Location() { try { BufferedReader keyboard = new BufferedReader(new InputStreamReader(System.in)); System.out.println("What is the street name"); streetName = keyboard.readLine(); System.out.println("Which state?"); state = keyboard.readLine(); System.out.println("Which city?"); city = keyboard.readLine(); System.out.println("Which country?"); country = keyboard.readLine(); System.out.println("Which zipcode?");//if not u.s. continue around this step zipCode = Integer.parseInt(keyboard.readLine()); System.out.println("What address?"); address = Integer.parseInt(keyboard.readLine()); } catch (IOException e) { System.out.println(e.toString()); } } public void setZipCode(int zipCode) { this.zipCode = zipCode; } public void setZipCode() { try { BufferedReader keyboard = new BufferedReader(new InputStreamReader(System.in)); System.out.println("What zipCode?"); this.zipCode = Integer.parseInt(keyboard.readLine()); keyboard.close(); } catch (NumberFormatException e) { System.out.println(e.toString() + ":doesnt seem to be a number"); } catch (IOException e) { System.out.println(e.toString()); } } public void set() { setAddress(); setCity(); setCountry(); setState(); setStreetName(); setZipCode(); } public int getZipCode() { return zipCode; } public void setAddress(int address) { this.address = address; } public void setAddress() { try { BufferedReader keyboard = new BufferedReader(new InputStreamReader(System.in)); System.out.println("What minute?"); this.address = Integer.parseInt(keyboard.readLine()); keyboard.close(); } catch (NumberFormatException e) { System.out.println(e.toString() + ":doesnt seem to be a number"); } catch (IOException e) { System.out.println(e.toString()); } } public int getAddress() { return address; } public void setStreetName(String streetName) { this.streetName = streetName; } public void setStreetName() { try { BufferedReader keyboard = new BufferedReader(new InputStreamReader(System.in)); System.out.println("What minute?"); this.streetName = keyboard.readLine(); keyboard.close(); } catch (IOException e) { System.out.println(e.toString()); } } public String getStreetName() { return streetName; } public void setState(String state) { this.state = state; } public void setState() { try { BufferedReader keyboard = new BufferedReader(new InputStreamReader(System.in)); System.out.println("What minute?"); this.state = keyboard.readLine(); keyboard.close(); } catch (IOException e) { 
System.out.println(e.toString()); } } public String getState() { return state; } public void setCity(String city) { this.city = city; } public void setCity() { try { BufferedReader keyboard = new BufferedReader(new InputStreamReader(System.in)); System.out.println("What minute?"); this.city = keyboard.readLine(); keyboard.close(); } catch (IOException e) { System.out.println(e.toString()); } } public String getCity() { return city; } public void setCountry(String country) { this.country = country; } public void setCountry() { try { BufferedReader keyboard = new BufferedReader(new InputStreamReader(System.in)); System.out.println("What minute?"); this.country = keyboard.readLine(); keyboard.close(); } catch (IOException e) { System.out.println(e.toString()); } } public String getCountry() { return country; } }; their parent(What is the proper name?) class package GroceryReceiptProgram; import java.io.*; public class FoodGroup { private int price, count; private Date purchaseDate, expirationDate; private Location location; private String name; public FoodGroup() { try { setPrice(); setCount(); expirationDate.set(); purchaseDate.set(); location.set(); } catch (NullPointerException e) { System.out.println(e.toString() + ": in the constructor of the FoodGroup class"); } } public void setPrice(int price) { this.price = price; } public void setPrice() { try (BufferedReader keyboard = new BufferedReader(new InputStreamReader(System.in))) { System.out.println("What Price?"); price = Integer.parseInt(keyboard.readLine()); } catch (NumberFormatException e) { System.out.println(e.toString() + ":doesnt seem to be a number"); } catch (IOException e) { System.out.println(e.toString() + ": in the FoodGroup class, setPrice function"); } catch (NullPointerException e) { System.out.println(e.toString() + ": in FoodGroup class. SetPrice()"); } } public int getPrice() { return price; } public void setCount(int count) { this.count = count; } public void setCount() { try (BufferedReader keyboard = new BufferedReader(new InputStreamReader(System.in))) { System.out.println("What count?"); count = Integer.parseInt(keyboard.readLine()); } catch (NumberFormatException e) { System.out.println(e.toString() + ":doesnt seem to be a number"); } catch (IOException e) { System.out.println(e.toString() + ": in the FoodGroup class, setCount()"); } catch (NullPointerException e) { System.out.println(e.toString() + ": in FoodGroup class, setCount"); } } public int getCount() { return count; } public void setName(String name) { this.name = name; } public void setName() { try { BufferedReader keyboard = new BufferedReader(new InputStreamReader(System.in)); System.out.println("What minute?"); this.name = keyboard.readLine(); } catch (IOException e) { System.out.println(e.toString()); } } public String getName() { return name; } public void setLocation(Location location) { this.location = location; } public Location getLocation() { return location; } public void setPurchaseDate(Date purchaseDate) { this.purchaseDate = purchaseDate; } public void setPurchaseDate() { this.purchaseDate.set(); } public Date getPurchaseDate() { return purchaseDate; } public void setExpirationDate(Date expirationDate) { this.expirationDate = expirationDate; } public void setExpirationDate() { this.expirationDate.set(); } public Date getExpirationDate() { return expirationDate; } } and finally the main class, so I can get access to all of this work. 
package GroceryReceiptProgram; public class NewMain { public static void main(String[] args) { FoodGroup test = new FoodGroup(); } } If anyone is further interested, here is a link the UML for this. https://www.dropbox.com/s/1weigjnxih70tbv/GRP.dia

  • Oracle Internet Directory 11gR1 11.1.1.6 Certified with E-Business Suite

    - by Elke Phelps (Oracle Development)
    Oracle E-Business Suite comes with native user authentication and management capabilities out of the box. If you need more advanced features, it's also possible to integrate it with Oracle Internet Directory and Oracle Single Sign-On or Oracle Access Manager, which allows you to link the E-Business Suite with third-party tools like Microsoft Active Directory, Windows Kerberos, and CA Netegrity SiteMinder. For details about third-party integration architectures, see either of these articles for EBS 11i and 12:

    - In-Depth: Using Third-Party Identity Managers with E-Business Suite Release 12
    - In-Depth: Using Third-Party Identity Managers with the E-Business Suite Release 11i

    Oracle Internet Directory 11.1.1.6 is now certified with Oracle E-Business Suite Release 11i, 12.0 and 12.1. OID 11.1.1.6 is part of Oracle Fusion Middleware 11g Release 1 Version 11.1.1.6.0, also known as FMW 11g Patchset 5. Certified E-Business Suite releases are:

    - EBS Release 11i 11.5.10.2 + ATG PH.H RUP 7 and higher
    - EBS Release 12.0.6 and higher
    - EBS Release 12.1.1 and higher

    Supported Configurations

    Oracle Internet Directory 11.1.1.6.0 can be integrated with Oracle's two single sign-on solutions for EBS environments in the following configurations:

    - Oracle Internet Directory and Directory Integration Platform from Fusion Middleware 11gR1 Patchset 5 (11.1.1.6.0) with Oracle Access Manager 10g (10.1.4.3), with an existing Oracle E-Business Suite system (Release 11i or 12.1.x).
    - Oracle Internet Directory and Directory Integration Platform from Fusion Middleware 11gR1 Patchset 5 (11.1.1.6.0) with Oracle Access Manager 11gR1 (11.1.1.5), with an existing Oracle E-Business Suite system (Release 12.0.6 or higher, or 12.1.x).
    - Oracle Internet Directory (OID) and Directory Integration Platform (DIP) from Oracle Fusion Middleware 11gR1 Patchset 5 (11.1.1.6.0) with Oracle Single Sign-On Server and Oracle Delegated Administration Services Release 10g (10.1.4.3.0), with an existing Oracle E-Business Suite system (Release 11i, 12.0.6 or 12.1.x).

    Oracle Access Manager strongly recommended

    Oracle has two single sign-on solutions: Oracle Single Sign-On Server (OSSO) and Oracle Access Manager (OAM). Oracle strongly recommends that all new single sign-on implementations use Oracle Access Manager. Oracle Access Manager is the preferred solution going forward, and forms the basis of Oracle Fusion Middleware 11g. OSSO is no longer being actively developed and will not be ported to Oracle WebLogic Server.

    Platform certifications

    Oracle Internet Directory is certified to run on any operating system for which Oracle WebLogic Server 11g is certified. Refer to the Oracle Fusion Middleware 11g System Requirements for more details. For information on operating systems supported by Oracle Internet Directory and its components, refer to the Oracle Identity and Access Management 11gR1 certification matrix. Integration with Oracle Internet Directory involves components spanning several different suites of Oracle products. There are no restrictions on which platform any particular component may be installed, so long as the platform is supported for that component.

    References

    - Overview of Single Sign-On Integration Options for Oracle E-Business Suite (Note 1388152.1)
    - Using the Latest Oracle Internet Directory 11gR1 Patchset with Oracle Single Sign-on and Oracle E-Business Suite (Note 876539.1)
    - Integrating Oracle E-Business Suite with Oracle Access Manager 11g using Oracle E-Business Suite AccessGate (Note 1309013.1)
    - Integrating Oracle E-Business Suite with Oracle Access Manager 10g using Oracle E-Business Suite AccessGate (Note 975182.1)
    - Migrating Oracle Single Sign-On 10gR3 to Oracle Access Manager 11g with Oracle E-Business Suite (Note 1304550.1)
    - Oracle Fusion Middleware Download, Installation & Configuration Readme
    - Oracle Fusion Middleware Installation Guide for Oracle Identity Management 11g Release 1 (11.1.1) (Part Number E12002-09)
    - Oracle Fusion Middleware Upgrade Guide for Oracle Identity Management 11g Release 1 (11.1.1) (Part Number E10129-09)
    - Oracle Fusion Middleware Upgrade Planning Guide 11g Release 1 (11.1.1) (Part Number E10125-06)
    - Oracle Fusion Middleware Patching Guide 11g Release 1 (11.1.1) (Part Number E16793-12)

    Related Articles

    - Understanding Options for Integrating Oracle Access Manager with E-Business Suite
    - In-Depth: Using Third-Party Identity Managers with E-Business Suite Release 12
    - In-Depth: Using Third-Party Identity Managers with the E-Business Suite Release 11i
    - Oracle Access Manager 10gR3 Certified with E-Business Suite
    - Portal 11.1.1.4 Certified with E-Business Suite
    - Discoverer 11.1.1.4 Certified with E-Business Suite

  • Web Services for Info Explorer Zones

    - by Anthony Shorten
    One of the most interesting uses for XAI and configurable objects is the exposure of a query portal as a Web Service. Let me illustrate this with an example. Say you have an interface that requires a list of data from a number of product tables. In the past you would have to build a Java program to do this with SQL and then use an application service, but it is now possible with just configuration.

    The first step in the process is to create the SQL you want to use for the interface. It can be any valid static SQL, or it can use host variables for the WHERE clause (we call that filtered). Once you are happy with the SQL (and it performs acceptably) you can incorporate that SQL into an Info Explorer Zone. You can use any of the explorer zone types, but I typically recommend F1-DE-SINGLE as it supports a single SQL statement with multiple filters (up to 15) as well as hidden filters (up to 5). Hidden filters are typically not displayed in the UI for criteria (remember explorer zones can be used on the user interface as well), but for web services they can be used as normal filters (this means you can use up to 20 filters all up).

    Once you are happy with the zone, you now need to define it as a Business Service. We have a generic service called FWLZDEXP which allows an explorer zone to be defined as a Business Service. If you open any Business Service based upon FWLZDEXP you will see some examples. The schema is standard and pretty self-explanatory in terms of structure. The schema pattern looks like this:

    - Zone element - maps to the ZONE_CD element, and the default value is the zone name you just created. This links the business service to the zone.
    - Filter elements - you name the filters as you like, but the mapField is set to Fx_VALUE, where x is the filter number corresponding to the filter element in the zone definition.
    - Hidden filter elements - you name the filters as you like, but the mapField is set to Hx_VALUE, where x is the filter number corresponding to the hidden filter element in the zone definition.
    - results group - this holds the elements of the result set. Each element in your result set has a tag name and is linked to the COL_VALUE mapField, and the row element lists the SEQNO of the column. This corresponds to the column number in the result set in the zone.

    An example schema is shown below for the F1-USGRACML zone, which returns the access modes for a user group and application service filters. In the example, the userGroup and applicationService elements are the filters, and the rows would contain a list of accessModeDescr. This is just a simple example to illustrate the point. There are lots of examples in the product that you can investigate. One recommendation, to save time, is to copy the schema from one of the examples rather than typing it from scratch; you can simply modify the tags and other elements to suit your needs.

    Once the Business Service is defined, it can simply be defined as a Web Service by registering an XAI Inbound Service using the Business Service definition as a basis. You now have a Web Service based upon an Info Explorer Zone. This is one of my favorite components as it allows interfaces to be simplified.

    This will be my last blog entry for this year. I hope you all have a great and safe Christmas and an even greater new year. Next year promises to be an exciting year, and I look forward to communicating the exciting developments we are working on at the moment as they are released.

  • July, the 31 Days of SQL Server DMO’s – Day 22 (sys.dm_db_index_physical_stats)

    - by Tamarick Hill
    The sys.dm_db_index_physical_stats Dynamic Management Function is used to return information about the fragmentation levels, page counts, depth, number of levels, record counts, etc., of the indexes on your database instance. One row is returned for each level in a given index, which we will discuss more later. The function takes a total of 5 input parameters, which are (1) database_id, (2) object_id, (3) index_id, (4) partition_number, and (5) the mode of the scan level that you would like to run. Let's use this function with our AdventureWorks2012 database to better illustrate the information it provides.

        SELECT * FROM sys.dm_db_index_physical_stats(DB_ID('AdventureWorks2012'), NULL, NULL, NULL, NULL)

    As you can see from the result set, there is a lot of beneficial information returned from this DMF. The first couple of columns in the result set (database_id, object_id, index_id, partition_number, index_type_desc, alloc_unit_type_desc) are either self-explanatory or have been explained in our previous blog sessions, so I will not go into detail about them at this time. The next column in the result set is index_depth, which represents how deep the index goes. For example, if we have a large index that contains 1 root page, 3 intermediate levels, and 1 leaf level, our index depth would be 5. The next column is index_level, which refers to what level (of the depth) a particular row is referring to. Next is probably one of the most beneficial columns in this result set, avg_fragmentation_in_percent. This column shows you how fragmented a particular level of an index may be. Many people use this column within their index maintenance jobs to dynamically determine whether they should do REORGs or full REBUILDs of a given index. The fragment_count column represents the number of fragments in a leaf level, while avg_fragment_size_in_pages represents the number of pages in a fragment. The page_count column tells you how many pages are in a particular index level. From my result set above, you see that the remaining columns all have NULL values. This is because I did not specify a 'mode' in my query, and as a result it used the 'LIMITED' mode by default. The LIMITED mode is meant to be lightweight, so it does not collect information for every column in the result set. I will re-run my query using the 'DETAILED' mode and you will see we now have results for these columns.

        SELECT * FROM sys.dm_db_index_physical_stats(DB_ID('AdventureWorks2012'), NULL, NULL, NULL, 'DETAILED')

    From the remaining columns, you see we get even more detailed information, such as how many records are in a particular index level (record_count). We have a column for ghost_record_count, which represents the number of records that have been marked for deletion but have not physically been removed by the background ghost cleanup process. We later see information on the MIN, MAX, and AVG record size in bytes. The forwarded_record_count column refers to records that have been updated and now no longer fit within the row on the page and thus have to be moved; a forwarded record is left in the original location with a pointer to the new location. The last column in the result set is compressed_page_count, which tells you how many pages in your index have been compressed. This is a very powerful DMF that returns good information about the current indexes in your system. However, based on the mode you select, it could be a very resource-intensive function, so be careful with how you use it.
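    As a hedged illustration of that avg_fragmentation_in_percent pattern, here is a small PowerShell sketch using the Invoke-Sqlcmd cmdlet (assumes the SQL Server PowerShell module is available; the 5/30 percent thresholds follow the common Books Online rule of thumb, and the server name is a placeholder):

        # Sketch: reorganize lightly fragmented indexes, rebuild heavily fragmented ones.
        $query = "
        SELECT OBJECT_SCHEMA_NAME(ps.object_id) AS SchemaName,
               OBJECT_NAME(ps.object_id)        AS TableName,
               i.name                           AS IndexName,
               ps.avg_fragmentation_in_percent  AS Frag
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
        JOIN sys.indexes AS i
          ON i.object_id = ps.object_id AND i.index_id = ps.index_id
        WHERE ps.avg_fragmentation_in_percent > 5 AND i.name IS NOT NULL"

        foreach ($ix in Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'AdventureWorks2012' -Query $query) {
            $action = if ($ix.Frag -gt 30) { 'REBUILD' } else { 'REORGANIZE' }
            Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'AdventureWorks2012' `
                -Query "ALTER INDEX [$($ix.IndexName)] ON [$($ix.SchemaName)].[$($ix.TableName)] $action;"
        }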
    For more information on this Dynamic Management Function, please see the Books Online link: http://msdn.microsoft.com/en-us/library/ms188917.aspx

    Follow me on Twitter @PrimeTimeDBA

  • SQL SERVER – CXPACKET – Parallelism – Advanced Solution – Wait Type – Day 7 of 28

    - by pinaldave
    Earlier we discussed the common solutions for reducing CXPACKET wait time. Today I am going to talk about a few other suggestions which can help reduce the CXPACKET wait. If you are going to suggest that I should focus on MAXDOP and COST THRESHOLD – I totally agree; I covered them in detail in yesterday's blog post. Today we are going to discuss a few other ways CXPACKET can be reduced.

    Potential reasons:

    - If data is heavily skewed, there are chances that the query optimizer may not estimate the correct amount of data, leading it to assign fewer threads to the query. This can easily lead to an uneven workload across threads and may create CXPACKET waits.
    - While retrieving the data, one of the threads faces an IO, memory, or CPU bottleneck and has to wait for those resources to execute its tasks; this may create CXPACKET waits as well.
    - The data being retrieved is on IO subsystems of different speeds. (This is not common and hardly possible, but there is a chance.)
    - Higher fragmentation in some areas of the table can mean less data per page. This may lead to CXPACKET waits.

    As I said, the reasons mentioned here are not the major causes of CXPACKET waits, but any of these scenarios can contribute to the wait time.

    Best practices to reduce CXPACKET waits:

    - Refer to the earlier article regarding MAXDOP and Cost Threshold.
    - Defragmentation of indexes can help, as more data can be obtained per page. (Assuming close to 100 fill factor.)
    - If data is in multiple files which are on multiple physical drives of similar speed, CXPACKET waits may reduce.
    - Keep statistics updated, as this gives a better estimate to the query optimizer when assigning threads and dividing the data among available threads. Updating statistics can significantly improve the strength of the query optimizer to render a proper execution plan. This may affect the overall parallelism process in a positive way.

    Bad practice: In one recent consulting project, when I was called in, I noticed that one of the ‘experienced’ DBAs had seen high CXPACKET waits and, to reduce them, had increased the worker threads. In reality, increasing the worker threads led to many other issues. With a higher number of threads, more memory was used, leading to memory pressure. As there were more threads, the CPU scheduler faced more context switching, further degrading performance. When I explained all this to the ‘experienced’ DBA, he suggested that we should now reduce the number of threads. Not really! A lower number of threads may create heavy stalling for parallel queries. I suggest NOT touching the setting for the number of worker threads when dealing with CXPACKET waits.

    Read all the posts in the Wait Types and Queue series.

    Note: The information presented here is from my experience, and I in no way claim it to be accurate. I suggest reading Books Online for further clarification. All of the discussion of Wait Stats here is generic, and it varies from system to system. You are recommended to test this on a development server before implementing it on a production server.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: DMV, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

  • Validating Petabytes of Data with Regularity and Thoroughness

    - by rickramsey
    by Brian Zents

    When former Intel CEO Andy Grove said “only the paranoid survive,” he wasn’t necessarily talking about tape storage administrators, but it’s a lesson they’ve learned well. After all, tape storage is the last line of defense to prevent data loss, so tape administrators are extra cautious in making sure their data is secure. Not surprisingly, we are often asked for ways to validate tape media and the files on them.

    In the past, an administrator could validate the media, but doing so was often tedious or disruptive or both. The debut of the Data Integrity Validation (DIV) and Library Media Validation (LMV) features in the Oracle T10000C drive helped eliminate many of these pains. Also available with the Oracle T10000D drive, these features use hardware-assisted CRC checks that not only ensure the data is written correctly the first time, but also do so much more efficiently. Traditionally, a CRC check takes at least 25 seconds per 4GB file with a 2:1 compression ratio, but the T10000C/D drives can reduce the check to a maximum of nine seconds because the entire check is contained within the drive. No data needs to be sent to a host application. A time savings of at least 64 percent is extremely beneficial over the course of checking an entire 8.5TB T10000D tape.

    While the DIV and LMV features are better than anything else out there, what storage administrators really need is a way to check petabytes of data with regularity and thoroughness. With the launch of Oracle StorageTek Tape Analytics (STA) 2.0 in April, there is finally a solution that addresses this longstanding need. STA bundles these features into one interface to automate all media validation activities across all Oracle SL3000 and SL8500 tape libraries in an environment. And best of all, the validation process can be associated with the health checks an administrator would be doing already through STA. In fact, STA validates the media based on any of the following policies:

    - Random Selection – Randomly selects media for validation whenever a validation drive in the standalone library or library complex is available.
    - Media Health = Action – Selects media that have had a specified number of successive exchanges resulting in an Exchange Media Health of “Action.” You can specify from one to five exchanges.
    - Media Health = Evaluate – Selects media that have had a specified number of successive exchanges resulting in an Exchange Media Health of “Evaluate.” You can specify from one to five exchanges.
    - Media Health = Monitor – Selects media that have had a specified number of successive exchanges resulting in an Exchange Media Health of “Monitor.” You can specify from one to five exchanges.
    - Extended Period of Non-Use – Selects media that have not had an exchange for a specified number of days. You can specify from 365 to 1,095 days (one to three years).
    - Newly Entered – Selects media that have recently been entered into the library.
    - Bad MIR Detected – Selects media with an exchange resulting in a “Bad MIR Detected” error. A bad media information record (MIR) indicates degraded high-speed access on the media.

    To avoid disrupting host operations, an administrator designates certain drives for media validation operations. If a host requests a file from media currently being validated, the host’s request takes priority. To ensure that the administrator really knows it is the media that is bad, as opposed to the drive, STA includes drive calibration and qualification features. In addition, validation requests can be re-prioritized or cancelled as needed. To ensure that a specific tape isn’t validated too often, STA prevents a tape from being validated twice within 24 hours via one of the policies described above. A tape can be validated more often if the administrator manually initiates the validation.

    When the validations are complete, STA reports the results. STA does not report simply a “good” or “bad” status; it also reports if media is degraded, so the administrator can migrate the data before there is a true failure. From that point, the administrators’ paranoia is relieved, as they have the necessary information to make a sound decision about the health of the tapes in their environment.

    About the Photograph: Photograph taken by Rick Ramsey in Death Valley, California, May 2014

    - Brian

    Follow OTN Garage on: Web | Facebook | Twitter | YouTube

    Read the article

  • Tile engine Texture not updating when numbers in array change

    - by Corey
    I draw my map from a txt file. I am using Java with the Slick2D library. When I print the array, the number changes in the array, but the texture doesn't change.

        public class Tiles {
            public Image[] tiles = new Image[5];
            public int[][] map = new int[64][64];
            public Image grass, dirt, fence, mound;
            private SpriteSheet tileSheet;
            public int tileWidth = 32;
            public int tileHeight = 32;

            public void init() throws IOException, SlickException {
                // cut the individual tile images out of the sprite sheet
                tileSheet = new SpriteSheet("assets/tiles.png", tileWidth, tileHeight);
                grass = tileSheet.getSprite(0, 0);
                dirt = tileSheet.getSprite(7, 7);
                fence = tileSheet.getSprite(2, 0);
                mound = tileSheet.getSprite(2, 6);
                tiles[0] = grass;
                tiles[1] = dirt;
                tiles[2] = fence;
                tiles[3] = mound;

                // read the comma-separated tile indices for the 64x64 map
                int x = 0, y = 0;
                BufferedReader in = new BufferedReader(new FileReader("assets/map.dat"));
                String line;
                while ((line = in.readLine()) != null) {
                    String[] values = line.split(",");
                    x = 0;
                    for (String str : values) {
                        map[x][y] = Integer.parseInt(str);
                        //System.out.print(map[x][y] + " ");
                        x++;
                    }
                    //System.out.println("");
                    y++;
                }
                in.close();
            }

            public void update(GameContainer gc) {
            }

            public void render(GameContainer gc) {
                // the texture is looked up from the map array every frame
                for (int y = 0; y < map.length; y++) {
                    for (int x = 0; x < map[0].length; x++) {
                        int textureIndex = map[x][y];
                        Image texture = tiles[textureIndex];
                        texture.draw(x * tileWidth, y * tileHeight);
                    }
                }
            }
        }

    Mouse picking, where I change the number in the array:

        Input input = gc.getInput();
        gc.getInput().setOffset(cameraX - 400, cameraY - 300);
        float mouseX = input.getMouseX();
        float mouseY = input.getMouseY();
        // convert pixel coordinates to tile coordinates
        double mousetileX = Math.floor((double) mouseX / tiles.tileWidth);
        double mousetileY = Math.floor((double) mouseY / tiles.tileHeight);
        double playertileX = Math.floor(playerX / tiles.tileWidth);
        double playertileY = Math.floor(playerY / tiles.tileHeight);
        // only allow changing tiles within 4 tiles of the player
        double lengthX = Math.abs((float) playertileX - mousetileX);
        double lengthY = Math.abs((float) playertileY - mousetileY);
        double distance = Math.sqrt((lengthX * lengthX) + (lengthY * lengthY));
        if (input.isMousePressed(Input.MOUSE_LEFT_BUTTON) && distance < 4) {
            System.out.println("Clicked");
            if (tiles.map[(int) mousetileX][(int) mousetileY] == 1) {
                tiles.map[(int) mousetileX][(int) mousetileY] = 0;
            }
        }

    I never ask a question until I have tried to figure it out myself, and I have been stuck on this problem for two weeks, so if you actually try to help me instead of telling me to use a debugger, thank you. You either get told you have too much or too little code; nothing is ever enough for the people on here. I am already debugging: that is how I know that when I print the array, the number changes as it should, so it's not a problem with my mouse-picking code. It is a problem with my textures, but I don't know what, because they all render correctly when first drawn. The map just doesn't update visually when the number in the array changes. That is why I need help.

    Read the article

  • Blueprints for Oracle NoSQL Database

    - by dan.mcclary
    I think that some of the most interesting analytic problems are graph problems. I'm always interested in new ways to store and access graphs. As such, I really like the work being done by Tinkerpop to create Open Source Software to make property graphs more accessible over a wide variety of datastores. Since key-value stores like Oracle NoSQL Database are well-suited to storing property graphs, I decided to extend the Blueprints API to work with it. Below I'll discuss some of the implementation details, but you can check out the finished product here: http://github.com/dwmclary/blueprints-oracle-nosqldb.

    What's in a Property Graph?

    In the most general sense, a graph is just a collection of vertices and edges. Vertices and edges can have properties: weights, names, or any number of other traits. In an undirected graph, edges connect vertices without direction. A directed graph specifies that all edges have a head and a tail --- a direction. A multi-graph allows multiple edges to connect two vertices. A "property graph" encompasses all of these traits.

    Key-Value Stores for Property Graphs

    Key-value stores like Oracle NoSQL Database tend to be ideal for implementing property graphs. First, if any vertex or edge can have any number of traits, we can treat it as a hash map. For example:

        Vertex["name"] = "Mary"
        Vertex["age"] = 28
        Vertex["ID"] = 12345

    and so on. This is a natural key-value relationship: the key "name" maps to the value "Mary." Moreover, if we maintain two hash maps, one for vertex objects and one for edge objects, we've essentially captured the graph. As such, any scalable key-value store is fertile ground for planting graphs.

    Oracle NoSQL Database as a Scalable Graph Database

    While Oracle NoSQL Database offers useful features like tunable consistency, what lends it to storing property graphs is the storage guarantees around its key structure. Keys in Oracle NoSQL Database are divided into two parts: a major key and a minor key. The storage guarantee is simple. Major keys will be distributed across storage nodes, which could encompass a large number of servers. However, all minor keys which are children of a given major key are guaranteed to be stored on the same storage node. For example, the vertices

        /Personnel/Vertex/1 and /Personnel/Vertex/2

    may be stored on different servers, but

        /Personnel/Vertex/1/-/name and /Personnel/Vertex/1/-/age

    will always be on the same server. This means that we can structure our graph database such that retrieving all the properties for a vertex or edge requires I/O from only a single storage node (a short sketch at the end of this post shows the idea in code). Moreover, Oracle NoSQL Database provides a storeIterator which allows us to scan a huge number of vertices and edges in a scalable fashion. By storing the vertices and edges as major keys, we guarantee that they are distributed evenly across all storage nodes. At the same time we can use a partial major key to iterate over all the vertices or edges (e.g. we search over /Personnel/Vertex to iterate over all vertices).

    Fork It!

    The Blueprints API and Oracle NoSQL Database present a great way to get started using a scalable key-value database to store and access graph data. However, a graph store isn't useful without a good graph to work on. I encourage you to fork or pull the repository, store some data, and try using Gremlin or any other language to explore.
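    As a starting point, here is a minimal sketch of the major/minor key layout described above. This is my illustration, not code from the repository: it uses the standard oracle.kv client API, and the store name, host/port and key paths are assumptions.

        // Sketch only: store name, host and key paths are illustrative.
        import java.util.Arrays;
        import oracle.kv.KVStore;
        import oracle.kv.KVStoreConfig;
        import oracle.kv.KVStoreFactory;
        import oracle.kv.Key;
        import oracle.kv.Value;

        public class VertexPropertyDemo {
            public static void main(String[] args) {
                KVStore store = KVStoreFactory.getStore(
                        new KVStoreConfig("kvstore", "localhost:5000"));

                // Major key: the vertex itself -> distributed across storage nodes.
                // Minor keys: its properties -> guaranteed to land on the same node.
                Key name = Key.createKey(Arrays.asList("Personnel", "Vertex", "1"),
                                         Arrays.asList("name"));
                Key age  = Key.createKey(Arrays.asList("Personnel", "Vertex", "1"),
                                         Arrays.asList("age"));

                store.put(name, Value.createValue("Mary".getBytes()));
                store.put(age,  Value.createValue("28".getBytes()));

                store.close();
            }
        }

    Reading back all properties of vertex 1 then touches only one storage node, because both minor keys hang off the same major key.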

    Read the article

  • App Stores – In All Things, It's Quality Over Quantity

    - by D'Arcy Lussier
    Everybody has an opinion about Windows 8. People love it, people hate it, people are meh about it, people are apparently buying it from Microsoft stores in NYC as if it were water before a natural disaster... if there's one thing that Microsoft product launches do well, it's the ability to bring out strong emotional responses. Over at eweek.com, Don Reisinger wrote about 5 good and bad things about Windows 8. Yes, another opinion piece on Windows 8. I figured since this one had good and bad it might be worthwhile to read. I then came across #10 on his list, and figured "What the hell... might as well post a bit of a rant on Windows 8 myself!" Here's #10:

        10. Bad: Too few apps
        Unfortunately, Microsoft wasn't able to get too many developers to start producing applications for its Windows 8 Store. Microsoft hasn't yet released official numbers, but some have said that the marketplace has less than 8,000 programs. Considering Apple's App Store has 100 times that, it's about time Microsoft starts leaning on developers to get more programs into its store.

    Believe me, Microsoft *has* been leaning on developers to get apps into the store. I've been asked at least 5 or 6 times by 5 or 6 different friends at Microsoft whether I was going to write a Windows 8 app. I think Microsoft felt they had to try and address the number of apps available in their marketplace, since some people (like Don) would draw comparisons to the number of apps in the Apple marketplace. I feel for Microsoft in this, since the number of apps in a marketplace is an empty stat.

    Quality over Quantity

    I have an iPad that my family (wife, 10yo daughter, 3yo daughter) use. We all have our own apps installed on it. In addition, my wife has an iPhone 4S that she also installs apps on. As someone who gets asked by his kids often whether they can buy/download an app, the vast majority of the vast catalogue of iOS marketplace apps are crap! Do you realize how many "free" games are out there, only to really be not-free because you have to purchase in-game content to make the game actually playable? And how about searching: with such a vast array of apps and such high numbers of craptastic ones, trying to find something is incredibly difficult and can be frustrating. I would rather see Microsoft launch with 8,000 high quality apps in their store than 800,000 that are mostly junk.

    Too Few Apps?!

    And seriously, 8,000 is not a small number. How many iOS apps have I actually bought between the iPad and iPhone? I'll be generous and say 30... heck, let's round it up to 40. It's not like I have 10,000 apps installed on my iPad, nor will that ever happen! So if people have, at the *launch* of a new platform ecosystem, EIGHT THOUSAND apps to choose from, I don't see that as a fail at all! It should be noted that most of the most common apps (Netflix, Skype, etc.) are available for Windows 8 at launch; I guess I'll have to wait a few weeks for My Pony Ranch and all its clones to start showing up; pity.

    Let's Check Back in a Year

    So look, let's check back in a year's time and see what the app store looks like. My hope is that Microsoft doesn't continue to push quantity over quality. Even knowing the optics that the number of apps in the store carries, and the pressure to catch the Apple and Android marketplaces, I hope Microsoft avoids the scenario where a good percentage of apps in the Windows Store are utter rubbish and finding the gems is cumbersome. But if that happens, we can thank writers like Don, who raised the false issue of app count at launch, for it.

    Read the article

  • Prefilling an SMS on Mobile Devices with the sms: Uri Scheme

    - by Rick Strahl
    Popping up the native SMS app from a mobile HTML Web page is a nice feature that allows you to pre-fill info into a text for sending by a user of your mobile site. The syntax is a bit tricky due to some device inconsistencies (and quite a bit of wrong/incomplete info on the Web), but it's definitely something that's fairly easy to do.

    In one of my Mobile HTML Web apps I need to share a current location via SMS. While browsing around a page I want to select a geo location, then share it by texting it to somebody. Basically I want to pre-fill an SMS message with some text, but no name or number, which instead will be filled in by the user.

    What works

    The syntax that seems to work fairly consistently except for iOS is this:

        sms:phonenumber?body=message

    For iOS, instead of the ? use a ';' (because Apple is always right, standards be damned, right?):

        sms:phonenumber;body=message

    and that works to pop up a new SMS message on the mobile device. I've only marginally tested this with a few devices: an iPhone running iOS 6, an iPad running iOS 7, Windows Phone 8 and a Nexus S in the Android Emulator. All four devices managed to pop up the SMS with the data prefilled.

    You can use this in a link:

        <a href="sms:1-111-1111;body=I made it!">Send location via SMS</a>

    or you can set it on the window.location in JavaScript:

        window.location = "sms:1-111-1111;body=" + encodeURIComponent("I made it!");

    to make the window pop up directly from code. Notice that the content should be URL encoded - HTML links automatically encode, but when you assign the URL directly in code the text value should be encoded.

    Body only

    I suspect in most applications you won't know who to text, so you only want to fill the text body, not the number. That works as you'd expect by just leaving out the number - here's what the URLs look like in that case:

        sms:?body=message

    For iOS, same thing except with the ;:

        sms:;body=message

    Here's an example of the code I use to set up the SMS:

        var ua = navigator.userAgent.toLowerCase();
        var url;
        if (ua.indexOf("iphone") > -1 || ua.indexOf("ipad") > -1)
            url = "sms:;body=" + encodeURIComponent("I'm at " + mapUrl + " @ " + pos.Address);
        else
            url = "sms:?body=" + encodeURIComponent("I'm at " + mapUrl + " @ " + pos.Address);
        location.href = url;

    and that also works for all the devices mentioned above.

    It's pretty cool that URL schemes exist to access device functionality, and the SMS one will come in pretty handy for a number of things. Now if only all of the URI schemes were a bit more consistent (damn you Apple!) across devices...

    © Rick Strahl, West Wind Technologies, 2005-2013
    Posted in iOS, JavaScript, HTML5

    Read the article

  • How to write simple code using TDD [migrated]

    - by adeel41
    My colleagues and I do a small TDD kata practice every day for 30 minutes. For reference, this is the link for the exercise: http://osherove.com/tdd-kata-1/ The objective is to write better code using TDD. This is the code I've written:

        public class Calculator
        {
            public int Add( string numbers )
            {
                const string commaSeparator = ",";
                int result = 0;
                if ( !String.IsNullOrEmpty( numbers ) )
                    result = numbers.Contains( commaSeparator )
                        ? AddMultipleNumbers( GetNumbers( commaSeparator, numbers ) )
                        : ConvertToNumber( numbers );
                return result;
            }

            private int AddMultipleNumbers( IEnumerable<int> getNumbers )
            {
                return getNumbers.Sum();
            }

            private IEnumerable<int> GetNumbers( string separator, string numbers )
            {
                var allNumbers = numbers
                    .Replace( "\n", separator )
                    .Split( new string[] { separator }, StringSplitOptions.RemoveEmptyEntries );
                return allNumbers.Select( ConvertToNumber );
            }

            private int ConvertToNumber( string number )
            {
                return Convert.ToInt32( number );
            }
        }

    and the tests for this class are:

        [TestFixture]
        public class CalculatorTests
        {
            private int ArrangeAct( string numbers )
            {
                var calculator = new Calculator();
                return calculator.Add( numbers );
            }

            [Test]
            public void Add_WhenEmptyString_Returns0()
            {
                Assert.AreEqual( 0, ArrangeAct( String.Empty ) );
            }

            [Test]
            [Sequential]
            public void Add_When1Number_ReturnNumber(
                [Values( "1", "56" )] string number,
                [Values( 1, 56 )] int expected )
            {
                Assert.AreEqual( expected, ArrangeAct( number ) );
            }

            [Test]
            public void Add_When2Numbers_AddThem()
            {
                Assert.AreEqual( 3, ArrangeAct( "1,2" ) );
            }

            [Test]
            public void Add_WhenMoreThan2Numbers_AddThemAll()
            {
                Assert.AreEqual( 6, ArrangeAct( "1,2,3" ) );
            }

            [Test]
            public void Add_SeparatorIsNewLine_AddThem()
            {
                Assert.AreEqual( 6, ArrangeAct( "1\n2,3" ) );
            }
        }

    Now I'll paste the code which they have written:

        public class StringCalculator
        {
            private const char Separator = ',';

            public int Add( string numbers )
            {
                const int defaultValue = 0;
                if ( ShouldReturnDefaultValue( numbers ) )
                    return defaultValue;
                return ConvertNumbers( numbers );
            }

            private int ConvertNumbers( string numbers )
            {
                var numberParts = GetNumberParts( numbers );
                return numberParts.Select( ConvertSingleNumber ).Sum();
            }

            private string[] GetNumberParts( string numbers )
            {
                return numbers.Split( Separator );
            }

            private int ConvertSingleNumber( string numbers )
            {
                return Convert.ToInt32( numbers );
            }

            private bool ShouldReturnDefaultValue( string numbers )
            {
                return String.IsNullOrEmpty( numbers );
            }
        }

    and the tests:

        [TestFixture]
        public class StringCalculatorTests
        {
            [Test]
            public void Add_EmptyString_Returns0()
            {
                ArrangeActAndAssert( String.Empty, 0 );
            }

            [Test]
            [TestCase( "1", 1 )]
            [TestCase( "2", 2 )]
            public void Add_WithOneNumber_ReturnsThatNumber( string numberText, int expected )
            {
                ArrangeActAndAssert( numberText, expected );
            }

            [Test]
            [TestCase( "1,2", 3 )]
            [TestCase( "3,4", 7 )]
            public void Add_WithTwoNumbers_ReturnsSum( string numbers, int expected )
            {
                ArrangeActAndAssert( numbers, expected );
            }

            [Test]
            public void Add_WithThreeNumbers_ReturnsSum()
            {
                ArrangeActAndAssert( "1,2,3", 6 );
            }

            private void ArrangeActAndAssert( string numbers, int expected )
            {
                var calculator = new StringCalculator();
                var result = calculator.Add( numbers );
                Assert.AreEqual( expected, result );
            }
        }

    Now the question is: which one is better? My point is that we do not need so many small methods initially, because StringCalculator has no subclasses, and secondly the code itself is so simple that we don't need to break it up to the point where so many small methods become confusing.

    Their point is that code should read like English; that it is better to break it up early than to refactor later; and third, that when they do refactor, it will be much easier to move these small methods into separate classes. My point against this is that we never decided the code was difficult to understand, so why are we breaking it up so early? So I need a third person's opinion to understand which option is better.

    Read the article

  • How to resize / enlarge / grow a non-LVM ext4 partition

    - by Mischa
    I have already searched the forums, but couldn't find a good suitable answer: I have an Ubuntu Server 10.04 as KVM host and a guest system that also runs 10.04. The host system uses LVM and there are three logical volumes, which are provided to the guest as virtual block devices - one for /, one for /home and one for swap. The guest had been partitioned without LVM.

    I have already enlarged the logical volume in the host system - the guest successfully sees the bigger virtual disk. However, this virtual disk contains one "good old" partition, which still has the old small size. The output of fdisk -l is:

        me@produktion:/$ LC_ALL=en_US sudo fdisk -l

        Disk /dev/vda: 32.2 GB, 32212254720 bytes
        255 heads, 63 sectors/track, 3916 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000c8ce7

           Device Boot      Start         End      Blocks   Id  System
        /dev/vda1   *           1        3917    31455232   83  Linux

        Disk /dev/vdb: 2147 MB, 2147483648 bytes
        244 heads, 47 sectors/track, 365 cylinders
        Units = cylinders of 11468 * 512 = 5871616 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000f2bf7

           Device Boot      Start         End      Blocks   Id  System
        /dev/vdb1               1         366     2095104   82  Linux swap / Solaris
        Partition 1 has different physical/logical beginnings (non-Linux?): phys=(0, 32, 33) logical=(0, 43, 28)
        Partition 1 has different physical/logical endings: phys=(260, 243, 47) logical=(365, 136, 44)

        Disk /dev/vdc: 225.5 GB, 225485783040 bytes
        255 heads, 63 sectors/track, 27413 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00027f25

           Device Boot      Start         End      Blocks   Id  System
        /dev/vdc1               1        9138    73398272   83  Linux

    The output of parted print all is:

        Model: Virtio Block Device (virtblk)
        Disk /dev/vda: 32.2GB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos

        Number  Start   End     Size    Type     File system  Flags
         1      1049kB  32.2GB  32.2GB  primary  ext4         boot

        Model: Virtio Block Device (virtblk)
        Disk /dev/vdb: 2147MB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos

        Number  Start   End     Size    Type     File system     Flags
         1      1049kB  2146MB  2145MB  primary  linux-swap(v1)

        Model: Virtio Block Device (virtblk)
        Disk /dev/vdc: 225GB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos

        Number  Start   End     Size    Type     File system  Flags
         1      1049kB  75.2GB  75.2GB  primary  ext4

    What I want to achieve is to simply grow or resize the partition /dev/vdc1 so that it uses the whole space provided by the virtual block device /dev/vdc. The problem is that when I try to do that with parted, it complains:

        (parted) select /dev/vdc
        Using /dev/vdc
        (parted) resize 1
        WARNING: you are attempting to use parted to operate on (resize) a file system.
        parted's file system manipulation code is not as robust as what you'll find in
        dedicated, file-system-specific packages like e2fsprogs.  We recommend
        you use parted only to manipulate partition tables, whenever possible.
        Support for performing most operations on most types of file systems
        will be removed in an upcoming release.
        Start? [1049kB]?
        End? [75.2GB]? 224GB
        Error: File system has an incompatible feature enabled. Compatible features are
        has_journal, dir_index, filetype, sparse_super and large_file. Use tune2fs or
        debugfs to remove features.

    So what can I do? This is a headless production system. What is a safe way to grow this partition? I CAN unmount it, though - so this is not the problem.
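    For what it's worth, the route usually suggested for this situation (this is not from the original question, so verify it against your own disk layout and take a backup first) is to avoid parted's deprecated filesystem resizing entirely: recreate the partition table entry at the same starting sector with fdisk, then grow the ext4 filesystem with resize2fs. A sketch, assuming /dev/vdc1 can be unmounted:

        # Sketch only: confirm the exact starting sector first, e.g. with
        #   sudo fdisk -lu /dev/vdc
        sudo umount /dev/vdc1
        sudo fdisk /dev/vdc       # d -> delete partition 1
                                  # n -> new primary partition 1, SAME start, larger end
                                  # w -> write the new table
        sudo partprobe /dev/vdc   # or reboot so the kernel re-reads the partition table
        sudo e2fsck -f /dev/vdc1  # forced check is required before an offline resize
        sudo resize2fs /dev/vdc1  # grows the ext4 filesystem to fill the partition

    Deleting and recreating the entry only rewrites the partition table; the data is untouched as long as the new partition starts at exactly the same sector as the old one.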

    Read the article

  • Expanding on requestaudit - Tracing who is doing what...and for how long

    - by Kyle Hatlestad
    One of the most helpful tracing sections in WebCenter Content (and one that is on by default) is the requestaudit tracing. This tracing section summarizes the top service requests happening in the server along with how they are performing. By default, it has 2 different rotations. One happens every 2 minutes (listing up to 5 services) and another happens every 60 minutes (listing up to 20 services). These traces provide the total time for all the requests against that service along with the number of requests and its average request time. This information can provide a good start in possibly troubleshooting performance issues or tracking a particular issue.

        >requestaudit/6 12.10 16:48:00.493 Audit Request Monitor !csMonitorTotalRequests,47,1,0.39009329676628113,0.21034042537212372,1
        >requestaudit/6 12.10 16:48:00.509 Audit Request Monitor Request Audit Report over the last 120 Seconds for server wcc-base_4444****
        >requestaudit/6 12.10 16:48:00.509 Audit Request Monitor -Num Requests 47 Errors 1 Reqs/sec. 0.39009329676628113 Avg. Latency (secs) 0.21034042537212372 Max Thread Count 1
        >requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 1 Service FLD_BROWSE Total Elapsed Time (secs) 3.5320000648498535 Num requests 10 Num errors 0 Avg. Latency (secs) 0.3531999886035919
        >requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 2 Service GET_SEARCH_RESULTS Total Elapsed Time (secs) 2.694999933242798 Num requests 6 Num errors 0 Avg. Latency (secs) 0.4491666555404663
        >requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 3 Service GET_DOC_PAGE Total Elapsed Time (secs) 1.8839999437332153 Num requests 5 Num errors 1 Avg. Latency (secs) 0.376800000667572
        >requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 4 Service DOC_INFO Total Elapsed Time (secs) 0.4620000123977661 Num requests 3 Num errors 0 Avg. Latency (secs) 0.15399999916553497
        >requestaudit/6 12.10 16:48:00.509 Audit Request Monitor 5 Service GET_PERSONALIZED_JAVASCRIPT Total Elapsed Time (secs) 0.4099999964237213 Num requests 8 Num errors 0 Avg. Latency (secs) 0.051249999552965164
        >requestaudit/6 12.10 16:48:00.509 Audit Request Monitor ****End Audit Report*****

    To change the default rotation or size of output, these can be set as configuration variables for the server:

    - RequestAuditIntervalSeconds1 – Used for the shorter of the two summary intervals (default is 120 seconds)
    - RequestAuditIntervalSeconds2 – Used for the longer of the two summary intervals (default is 3600 seconds)
    - RequestAuditListDepth1 – Number of services listed for the first request audit summary interval (default is 5)
    - RequestAuditListDepth2 – Number of services listed for the second request audit summary interval (default is 20)

    If you want to get more granular, you can enable 'Full Verbose Tracing' from the System Audit Information page, and now you will get an audit entry for each and every service request:

        >requestaudit/6 12.10 16:58:35.431 IdcServer-68 GET_USER_INFO [dUser=bob][StatusMessage=You are logged in as 'bob'.] 0.08765099942684174(secs)

    What's nice is it reports who executed the service and how long that particular request took. In some cases, depending on the service, additional information will be added to the tracing relevant to that service:

        >requestaudit/6 12.10 17:00:44.727 IdcServer-81 GET_SEARCH_RESULTS [dUser=bob][QueryText=%28+dDocType+%3cmatches%3e+%60Document%60+%29][StatusCode=0][StatusMessage=Success] 0.4696030020713806(secs)

    You can even go into more detail and insert any additional data into the tracing. You simply need to add this configuration variable with a comma-separated list of variables from local data to insert:

        RequestAuditAdditionalVerboseFieldsList=TotalRows,path

    In this case, for any search results, the number of items the user found is traced:

        >requestaudit/6 12.10 17:15:28.665 IdcServer-36 GET_SEARCH_RESULTS [TotalRows=224][dUser=bob][QueryText=%28+dDocType+%3cmatches%3e+%60Application%60+%29][Sta...

    I also recently ran into a case where services were being called from a client through RIDC. All of the services were being executed as the same user, but they wanted to correlate the requests coming from the client to the ones being executed on the server. So what we did was add a new field to the request audit list:

        RequestAuditAdditionalVerboseFieldsList=ClientToken

    And then in the RIDC client, ClientToken was added to the binder along with a unique value that could be traced for that request. Now they had a way of tracing on both ends and identifying exactly which client request resulted in which request on the server.
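    To make the client side of that concrete, here is a minimal sketch (mine, not from the original post) of setting such a token with the RIDC API. The connection URL, user and service call are illustrative assumptions; only the ClientToken binder field matters here.

        // Sketch only: host, port, user and query are made up for the example.
        import java.util.UUID;
        import oracle.stellent.ridc.IdcClient;
        import oracle.stellent.ridc.IdcClientManager;
        import oracle.stellent.ridc.IdcContext;
        import oracle.stellent.ridc.model.DataBinder;

        public class TracedSearch {
            public static void main(String[] args) throws Exception {
                IdcClientManager manager = new IdcClientManager();
                IdcClient client = manager.createClient("idc://contentserver:4444");
                IdcContext context = new IdcContext("bob");

                DataBinder binder = client.createBinder();
                binder.putLocal("IdcService", "GET_SEARCH_RESULTS");
                binder.putLocal("QueryText", "dDocType <matches> `Document`");

                // A unique value per request; the same value then shows up in the
                // server's requestaudit trace line for this call.
                binder.putLocal("ClientToken", UUID.randomUUID().toString());

                client.sendRequest(context, binder);
            }
        }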

    Read the article

  • SharePoint: Numeric/Integer Site Column (Field) Types

    - by CharlesLee
    What field type should you use when creating number based site columns as part of a SharePoint feature?

    Windows SharePoint Services 3.0 provides you with an extensible and flexible method of developing and deploying Site Columns and Content Types (both of which are required for most SharePoint projects requiring list or library based data storage) via the feature framework (more on this in my next full article). However there is an interesting behaviour when working with a column or field which is required to hold a number, which I thought I would blog about today.

    When creating Site Columns in the browser you get a nice rich UI in order to choose the properties of this field. However when you are recreating this as a feature defined in CAML (Collaborative Application Mark-up Language), which is a type of XML (more on this in my article), then you do not get such a rich experience. You would need to add something like this to the element manifest defined in your feature:

        <Field SourceID="http://schemas.microsoft.com/sharepoint/3.0"
               ID="{C272E927-3748-48db-8FC0-6C7B72A6D220}"
               Group="My Site Columns"
               Name="MyNumber"
               DisplayName="My Number"
               Type="Numeric"
               Commas="FALSE"
               Decimals="0"
               Required="FALSE"
               ReadOnly="FALSE"
               Sealed="FALSE"
               Hidden="FALSE" />

    OK, it's not as nice as the browser UI but I can deal with this. Hang on: Commas="FALSE", and yet for my number 1234 I get 1,234. That is not what I wanted or expected. What gives?

    The answer lies in the difference between a type of "Numeric", which is an implementation of the SPFieldNumber class, and "Integer", which does not correspond to a given SPField class but rather represents a positive or negative integer. The Numeric type does not respect the settings of Commas or NegativeFormat (which defines how to display negative numbers). So we can set the Type to Integer and we are good to go. Yes?

    Sadly no! You will notice at this point that if you deploy your site column into SharePoint something has gone wrong. Your site column is not listed in the Site Column Gallery. The deployment must have failed then? But no, a quick look at the site columns via the API reveals that the column is there. What new evil is this? Unfortunately the base type for integer fields has this lovely attribute set on it:

        UserCreatable = FALSE

    So WSS 3.0 accordingly hides your field in the gallery, as you cannot create fields of this type. However! You can use them in content types just like any other field (except not in the browser UI), and if you add them to the content type as part of your feature then they will show up in the UI as a field on that content type (a sketch of this is shown at the end of this post).

    Most of the time you are not going to be too concerned that your site columns are not listed in the gallery, as you will know that they are there and that they are still useable. So not as bad as you thought after all. Just a little quirky. But that is SharePoint for you.
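    To illustrate that last point, here is a rough sketch (not from the original post) of the Integer variant of the field together with the FieldRef that surfaces it on a content type. The IDs, names and content type ID are invented for the example:

        <!-- Sketch only: GUIDs, names and the content type ID are illustrative. -->
        <Field SourceID="http://schemas.microsoft.com/sharepoint/3.0"
               ID="{D383F043-0C41-4A0B-9C91-532C21A55F10}"
               Group="My Site Columns"
               Name="MyInteger"
               DisplayName="My Integer"
               Type="Integer"
               Required="FALSE" />

        <ContentType ID="0x010012AB34CD56EF78901234567890ABCDEF"
                     Name="My Content Type"
                     Group="My Content Types">
          <FieldRefs>
            <FieldRef ID="{D383F043-0C41-4A0B-9C91-532C21A55F10}" Name="MyInteger" />
          </FieldRefs>
        </ContentType>

    The column itself stays hidden in the gallery, but once the FieldRef binds it to the content type it appears on forms for that content type as normal.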

    Read the article

  • saving and retrieving a text file in java?

    - by user3319432
    import java.sql.*;
    import java.awt.*;
    import javax.swing.*;
    import java.awt.event.*;

    public class SearchRecord extends JFrame implements ActionListener {

        JTextField edpno = new JTextField(10);
        JLabel l0 = new JLabel("EDP Number: ");
        JComboBox fname = new JComboBox();
        JLabel l1 = new JLabel("First Name: ");
        JTextField lname = new JTextField(20);
        JLabel l2 = new JLabel("Last Name: ");
        // JTextField contno = new JTextField(20);
        // JLabel l3 = new JLabel("Contact Number: ");
        JComboBox contno = new JComboBox();
        JLabel l3 = new JLabel("Contact Number: ");
        JButton bOK = new JButton("Save");
        JButton bRetrieve = new JButton("Retrieve");
        private ImageIcon icon;

        // panel that paints a background image behind the form
        JPanel C = new JPanel() {
            protected void paintComponent(Graphics g) {
                g.drawImage(icon.getImage(), 0, 0, null);
                super.paintComponent(g);
            }
        };

        public SearchRecord() {
            icon = new ImageIcon("images/canres.png");
            C.setOpaque(false);
            C.setLayout(new GridLayout(5, 2, 4, 4));
            setTitle("Search Record");
            C.add(l0);
            C.add(edpno);
            edpno.addActionListener(this);
            C.add(l1);
            C.add(fname);
            fname.setForeground(Color.BLUE);
            fname.setFont(new Font(" ", Font.BOLD, 15));
            C.add(l2);
            C.add(lname);
            C.add(l3);
            C.add(contno);
            contno.setForeground(Color.BLUE);
            contno.setFont(new Font(" ", Font.BOLD, 15));
            C.add(bOK);
            bOK.addActionListener(this);
            C.add(bRetrieve);
            bRetrieve.addActionListener(this);
            bOK.setBackground(Color.white);
            bRetrieve.setBackground(Color.white);
            // make the frame actually appear
            add(C);
            setSize(400, 250);
            setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE);
            setVisible(true);
        }

        // looks a record up by EDP number and fills the form fields
        public void retrieveRecord() {
            try {
                // connect to the Access database via the JDBC-ODBC bridge
                Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
                String path = "jdbc:odbc:;DRIVER=Microsoft Access Driver (*.mdb);DBQ=Database/roomassign.mdb";
                Connection con = DriverManager.getConnection(path, "", "");
                Statement s = con.createStatement();
                // column names containing a space must be bracketed in Access SQL
                s.executeQuery("select firstname, Lastname, [contact number] from name WHERE edpno ='" + edpno.getText() + "'");
                ResultSet rs = s.getResultSet();
                while (rs.next()) {
                    fname.setSelectedItem(rs.getString(1));
                    lname.setText(rs.getString(2));
                    contno.setSelectedItem(rs.getString(3));
                    // crs.setSelectedItem(rs.getString(4));
                }
                s.close();
                con.close();
            } catch (Exception q) {
                JOptionPane.showMessageDialog(this, q);
            }
        }

        // updates an existing record with the values currently in the form
        public void saveRecord() {
            try {
                Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
                String path = "jdbc:odbc:;DRIVER=Microsoft Access Driver (*.mdb);DBQ=Database/roomassign.mdb";
                Connection con = DriverManager.getConnection(path, "", "");
                Statement s = con.createStatement();
                String sql = "UPDATE rooms SET Firstname='" + fname.getSelectedItem()
                        + "',Lastname='" + lname.getText()
                        + "',Contactnumber='" + contno.getSelectedItem()
                        + "' WHERE edpno='" + edpno.getText() + "'";
                s.executeUpdate(sql);
                JOptionPane.showMessageDialog(this, "New room record has been successfully saved");
                dispose();
                s.close();
                con.close();
            } catch (Exception mismatch) {
                JOptionPane.showMessageDialog(this, mismatch);
            }
        }

        public void actionPerformed(ActionEvent ako) {
            if (ako.getSource() == bRetrieve) {
                dispose();
            } else if (ako.getSource() == bOK) {
                saveRecord();
            } else if (ako.getSource() == edpno) {
                // pressing Enter in the EDP Number field looks the record up
                retrieveRecord();
            }
        }

        public static void main(String[] args) {
            new SearchRecord();
        }
    }

    Read the article

  • Musings on the launch of SQL Monitor

    - by Phil Factor
    For several years, I was responsible for the smooth running of a large number of enterprise database servers. We ran a network monitoring tool that was primitive by today's standards but which performed the useful function of polling every system, including all the servers in my charge. It ran a configurable script for each service that you needed to monitor, which was merely required to return one of a number of integer values. These integer values represented the pain level of the service, from 10 ("hurtin' real bad") to 1 ("Things is great"). Not only could you program the visual appearance of each server on the network diagram according to the value of the integer, but you could even opt to run a sound file.

    Very soon, we had a large TFT screen, high on the wall of the server room, with every server represented by an icon, and a speaker next to it that would give out a series of grunts, groans, snores, shrieks and funeral marches, depending on the problem. One glance at the display, and you could dive in with iSQL/QA/SSMS and check what was going on with your favourite diagnostic tools; if you saw a server icon burst into flames on the screen or droop like a jelly, you dropped your mug of coffee to do it. It was real fun, but I remember it more for the huge difference it made to have that real-time visibility into how your servers are performing. The management soon stopped making jokes about the real reason we wanted the TFT screen. (It rendered DVDs beautifully, they said; particularly flesh-tints.) If you are instantly alerted when things start to go wrong, then there is a good chance you can fix a problem before the users of the system alert you to it.

    There is a world of difference between this sort of tool, one that gives whoever is 'on watch' in the server room the first warning of a potential problem on any number of servers, and the breed of tool that attempts to provide some sort of prosthetic DBA brain. I like to get the early warning, and the right information to help diagnose a problem: no auto-fix, just the information. I prefer to leave the task of ascertaining the exact cause of a problem to my own routines, custom code, intuition and forensic instincts. A simulated aircraft cockpit doesn't do anything for me, especially before I know where I should be flying.

    Time has moved on, and that TFT screen is now, with SQL Monitor, an iPad or any other mobile or static device that can support a browser. Rather than trying to reproduce the conceptual topology of the servers, it lists them in their groups so as to give a display that scales with the increasing number of databases you monitor. It gives the history of the major events and trends for the servers. It gives the icons and colours that you can spot out of the corner of your eye, but goes on to give you just enough information in drill-down to give you a much clearer idea of where to look with your DBA tools and routines. It doesn't swamp you with information.

    Whereas a few server and database-level problems are pretty easily fixed, others depend on judgement and experience to sort out. Although the idea of an application that automates the bulk of a DBA's skills is attractive to many, I can't see it happening soon. SQL Server's complexity increases faster than the panaceas can be created. In the meantime, I believe that the best way of helping DBAs is to make the monitoring process as simple and effective as possible, and provide the right sort of detail and 'evidence' to allow them to decide on the fix. In the end, it is still down to the skill of the DBA.

    Read the article

  • Investigating on xVelocity (VertiPaq) column size

    - by Marco Russo (SQLBI)
    In January I published an article about how to optimize high cardinality columns in VertiPaq. In the meantime, VertiPaq has been rebranded to xVelocity: the official name is now “xVelocity in-memory analytics engine (VertiPaq)”, but using xVelocity and VertiPaq when we talk about Analysis Services has the same meaning. In this post I'll show how to investigate the column sizes of an existing Tabular database so that you can find the most important columns to be optimized.

    A first approach can be looking in the DataDir of Analysis Services for the folder containing the database. Then, look for the biggest files in all subfolders and you will find the name of a file that contains the name of the most expensive column. However, this heuristic process is not very optimized.

    A better approach is using a DMV that provides the exact information. For example, by using the following query (open SSMS, open an MDX query on the database you are interested in, and execute it) you will see all database objects sorted by used size in a descending way:

        SELECT * FROM $SYSTEM.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS
        ORDER BY used_size DESC

    You can look at the first rows in order to understand what are the most expensive columns in your tabular model. The interesting data provided are:

    - TABLE_ID: the name of the object – it can also be a dictionary or an index
    - COLUMN_ID: the column name the object belongs to – you can also see ID_TO_POS and POS_TO_ID in case they refer to internal indexes
    - RECORDS_COUNT: the number of rows in the column
    - USED_SIZE: the used memory for the object

    By looking at the ratio between USED_SIZE and RECORDS_COUNT you can understand what you can do in order to optimize your tabular model. Your options are:

    - Remove the column. Yes, if it contains data you will never use in a query, simply remove the column from the tabular model.
    - Change granularity. If you are tracking time and you included milliseconds but seconds would be enough, round the data source column to the nearest second. If you have a floating point number but two decimals are good enough (i.e. the temperature), round the number to the nearest decimal that is relevant to you.
    - Split the column. Create two or more columns that have to be combined together in order to produce the original value. This technique is described in the VertiPaq optimization article; a short sketch at the end of this post shows one way to do it in the source query.
    - Sort the table by that column. When you read the data source, you might consider sorting data by this column, so that the compression will be more efficient. However, this technique works better on columns that don't have too many distinct values, and you will probably move the problem to another column. Sorting data starting from the lower density columns (those with a few distinct values) and going to higher density columns (those with high cardinality) is the technique that provides the best compression ratio.

    After the optimization you should be able to reduce the used size and improve the count/size ratio you measured before. If you are interested in a longer discussion about internal storage in VertiPaq and you want to understand why this approach can save you space (and time), you can attend my 24 Hours of PASS session “VertiPaq Under the Hood” on March 21 at 08:00 GMT.
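    As a rough illustration of the "split the column" option, here is a sketch (mine, not from the article) of how the source query feeding the Tabular model could split a high-cardinality integer key. The table and column names are invented for the example:

        -- Sketch only: splits a high-cardinality key into a low-cardinality "high"
        -- part and a repeating "low" part, which typically compress far better
        -- than the original single column.
        SELECT
            TransactionID / 10000 AS TransactionID_High,   -- few distinct values
            TransactionID % 10000 AS TransactionID_Low,    -- cycles every 10,000 rows
            Amount
        FROM Fact.Transactions;

        -- If the original value is ever needed in the model, it can be rebuilt as:
        --   TransactionID = TransactionID_High * 10000 + TransactionID_Low

    The divisor is a tuning choice: the closer the two parts are to equal cardinality, the better the combined compression tends to be.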

    Read the article

< Previous Page | 250 251 252 253 254 255 256 257 258 259 260 261  | Next Page >