Search Results

Search found 13778 results on 552 pages for 'numeric format'.


  • WMI Remote Process Starting

    - by Goober
    Scenario I've written a WMI Wrapper that seems to be quite sufficient, however whenever I run the code to start a remote process on a server, I see the process name appear in the task manager but the process itself does not start like it should (as in, I don't see the command line log window of the process that prints out what it's doing etc.) The process I am trying to start is just a C# application executable that I have written. Below is my WMI Wrapper Code and the code I am using to start running the process. Question Is the process actually running? - Even if it is only displaying the process name in the task manager and not actually launching the application to the users window? Code To Start The Process IPHostEntry hostEntry = Dns.GetHostEntry("InsertServerName"); WMIWrapper wrapper = new WMIWrapper("Insert User Name", "Insert Password", hostEntry.HostName); List<Process> processes = wrapper.GetProcesses(); foreach (Process process in processes) { if (process.Caption.Equals("MyAppName.exe")) { Console.WriteLine(process.Caption); Console.WriteLine(process.CommandLine); int processId; wrapper.StartProcess("E:\\MyData\\Data\\MyAppName.exe", out processId); Console.WriteLine(processId.ToString()); } } Console.ReadLine(); WMI Wrapper Code using System; using System.Collections.Generic; using System.Management; using System.Runtime.InteropServices; using Common.WMI.Objects; using System.Net; namespace Common.WMIWrapper { public class WMIWrapper : IDisposable { #region Constructor /// <summary> /// Creates a new instance of the wrapper /// </summary> /// <param jobName="username"></param> /// <param jobName="password"></param> /// <param jobName="server"></param> public WMIWrapper(string server) { Initialise(server); } /// <summary> /// Creates a new instance of the wrapper /// </summary> /// <param jobName="username"></param> /// <param jobName="password"></param> /// <param jobName="server"></param> public WMIWrapper(string username, string password, string server) { Initialise(username, password, server); } #endregion #region Destructor /// <summary> /// Clean up unmanaged references /// </summary> ~WMIWrapper() { Dispose(false); } #endregion #region Initialise /// <summary> /// Initialise the WMI Connection (local machine) /// </summary> /// <param name="server"></param> private void Initialise(string server) { m_server = server; // set connection options m_connectOptions = new ConnectionOptions(); IPHostEntry host = Dns.GetHostEntry(Environment.MachineName); } /// <summary> /// Initialise the WMI connection /// </summary> /// <param jobName="username">Username to connect to server with</param> /// <param jobName="password">Password to connect to server with</param> /// <param jobName="server">Server to connect to</param> private void Initialise(string username, string password, string server) { m_server = server; // set connection options m_connectOptions = new ConnectionOptions(); IPHostEntry host = Dns.GetHostEntry(Environment.MachineName); if (host.HostName.Equals(server, StringComparison.OrdinalIgnoreCase)) return; m_connectOptions.Username = username; m_connectOptions.Password = password; m_connectOptions.Impersonation = ImpersonationLevel.Impersonate; m_connectOptions.EnablePrivileges = true; } #endregion /// <summary> /// Return a list of available wmi namespaces /// </summary> /// <returns></returns> public List<String> GetWMINamespaces() { ManagementScope wmiScope = new ManagementScope(String.Format("\\\\{0}\\root", this.Server), this.ConnectionOptions); List<String> 
wmiNamespaceList = new List<String>(); ManagementClass wmiNamespaces = new ManagementClass(wmiScope, new ManagementPath("__namespace"), null); ; foreach (ManagementObject ns in wmiNamespaces.GetInstances()) wmiNamespaceList.Add(ns["Name"].ToString()); return wmiNamespaceList; } /// <summary> /// Return a list of available classes in a namespace /// </summary> /// <param jobName="wmiNameSpace">Namespace to get wmi classes for</param> /// <returns>List of classes in the requested namespace</returns> public List<String> GetWMIClassList(string wmiNameSpace) { ManagementScope wmiScope = new ManagementScope(String.Format("\\\\{0}\\root\\{1}", this.Server, wmiNameSpace), this.ConnectionOptions); List<String> wmiClasses = new List<String>(); ManagementObjectSearcher wmiSearcher = new ManagementObjectSearcher(wmiScope, new WqlObjectQuery("SELECT * FROM meta_Class"), null); foreach (ManagementClass wmiClass in wmiSearcher.Get()) wmiClasses.Add(wmiClass["__CLASS"].ToString()); return wmiClasses; } /// <summary> /// Get a list of wmi properties for the specified class /// </summary> /// <param jobName="wmiNameSpace">WMI Namespace</param> /// <param jobName="wmiClass">WMI Class</param> /// <returns>List of properties for the class</returns> public List<String> GetWMIClassPropertyList(string wmiNameSpace, string wmiClass) { List<String> wmiClassProperties = new List<string>(); ManagementClass managementClass = GetWMIClass(wmiNameSpace, wmiClass); foreach (PropertyData property in managementClass.Properties) wmiClassProperties.Add(property.Name); return wmiClassProperties; } /// <summary> /// Returns a list of methods for the class /// </summary> /// <param jobName="wmiNameSpace"></param> /// <param jobName="wmiClass"></param> /// <returns></returns> public List<String> GetWMIClassMethodList(string wmiNameSpace, string wmiClass) { List<String> wmiClassMethods = new List<string>(); ManagementClass managementClass = GetWMIClass(wmiNameSpace, wmiClass); foreach (MethodData method in managementClass.Methods) wmiClassMethods.Add(method.Name); return wmiClassMethods; } /// <summary> /// Retrieve the specified management class /// </summary> /// <param jobName="wmiNameSpace">Namespace of the class</param> /// <param jobName="wmiClass">Type of the class</param> /// <returns></returns> public ManagementClass GetWMIClass(string wmiNameSpace, string wmiClass) { ManagementScope wmiScope = new ManagementScope(String.Format("\\\\{0}\\root\\{1}", this.Server, wmiNameSpace), this.ConnectionOptions); ManagementClass managementClass = null; ManagementObjectSearcher wmiSearcher = new ManagementObjectSearcher(wmiScope, new WqlObjectQuery(String.Format("SELECT * FROM meta_Class WHERE __CLASS = '{0}'", wmiClass)), null); foreach (ManagementClass wmiObject in wmiSearcher.Get()) managementClass = wmiObject; return managementClass; } /// <summary> /// Get an instance of the specficied class /// </summary> /// <param jobName="wmiNameSpace">Namespace of the classes</param> /// <param jobName="wmiClass">Type of the classes</param> /// <returns>Array of management classes</returns> public ManagementObject[] GetWMIClassObjects(string wmiNameSpace, string wmiClass) { ManagementScope wmiScope = new ManagementScope(String.Format("\\\\{0}\\root\\{1}", this.Server, wmiNameSpace), this.ConnectionOptions); List<ManagementObject> wmiClasses = new List<ManagementObject>(); ManagementObjectSearcher wmiSearcher = new ManagementObjectSearcher(wmiScope, new WqlObjectQuery(String.Format("SELECT * FROM {0}", wmiClass)), null); foreach 
(ManagementObject wmiObject in wmiSearcher.Get()) wmiClasses.Add(wmiObject); return wmiClasses.ToArray(); } /// <summary> /// Get a full list of services /// </summary> /// <returns></returns> public List<Service> GetServices() { return GetService(null); } /// <summary> /// Get a list of services /// </summary> /// <returns></returns> public List<Service> GetService(string name) { ManagementObject[] services = GetWMIClassObjects("CIMV2", "WIN32_Service"); List<Service> serviceList = new List<Service>(); for (int i = 0; i < services.Length; i++) { ManagementObject managementObject = services[i]; Service service = new Service(managementObject); service.Status = (string)managementObject["Status"]; service.Name = (string)managementObject["Name"]; service.DisplayName = (string)managementObject["DisplayName"]; service.PathName = (string)managementObject["PathName"]; service.ProcessId = (uint)managementObject["ProcessId"]; service.Started = (bool)managementObject["Started"]; service.StartMode = (string)managementObject["StartMode"]; service.ServiceType = (string)managementObject["ServiceType"]; service.InstallDate = (string)managementObject["InstallDate"]; service.Description = (string)managementObject["Description"]; service.Caption = (string)managementObject["Caption"]; if (String.IsNullOrEmpty(name) || name.Equals(service.Name, StringComparison.OrdinalIgnoreCase)) serviceList.Add(service); } return serviceList; } /// <summary> /// Get a list of processes /// </summary> /// <returns></returns> public List<Process> GetProcesses() { return GetProcess(null); } /// <summary> /// Get a list of processes /// </summary> /// <returns></returns> public List<Process> GetProcess(uint? processId) { ManagementObject[] processes = GetWMIClassObjects("CIMV2", "WIN32_Process"); List<Process> processList = new List<Process>(); for (int i = 0; i < processes.Length; i++) { ManagementObject managementObject = processes[i]; Process process = new Process(managementObject); process.Priority = (uint)managementObject["Priority"]; process.ProcessId = (uint)managementObject["ProcessId"]; process.Status = (string)managementObject["Status"]; DateTime createDate; if (ConvertFromWmiDate((string)managementObject["CreationDate"], out createDate)) process.CreationDate = createDate.ToString("dd-MMM-yyyy HH:mm:ss"); process.Caption = (string)managementObject["Caption"]; process.CommandLine = (string)managementObject["CommandLine"]; process.Description = (string)managementObject["Description"]; process.ExecutablePath = (string)managementObject["ExecutablePath"]; process.ExecutionState = (string)managementObject["ExecutionState"]; process.MaximumWorkingSetSize = (UInt32?)managementObject ["MaximumWorkingSetSize"]; process.MinimumWorkingSetSize = (UInt32?)managementObject["MinimumWorkingSetSize"]; process.KernelModeTime = (UInt64)managementObject["KernelModeTime"]; process.ThreadCount = (UInt32)managementObject["ThreadCount"]; process.UserModeTime = (UInt64)managementObject["UserModeTime"]; process.VirtualSize = (UInt64)managementObject["VirtualSize"]; process.WorkingSetSize = (UInt64)managementObject["WorkingSetSize"]; if (processId == null || process.ProcessId == processId.Value) processList.Add(process); } return processList; } /// <summary> /// Start the specified process /// </summary> /// <param jobName="commandLine"></param> /// <returns></returns> public bool StartProcess(string command, out int processId) { processId = int.MaxValue; ManagementClass processClass = GetWMIClass("CIMV2", "WIN32_Process"); object[] objectsIn = 
new object[4]; objectsIn[0] = command; processClass.InvokeMethod("Create", objectsIn); if (objectsIn[3] == null) return false; processId = int.Parse(objectsIn[3].ToString()); return true; } /// <summary> /// Schedule a process on the remote machine /// </summary> /// <param name="command"></param> /// <param name="scheduleTime"></param> /// <param name="jobName"></param> /// <returns></returns> public bool ScheduleProcess(string command, DateTime scheduleTime, out string jobName) { jobName = String.Empty; ManagementClass scheduleClass = GetWMIClass("CIMV2", "Win32_ScheduledJob"); object[] objectsIn = new object[7]; objectsIn[0] = command; objectsIn[1] = String.Format("********{0:00}{1:00}{2:00}.000000+060", scheduleTime.Hour, scheduleTime.Minute, scheduleTime.Second); objectsIn[5] = true; scheduleClass.InvokeMethod("Create", objectsIn); if (objectsIn[6] == null) return false; UInt32 scheduleid = (uint)objectsIn[6]; jobName = scheduleid.ToString(); return true; } /// <summary> /// Returns the current time on the remote server /// </summary> /// <returns></returns> public DateTime Now() { ManagementScope wmiScope = new ManagementScope(String.Format("\\\\{0}\\root\\{1}", this.Server, "CIMV2"), this.ConnectionOptions); ManagementClass managementClass = null; ManagementObjectSearcher wmiSearcher = new ManagementObjectSearcher(wmiScope, new WqlObjectQuery(String.Format("SELECT * FROM Win32_LocalTime")), null); DateTime localTime = DateTime.MinValue; foreach (ManagementObject time in wmiSearcher.Get()) { UInt32 day = (UInt32)time["Day"]; UInt32 month = (UInt32)time["Month"]; UInt32 year = (UInt32)time["Year"]; UInt32 hour = (UInt32)time["Hour"]; UInt32 minute = (UInt32)time["Minute"]; UInt32 second = (UInt32)time["Second"]; localTime = new DateTime((int)year, (int)month, (int)day, (int)hour, (int)minute, (int)second); }; return localTime; } /// <summary> /// Converts a wmi date into a proper date /// </summary> /// <param jobName="wmiDate">Wmi formatted date</param> /// <returns>Date time object</returns> private static bool ConvertFromWmiDate(string wmiDate, out DateTime properDate) { properDate = DateTime.MinValue; string properDateString; // check if string is populated if (String.IsNullOrEmpty(wmiDate)) return false; wmiDate = wmiDate.Trim().ToLower().Replace("*", "0"); string[] months = new string[] { "Jan", "Feb", "Mar", "Apr", "May", "Jun", "Jul", "Aug", "Sep", "Oct", "Nov", "Dec" }; try { properDateString = String.Format("{0}-{1}-{2} {3}:{4}:{5}.{6}", wmiDate.Substring(6, 2), months[int.Parse(wmiDate.Substring(4, 2)) - 1], wmiDate.Substring(0, 4), wmiDate.Substring(8, 2), wmiDate.Substring(10, 2), wmiDate.Substring(12, 2), wmiDate.Substring(15, 6)); } catch (InvalidCastException) { return false; } catch (ArgumentOutOfRangeException) { return false; } // try and parse the new date if (!DateTime.TryParse(properDateString, out properDate)) return false; // true if conversion successful return true; } private bool m_disposed; #region IDisposable Members /// <summary> /// Managed dispose /// </summary> public void Dispose() { Dispose(true); GC.SuppressFinalize(this); } /// <summary> /// Dispose of managed and unmanaged objects /// </summary> /// <param jobName="disposing"></param> public void Dispose(bool disposing) { if (disposing) { m_connectOptions = null; } } #endregion #region Properties private ConnectionOptions m_connectOptions; /// <summary> /// Gets or sets the management scope /// </summary> private ConnectionOptions ConnectionOptions { get { return m_connectOptions; } set { 
m_connectOptions = value; } } private String m_server; /// <summary> /// Gets or sets the server to connect to /// </summary> public String Server { get { return m_server; } set { m_server = value; } } #endregion } }
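
    For illustration, a minimal sketch of the same remote-start call in Python, assuming the third-party "wmi" package and hypothetical server/credential values (none of these names come from the original code). Worth noting: processes created remotely through Win32_Process.Create run in a non-interactive session, so they show up in Task Manager without ever opening a window on anyone's desktop.

        # Sketch only: assumes the third-party "wmi" package on a Windows client,
        # plus hypothetical server name, user name and password.
        import wmi

        connection = wmi.WMI(computer="MyServer", user="MyUser", password="MyPassword")

        # Win32_Process.Create starts the executable on the remote machine. Remotely
        # created processes run in a non-interactive session, which is why the name
        # appears in Task Manager while no command-line window is ever shown.
        process_id, return_value = connection.Win32_Process.Create(
            CommandLine=r"E:\MyData\Data\MyAppName.exe"
        )
        print(process_id, return_value)  # return_value 0 means the call succeeded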

    Read the article

  • PowerShell Import DnsShell Module

    - by krousemw
    So here's the list of available modules in this directory. As you can see DnsShell is there. PS C:\windows\system32> Get-Module -ListAvailable Directory: C:\windows\system32\WindowsPowerShell\v1.0\Modules ModuleType Name ExportedCommands ---------- ---- ---------------- Manifest ActiveDirectory {Get-ADRootDSE, New-ADObject, Rename- ADObject, Move-ADObject...} Manifest AppLocker {Set-AppLockerPolicy, Get-AppLockerPolicy, Test-AppLockerPolicy, Get-AppLo... Manifest BitsTransfer {Add-BitsFile, Remove-BitsTransfer, Complete-BitsTransfer, Get-BitsTransfe... Manifest CimCmdlets {Get-CimAssociatedInstance, Get-CimClass, Get-CimInstance, Get-CimSession...} Binary DnsShell Script ISE {New-IseSnippet, Import-IseSnippet, Get- IseSnippet} Manifest Microsoft.PowerShell.Diagnostics {Get-WinEvent, Get-Counter, Import-Counter, Export-Counter...} Manifest Microsoft.PowerShell.Host {Start-Transcript, Stop-Transcript} Manifest Microsoft.PowerShell.Management {Add-Content, Clear-Content, Clear- ItemProperty, Join-Path...} Manifest Microsoft.PowerShell.Security {Get-Acl, Set-Acl, Get-PfxCertificate, Get-Credential...} Manifest Microsoft.PowerShell.Utility {Format-List, Format-Custom, Format-Table, Format-Wide...} Manifest Microsoft.WSMan.Management {Disable-WSManCredSSP, Enable- WSManCredSSP, Get-WSManCredSSP, Set-WSManQui... Script PSDiagnostics {Disable-PSTrace, Disable- PSWSManCombinedTrace, Disable-WSManTrace, Enable... Binary PSScheduledJob {New-JobTrigger, Add-JobTrigger, Remove-JobTrigger, Get-JobTrigger...} Manifest PSWorkflow {New-PSWorkflowExecutionOption, New-PSWorkflowSession, nwsn} Manifest PSWorkflowUtility Invoke-AsWorkflow Manifest TroubleshootingPack {Get-TroubleshootingPack, Invoke-TroubleshootingPack} When I run the command to Import-Module DnsShell, I get this error and I dont know why.. PS C:\windows\system32> Import-Module DnsShell Import-Module : Could not load file or assembly 'file:///C:\windows\system32\WindowsPowerShell\v1.0\Modules\DnsShell\DnsShell.dll' or one of its dependencies. Operation is not supported. (Exception from HRESULT: 0x80131515) At line:1 char:1 + Import-Module DnsShell + ~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : NotSpecified: (:) [Import-Module], FileLoadException + FullyQualifiedErrorId : System.IO.FileLoadException,Microsoft.PowerShell.Commands.ImportModuleCommand Note: I would have posted pictures but I needed a rep of at least 10 in serverfault

    Read the article

  • Formatting a truecrypt volume on windows: Truecrypt failed to obtain administrator privileges

    - by daveh551
    Following Hanselman's advice (http://www.hanselman.com/blog/TheComputerBackupRuleOfThree.aspx), I'm setting up a 2TB external drive as a backup, and encrypting it using TrueCrypt. This is under Windows 7. (I've used TrueCrypt before, but not since XP days.) I installed TrueCrypt with default options. After taking 23 hours to format the 2TB container, it finishes with an error saying "TrueCrypt cannot obtain administrator privileges." When I click OK on that, it says it can't format the container as NTFS and asks whether I want to format it as FAT instead; I don't, so I click No, and it exits with nothing to show for 23 hours of compute time. So I set the "Run as administrator" flag on the TrueCrypt shortcut, and try again. This time, when I go to format the container (prepared to wait ANOTHER 23 hours), it pops up a dialog saying you're formatting this as administrator. "The volume may be created with permissions that will not allow you to write to the volume when it is mounted. If you want to avoid that, close this instance of Volume Creation Wizard and launch a new one without administrator privileges." Great - I can either run as non-administrator, and it will fail to create the volume at all, or I can run as administrator and create a volume I can't write to. I think I'm doing something wrong, but it's not clear to me what. Any help?

    Read the article

  • How to rescue from an SD (SDHC) card that I can't reformat (possible hardware failure)

    - by sbwoodside
    I have a transcend 16GB SDHC card and a lot of photos on it that I'd like to recover. When I plug it into the SD card reader, it takes a while for the Mac to even recognize that there's a disk present, and it shows up as 1.07GB with geometry 520/64/63 (according to fdisk). First I tried file recovery: PhotoRec: no files are found (the images are in CR2 format and I'm using testdisk-6.14-WIP which claims to recognize that format under TIF) dd / ddrescue: they create a 1.07GB image, same problem as above TestDisk: doesn't find any partitions to recover I found a source saying that the correct geometry for this type of SD Card is Heads 255, Sectors/Track 63, Cylinders 1953, so I tried manually setting that geometry in PhotoRec/TestDisk. No improvement. Next I tried formatting the disk with fdisk. After writing and quitting, I ran fdisk again and it reported that the new format hadn't been saved on the disk. I also tried resetting the format/partitions with TestDisk and that failed also. The fdisk log is below. I don't really care about the card, I've already ordered a new SanDisk card. But I'd like to get the data off. Maybe, is there any way to force dd or some other tool to create an image of the disk based on the original geometry and not on what the card "thinks" its geometry is? Or am I missing something?
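
    As a side note, the two geometries mentioned above account for the size mismatch; a quick check in Python (assuming the usual 512-byte sectors):

        # CHS geometry to capacity, assuming 512-byte sectors.
        def chs_bytes(cylinders, heads, sectors, sector_size=512):
            return cylinders * heads * sectors * sector_size

        reported = chs_bytes(520, 64, 63)     # geometry fdisk currently reports
        expected = chs_bytes(1953, 255, 63)   # geometry quoted for this type of card

        print(reported / 10**9)   # ~1.07 GB, matching the size the Mac sees
        print(expected / 10**9)   # ~16.06 GB, matching the card's rated capacity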

    Read the article

  • Change the Default Date setting in Word 2010

    - by Chris
    I am using Word 2010 and Windows 7. You know how when you start typing a date in Word it will automatically suggest what it thinks you want? Like if I start typing “6/29”, a little grey bubble will display “6/29/13 (Press ENTER to Insert)”. How do I get it so the bubble will display the year in a 4 digit format, such as "6/29/2013 (Press Enter to Insert)"? The below picture is how it looks when typing a date into Word. I have already gone to the Date & Time option under the Insert menu and the date format that I want is already selected. I think this is only for using quickparts anyway, so the date automatically updates when you open a document. The Region and Language settings under the Control Panel are correct as well. I thought at one point I found it somewhere under options, but I am sure I looked through everything many times and I can’t find it. I posted this exact question at the Microsoft website and someone replied: Go to the Windows Control Panel and click on Clock, Language and Region and then on Change the date, time, or number format and then modify the Short date format so that it is what you want to be used. So please don't suggest this again, because in my question I did say that I already tried this and it doesn't work, at least not for Word, in this situation. Thanks.

    Read the article

  • How to stop RAID5 array while it is shown to be busy?

    - by RCola
    I have a raid5 array and need to stop it, but while trying to stop it getting error. # cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid5 sde1[3](F) sdc1[4](F) sdf1[2] sdd1[1] 2120320 blocks level 5, 32k chunk, algorithm 2 [3/2] [_UU] unused devices: <none> # mdadm --stop mdadm: metadata format 00.90 unknown, ignored. mdadm: metadata format 00.90 unknown, ignored. mdadm: No devices given. # mdadm --stop /dev/md0 mdadm: metadata format 00.90 unknown, ignored. mdadm: metadata format 00.90 unknown, ignored. mdadm: fail to stop array /dev/md0: Device or resource busy and # lsof | grep md0 md0_raid5 965 root cwd DIR 8,1 4096 2 / md0_raid5 965 root rtd DIR 8,1 4096 2 / md0_raid5 965 root txt unknown /proc/965/exe # cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid5 sde1[3](F) sdc1[4](F) sdf1[2] sdd1[1] 2120320 blocks level 5, 32k chunk, algorithm 2 [3/2] [_UU] # grep md0 /proc/mdstat md0 : active raid5 sde1[3](F) sdc1[4](F) sdf1[2] sdd1[1] # grep md0 /proc/partitions 9 0 2120320 md0 While booting, md1 is mounted ok but md0 failed for some unknown reason # dmesg | grep md[0-9] [ 4.399658] raid5: allocated 3179kB for md1 [ 4.400432] raid5: raid level 5 set md1 active with 3 out of 3 devices, algorithm 2 [ 4.400678] md1: detected capacity change from 0 to 2121793536 [ 4.403135] md1: unknown partition table [ 38.937932] Filesystem "md1": Disabling barriers, trial barrier write failed [ 38.941969] XFS mounting filesystem md1 [ 41.058808] Ending clean XFS mount for filesystem: md1 [ 46.325684] raid5: allocated 3179kB for md0 [ 46.327103] raid5: raid level 5 set md0 active with 2 out of 3 devices, algorithm 2 [ 46.330620] md0: detected capacity change from 0 to 2171207680 [ 46.335598] md0: unknown partition table [ 46.410195] md: recovery of RAID array md0 [ 117.970104] md: md0: recovery done. # cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid5 sde1[0] sdf1[2] sdd1[1] 2120320 blocks level 5, 32k chunk, algorithm 2 [3/3] [UUU] md1 : active raid5 sdc2[0] sdf2[2] sde2[3](S) sdd2[1] 2072064 blocks level 5, 128k chunk, algorithm 2 [3/3] [UUU]

    Read the article

  • Firefox not displaying icons in KhanAcademy

    - by ADTC
    If you don't know what Khan Academy is, check it out. It's awesome. (For testing purpose you may view any video on the website.) My problem -- it's a minor problem, but annoying -- is that in Firefox (Windows 7), the icons below the video are shown as boxes with hex codes in them. This means the icons come from some font that isn't getting downloaded by Firefox. How it appears on Chrome (Windows 7), Safari (Mac OS X) and Stainless (Mac OS X): I checked out the source and found that the font in question is called "FontAwesome". I found this question in S.O. that may explain why this happens -- the CSS does use single quotes to enclose the font's src location. However I don't have any write access to Khan Academy servers so I can't modify the actual website. I want to know if this can be fixed in Firefox, and how. I can run Greasemonkey scripts if that would help. Also, would manually downloading the font and adding it to Windows' Fonts folder help? I tried this with the TTF font, and it does not help. For reference, the CSS that sets this font up (not processed properly by Firefox) is: @font-face { font-family:'FontAwesome'; src:url('./fontawesome-webfont.eot'); src:url('./fontawesome-webfont.eot?#iefix') format('embedded-opentype'), url('./fontawesome-webfont.woff') format('woff'), url('./fontawesome-webfont.ttf') format('truetype'), url('./fontawesome-webfont.svg#FontAwesome') format('svg'); font-weight:normal; font-style:normal } [class^="icon-"]:before, [class*=" icon-"]:before { font-family:FontAwesome; font-weight:normal; font-style:normal; display:inline-block; text-decoration:inherit }

    Read the article

  • Firefox cannot render icons from Font Awesome webfont set

    - by ADTC
    In Firefox (Windows 7), icons and glyphs that are called from the Font Awesome package do not render properly. An example of this can be seen on the Khan Academy website. Below the video the icons are shown as boxes with hex codes in them. This means that it isn't getting downloaded by Firefox. How it appears on Chrome (Windows 7), Safari (Mac OS X) and Stainless (Mac OS X): I found this question on Stack Overflow that may explain why this happens -- the CSS does use single quotes to enclose the font's src location. However, I don't have any write access to Khan Academy servers so I can't modify the actual website. I want to know if this can be fixed in Firefox, and how. I can run Greasemonkey scripts if that would help. I've already tried manually downloading the font and adding it to Windows' Fonts folder but this does not help. For reference, the CSS that sets this font up (not processed properly by Firefox) is: @font-face { font-family:'FontAwesome'; src:url('./fontawesome-webfont.eot'); src:url('./fontawesome-webfont.eot?#iefix') format('embedded-opentype'), url('./fontawesome-webfont.woff') format('woff'), url('./fontawesome-webfont.ttf') format('truetype'), url('./fontawesome-webfont.svg#FontAwesome') format('svg'); font-weight:normal; font-style:normal } [class^="icon-"]:before, [class*=" icon-"]:before { font-family:FontAwesome; font-weight:normal; font-style:normal; display:inline-block; text-decoration:inherit }

    Read the article

  • No partition on USB Flash Drive?

    - by Skytunnel
    A friend gave me a corrupted USB memory stick to try to recover data from. But I've had some unusual results, so I thought I'd share to see if anyone is familiar with this problem... First off I just tried opening it from my own PC. Windows prompted to format the drive, which I of course declined. I downloaded TestDisk to analyse the drive, and right away I noticed something strange: on the listed drives it comes up as Disk /dev/sdc - 6144 B - USB Flash Drive That's right, the first USB flash drive smaller than a floppy disk!? Moving on anyway... the first analysis comes up with: Partition sector doesn't have the endmark 0xAA55 TestDisk's Quick Search gave no results, so I moved on to Deeper Search: No partition found or selected for recovery This left me stumped. I tried a couple of other programs with no success. I did manage to get a backup image, but it was just as small as TestDisk indicated, so there was nothing of use on it. After a few hours trying various suggestions from other sources, I gave in and just tried formatting the drive. But it returned the message: Windows was unable to complete the format. From googling that, the suggestion was to delete the partition. But there is no partition to delete in this case. Most recently I've tried formatting from cmd, and got this result: Format D: /FS:FAT32 The type of the file system is RAW The new file system is FAT32 Verifying 0M 11 bad sectors were encountered during the format. These sectors cannot be guaranteed to have been cleaned The volume is too small for FAT32 Anyone got any suggestions? UPDATE: As per a suggestion from @Karen, I tried running a CLEAN from DISKPART, with results as follows: DiskPart has encountered an error: The request could not be performed because of an I/O device error.

    Read the article

  • converting apache rewrite rules to nginx for xenforo

    - by nick
    Hi all, I am migrating some forums from vbulletin 3.8.x to xenforo, and trying to keep my old link structure alive. Basically, XF provides some php files that I can redirect the old url style to and it handles the proper 301 redirection. Regardless of that end, I am having difficulty rewriting the rules which I can only find defined in apache's rewrite style: RewriteRule [\d]+-[^/]+/.+-([\d]+)/([\d]+)/ showthread.php?t=$1&page=$2 [NC,L] RewriteRule [\d]+-[^/]+/.+-([\d]+)/ showthread.php?t=$1 [NC,L] RewriteRule ([\d]+)-[^/]+/([\d]+)/ forumdisplay.php?f=$1&page=$2 [NC,L] RewriteRule ([\d]+)-[^/]+/ forumdisplay.php?f=$1 [NC,L] I have been experimenting and thought this should work, but obviously not: if (!-e $request_filename) { rewrite [0-9a-zA-Z\-]/[0-9a-zA-Z\-]-([0-9])/([0-9])/ /showthread.php?t=$1&page=$2 last; rewrite [0-9a-zA-Z\-]/[0-9a-zA-Z\-]-([0-9])/ /showthread.php?t=$1 last; rewrite ([0-9])-[0-9a-zA-Z\-]/([0-9])/ /forumdisplay.php?f=$1&page=$2 last; rewrite ([0-9])-[0-9a-zA-Z\-]/ /forumdisplay.php?f=$1 last; rewrite ^(.*)$ /index.php last; } old vB showthread format: website.tld/233-website-issues-requests/wiki-down-73789/ new XF showthread format: website.tld/threads/the-wiki-is-down.65509/ old vB forumdisplay format: website.tld/233-website-issues-requests/ new XF forumdisplay format: website.tld/forums/website-issues-and-requests.253/
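
    One thing worth checking before touching nginx: the attempted rules drop the + quantifiers that the Apache rules rely on. A quick way to sanity-check the patterns and their capture groups against the old vB URLs quoted above (a sketch in Python; the same expressions then go into the nginx rewrites):

        # Sanity-check the Apache-style patterns against the old vBulletin URLs
        # quoted above; the capture groups should match what the Apache rules extract.
        import re

        thread = re.compile(r'[\d]+-[^/]+/.+-([\d]+)/')
        forum  = re.compile(r'([\d]+)-[^/]+/')

        print(thread.search('/233-website-issues-requests/wiki-down-73789/').groups())
        # ('73789',)  ->  showthread.php?t=73789
        print(forum.search('/233-website-issues-requests/').groups())
        # ('233',)    ->  forumdisplay.php?f=233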

    Read the article

  • Problem with Informix JDBC, MONEY and decimal separator in string literals

    - by Michal Niklas
    I have problem with JDBC application that uses MONEY data type. When I insert into MONEY column: insert into _money_test (amt) values ('123.45') I got exception: Character to numeric conversion error The same SQL works from native Windows application using ODBC driver. I live in Poland and have Polish locale and in my country comma separates decimal part of number, so I tried: insert into _money_test (amt) values ('123,45') And it worked. I checked that in PreparedStatement I must use dot separator: 123.45. And of course I can use: insert into _money_test (amt) values (123.45) But some code is "general", it imports data from csv file and it was safe to put number into string literal. How to force JDBC to use DBMONEY (or simply dot) in literals? My workstation is WinXP. I have ODBC and JDBC Informix client in version 3.50 TC5/JC5. I have set DBMONEY to just dot: DBMONEY=. EDIT: Test code in Jython: import sys import traceback from java.sql import DriverManager from java.lang import Class Class.forName("com.informix.jdbc.IfxDriver") QUERY = "insert into _money_test (amt) values ('123.45')" def test_money(driver, db_url, usr, passwd): try: print("\n\n%s\n--------------" % (driver)) db = DriverManager.getConnection(db_url, usr, passwd) c = db.createStatement() c.execute("delete from _money_test") c.execute(QUERY) rs = c.executeQuery("select amt from _money_test") while (rs.next()): print('[%s]' % (rs.getString(1))) rs.close() c.close() db.close() except: print("there were errors!") s = traceback.format_exc() sys.stderr.write("%s\n" % (s)) print(QUERY) test_money("com.informix.jdbc.IfxDriver", 'jdbc:informix-sqli://169.0.1.225:9088/test:informixserver=ol_225;DB_LOCALE=pl_PL.CP1250;CLIENT_LOCALE=pl_PL.CP1250;charSet=CP1250', 'informix', 'passwd') test_money("sun.jdbc.odbc.JdbcOdbcDriver", 'jdbc:odbc:test', 'informix', 'passwd') Results when I run money literal with dot and comma: C:\db_examples>jython ifx_jdbc_money.py insert into _money_test (amt) values ('123,45') com.informix.jdbc.IfxDriver -------------- [123.45] sun.jdbc.odbc.JdbcOdbcDriver -------------- there were errors! Traceback (most recent call last): File "ifx_jdbc_money.py", line 16, in test_money c.execute(QUERY) SQLException: java.sql.SQLException: [Informix][Informix ODBC Driver][Informix]Character to numeric conversion error C:\db_examples>jython ifx_jdbc_money.py insert into _money_test (amt) values ('123.45') com.informix.jdbc.IfxDriver -------------- there were errors! Traceback (most recent call last): File "ifx_jdbc_money.py", line 16, in test_money c.execute(QUERY) SQLException: java.sql.SQLException: Character to numeric conversion error sun.jdbc.odbc.JdbcOdbcDriver -------------- [123.45]
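
    As noted above, the dot separator always works when the value is bound through a PreparedStatement, so one hedged workaround for imported CSV data is to bind the amount as a parameter instead of building a string literal. A sketch using the same Jython/JDBC setup as the test code (connection values mirror the test above):

        from java.math import BigDecimal
        from java.sql import DriverManager

        # Bind the amount as a parameter so the driver never has to parse a
        # locale-dependent string literal embedded in the SQL text.
        db_url = "jdbc:informix-sqli://host:9088/test:informixserver=ol_225"  # as in test_money
        db = DriverManager.getConnection(db_url, "informix", "passwd")
        ps = db.prepareStatement("insert into _money_test (amt) values (?)")
        ps.setBigDecimal(1, BigDecimal("123.45"))  # the dot is used regardless of locale
        ps.executeUpdate()
        ps.close()
        db.close()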

    Read the article

  • Implementing Naïve Bayes algorithm in Java - Need some guidance

    - by techventure
    Hello Stack Overflow people. As a school assignment I'm required to implement the Naïve Bayes algorithm, which I am intending to do in Java. In trying to understand how it's done, I've read the book "Data Mining - Practical Machine Learning Tools and Techniques", which has a section on this topic, but I am still unsure on some primary points that are blocking my progress. Since I'm seeking guidance, not a solution, I'll tell you what I'm thinking in my head and what I think is the correct approach, and in return ask for correction/guidance, which will very much be appreciated. Please note that I am an absolute beginner with the Naïve Bayes algorithm, data mining and programming in general, so you might see stupid comments/calculations below. The training data set I'm given has 4 attributes/features that are numeric and normalized (in range [0 1]) using Weka (no missing values) and one nominal class (yes/no). 1) The data coming from a csv file is numeric, HENCE * Given the attributes are numeric, I use the PDF (probability density function) formula. + To calculate the PDF in Java I first separate the attributes based on whether they're in class yes or class no and hold them in different arrays (array class yes and array class no) + Then calculate the mean (sum of the values / number of values) and standard deviation for each of the 4 attributes (columns) of each class + Now to find the PDF of a given value (n) I do (n-mean)^2/(2*SD^2), + Then to find P( yes | E) and P( no | E) I multiply the PDF values of all 4 given attributes and compare which is larger, which indicates the class it belongs to. In terms of Java, I'm using an ArrayList of ArrayList and Double to store the attribute values. Lastly, I'm unsure how to get new data. Should I ask for an input file (like csv) or prompt on the command line for 4 values? I'll stop here for now (I do have more questions) but I'm worried this won't get any responses given how long it's got. I will really appreciate those who give their time to read my problems and comment.
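
    For reference, a small sketch of the scoring step being described, in plain Python rather than Java, with made-up data. Note that the full Gaussian density is exp(-(n-mean)^2/(2*SD^2)) divided by sqrt(2*pi)*SD, not just the exponent, and that the class prior is normally multiplied in as well:

        import math

        def mean_std(values):
            m = sum(values) / len(values)
            var = sum((v - m) ** 2 for v in values) / (len(values) - 1)
            return m, math.sqrt(var)

        def gaussian_pdf(x, m, sd):
            return math.exp(-((x - m) ** 2) / (2 * sd ** 2)) / (math.sqrt(2 * math.pi) * sd)

        def train(rows, labels):
            model = {}
            for cls in set(labels):
                cls_rows = [r for r, l in zip(rows, labels) if l == cls]
                stats = [mean_std(column) for column in zip(*cls_rows)]
                model[cls] = (len(cls_rows) / len(rows), stats)  # (class prior, per-attribute stats)
            return model

        def classify(model, row):
            def score(cls):
                prior, stats = model[cls]
                p = prior
                for x, (m, sd) in zip(row, stats):
                    p *= gaussian_pdf(x, m, sd)
                return p
            return max(model, key=score)

        # Hypothetical rows of 4 attributes normalised to [0, 1], as in the assignment data.
        rows = [[0.1, 0.2, 0.3, 0.4], [0.2, 0.1, 0.4, 0.3],
                [0.9, 0.8, 0.7, 0.9], [0.8, 0.9, 0.8, 0.7]]
        labels = ["no", "no", "yes", "yes"]
        print(classify(train(rows, labels), [0.85, 0.9, 0.75, 0.8]))  # -> yes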

    Read the article

  • how to export bind and keyframe bone poses from blender to use in OpenGL

    - by SaldaVonSchwartz
    EDIT: I decided to reformulate the question in much simpler terms to see if someone can give me a hand with this. Basically, I'm exporting meshes, skeletons and actions from blender into an engine of sorts that I'm working on. But I'm getting the animations wrong. I can tell the basic motion paths are being followed but there's always an axis of translation or rotation which is wrong. I think the problem is most likely not in my engine code (OpenGL-based) but rather in either my misunderstanding of some part of the theory behind skeletal animation / skinning or the way I am exporting the appropriate joint matrices from blender in my exporter script. I'll explain the theory, the engine animation system and my blender export script, hoping someone might catch the error in either or all of these. The theory: (I'm using column-major ordering since that's what I use in the engine cause it's OpenGL-based) Assume I have a mesh made up of a single vertex v, along with a transformation matrix M which takes the vertex v from the mesh's local space to world space. That is, if I was to render the mesh without a skeleton, the final position would be gl_Position = ProjectionMatrix * M * v. Now assume I have a skeleton with a single joint j in bind / rest pose. j is actually another matrix. A transform from j's local space to its parent space which I'll denote Bj. if j was part of a joint hierarchy in the skeleton, Bj would take from j space to j-1 space (that is to its parent space). However, in this example j is the only joint, so Bj takes from j space to world space, like M does for v. Now further assume I have a a set of frames, each with a second transform Cj, which works the same as Bj only that for a different, arbitrary spatial configuration of join j. Cj still takes vertices from j space to world space but j is rotated and/or translated and/or scaled. Given the above, in order to skin vertex v at keyframe n. I need to: take v from world space to joint j space modify j (while v stays fixed in j space and is thus taken along in the transformation) take v back from the modified j space to world space So the mathematical implementation of the above would be: v' = Cj * Bj^-1 * v. Actually, I have one doubt here.. I said the mesh to which v belongs has a transform M which takes from model space to world space. And I've also read in a couple textbooks that it needs to be transformed from model space to joint space. But I also said in 1 that v needs to be transformed from world to joint space. So basically I'm not sure if I need to do v' = Cj * Bj^-1 * v or v' = Cj * Bj^-1 * M * v. Right now my implementation multiples v' by M and not v. But I've tried changing this and it just screws things up in a different way cause there's something else wrong. Finally, If we wanted to skin a vertex to a joint j1 which in turn is a child of a joint j0, Bj1 would be Bj0 * Bj1 and Cj1 would be Cj0 * Cj1. But Since skinning is defined as v' = Cj * Bj^-1 * v , Bj1^-1 would be the reverse concatenation of the inverses making up the original product. That is, v' = Cj0 * Cj1 * Bj1^-1 * Bj0^-1 * v Now on to the implementation (Blender side): Assume the following mesh made up of 1 cube, whose vertices are bound to a single joint in a single-joint skeleton: Assume also there's a 60-frame, 3-keyframe animation at 60 fps. The animation essentially is: keyframe 0: the joint is in bind / rest pose (the way you see it in the image). 
keyframe 30: the joint translates up (+z in blender) some amount and at the same time rotates pi/4 rad clockwise. keyframe 59: the joint goes back to the same configuration it was in keyframe 0. My first source of confusion on the blender side is its coordinate system (as opposed to OpenGL's default) and the different matrices accessible through the python api. Right now, this is what my export script does about translating blender's coordinate system to OpenGL's standard system: # World transform: Blender -> OpenGL worldTransform = Matrix().Identity(4) worldTransform *= Matrix.Scale(-1, 4, (0,0,1)) worldTransform *= Matrix.Rotation(radians(90), 4, "X") # Mesh (local) transform matrix file.write('Mesh Transform:\n') localTransform = mesh.matrix_local.copy() localTransform = worldTransform * localTransform for col in localTransform.col: file.write('{:9f} {:9f} {:9f} {:9f}\n'.format(col[0], col[1], col[2], col[3])) file.write('\n') So if you will, my "world" matrix is basically the act of changing blenders coordinate system to the default GL one with +y up, +x right and -z into the viewing volume. Then I also premultiply (in the sense that it's done by the time we reach the engine, not in the sense of post or pre in terms of matrix multiplication order) the mesh matrix M so that I don't need to multiply it again once per draw call in the engine. About the possible matrices to extract from Blender joints (bones in Blender parlance), I'm doing the following: For joint bind poses: def DFSJointTraversal(file, skeleton, jointList): for joint in jointList: bindPoseJoint = skeleton.data.bones[joint.name] bindPoseTransform = bindPoseJoint.matrix_local.inverted() file.write('Joint ' + joint.name + ' Transform {\n') translationV = bindPoseTransform.to_translation() rotationQ = bindPoseTransform.to_3x3().to_quaternion() scaleV = bindPoseTransform.to_scale() file.write('T {:9f} {:9f} {:9f}\n'.format(translationV[0], translationV[1], translationV[2])) file.write('Q {:9f} {:9f} {:9f} {:9f}\n'.format(rotationQ[1], rotationQ[2], rotationQ[3], rotationQ[0])) file.write('S {:9f} {:9f} {:9f}\n'.format(scaleV[0], scaleV[1], scaleV[2])) DFSJointTraversal(file, skeleton, joint.children) file.write('}\n') Note that I'm actually grabbing the inverse of what I think is the bind pose transform Bj. This is so I don't need to invert it in the engine. Also note I went for matrix_local, assuming this is Bj. The other option is plain "matrix", which as far as I can tell is the same only that not homogeneous. For joint current / keyframe poses: for kfIndex in keyframes: bpy.context.scene.frame_set(kfIndex) file.write('keyframe: {:d}\n'.format(int(kfIndex))) for i in range(0, len(skeleton.data.bones)): file.write('joint: {:d}\n'.format(i)) currentPoseJoint = skeleton.pose.bones[i] currentPoseTransform = currentPoseJoint.matrix translationV = currentPoseTransform.to_translation() rotationQ = currentPoseTransform.to_3x3().to_quaternion() scaleV = currentPoseTransform.to_scale() file.write('T {:9f} {:9f} {:9f}\n'.format(translationV[0], translationV[1], translationV[2])) file.write('Q {:9f} {:9f} {:9f} {:9f}\n'.format(rotationQ[1], rotationQ[2], rotationQ[3], rotationQ[0])) file.write('S {:9f} {:9f} {:9f}\n'.format(scaleV[0], scaleV[1], scaleV[2])) file.write('\n') Note that here I go for skeleton.pose.bones instead of data.bones and that I have a choice of 3 matrices: matrix, matrix_basis and matrix_channel. 
From the descriptions in the python API docs I'm not super clear which one I should choose, though I think it's the plain matrix. Also note I do not invert the matrix in this case. The implementation (Engine / OpenGL side): My animation subsystem does the following on each update (I'm omitting parts of the update loop where it's figured out which objects need update and time is hardcoded here for simplicity): static double time = 0; time = fmod((time + elapsedTime),1.); uint16_t LERPKeyframeNumber = 60 * time; uint16_t lkeyframeNumber = 0; uint16_t lkeyframeIndex = 0; uint16_t rkeyframeNumber = 0; uint16_t rkeyframeIndex = 0; for (int i = 0; i < aClip.keyframesCount; i++) { uint16_t keyframeNumber = aClip.keyframes[i].number; if (keyframeNumber <= LERPKeyframeNumber) { lkeyframeIndex = i; lkeyframeNumber = keyframeNumber; } else { rkeyframeIndex = i; rkeyframeNumber = keyframeNumber; break; } } double lTime = lkeyframeNumber / 60.; double rTime = rkeyframeNumber / 60.; double blendFactor = (time - lTime) / (rTime - lTime); GLKMatrix4 bindPosePalette[aSkeleton.jointsCount]; GLKMatrix4 currentPosePalette[aSkeleton.jointsCount]; for (int i = 0; i < aSkeleton.jointsCount; i++) { F3DETQSType& lPose = aClip.keyframes[lkeyframeIndex].skeletonPose.joints[i]; F3DETQSType& rPose = aClip.keyframes[rkeyframeIndex].skeletonPose.joints[i]; GLKVector3 LERPTranslation = GLKVector3Lerp(lPose.t, rPose.t, blendFactor); GLKQuaternion SLERPRotation = GLKQuaternionSlerp(lPose.q, rPose.q, blendFactor); GLKVector3 LERPScaling = GLKVector3Lerp(lPose.s, rPose.s, blendFactor); GLKMatrix4 currentTransform = GLKMatrix4MakeWithQuaternion(SLERPRotation); currentTransform = GLKMatrix4TranslateWithVector3(currentTransform, LERPTranslation); currentTransform = GLKMatrix4ScaleWithVector3(currentTransform, LERPScaling); GLKMatrix4 inverseBindTransform = GLKMatrix4MakeWithQuaternion(aSkeleton.joints[i].inverseBindTransform.q); inverseBindTransform = GLKMatrix4TranslateWithVector3(inverseBindTransform, aSkeleton.joints[i].inverseBindTransform.t); inverseBindTransform = GLKMatrix4ScaleWithVector3(inverseBindTransform, aSkeleton.joints[i].inverseBindTransform.s); if (aSkeleton.joints[i].parentIndex == -1) { bindPosePalette[i] = inverseBindTransform; currentPosePalette[i] = currentTransform; } else { bindPosePalette[i] = GLKMatrix4Multiply(inverseBindTransform, bindPosePalette[aSkeleton.joints[i].parentIndex]); currentPosePalette[i] = GLKMatrix4Multiply(currentPosePalette[aSkeleton.joints[i].parentIndex], currentTransform); } aSkeleton.skinningPalette[i] = GLKMatrix4Multiply(currentPosePalette[i], bindPosePalette[i]); } Finally, this is my vertex shader: #version 100 uniform mat4 modelMatrix; uniform mat3 normalMatrix; uniform mat4 projectionMatrix; uniform mat4 skinningPalette[6]; uniform lowp float skinningEnabled; attribute vec4 position; attribute vec3 normal; attribute vec2 tCoordinates; attribute vec4 jointsWeights; attribute vec4 jointsIndices; varying highp vec2 tCoordinatesVarying; varying highp float lIntensity; void main() { tCoordinatesVarying = tCoordinates; vec4 skinnedVertexPosition = vec4(0.); for (int i = 0; i < 4; i++) { skinnedVertexPosition += jointsWeights[i] * skinningPalette[int(jointsIndices[i])] * position; } vec4 skinnedNormal = vec4(0.); for (int i = 0; i < 4; i++) { skinnedNormal += jointsWeights[i] * skinningPalette[int(jointsIndices[i])] * vec4(normal, 0.); } vec4 finalPosition = mix(position, skinnedVertexPosition, skinningEnabled); vec4 finalNormal = mix(vec4(normal, 0.), skinnedNormal, 
skinningEnabled); vec3 eyeNormal = normalize(normalMatrix * finalNormal.xyz); vec3 lightPosition = vec3(0., 0., 2.); lIntensity = max(0.0, dot(eyeNormal, normalize(lightPosition))); gl_Position = projectionMatrix * modelMatrix * finalPosition; } The result is that the animation displays wrong in terms of orientation. That is, instead of bobbing up and down it bobs in and out (along what I think is the Z axis according to my transform in the export clip). And the rotation angle is counterclockwise instead of clockwise. If I try with a more than one joint, then it's almost as if the second joint rotates in it's own different coordinate space and does not follow 100% its parent's transform. Which I assume it should from my animation subsystem which I assume in turn follows the theory I explained for the case of more than one joint. Any thoughts?
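
    For what it's worth, the single-joint equation from the theory section can be checked numerically in isolation, outside both Blender and the engine. A toy sketch with numpy using column vectors, as in the post; the joint placement and the vertex are made-up values:

        import numpy as np

        def translation(x, y, z):
            m = np.eye(4)
            m[:3, 3] = [x, y, z]
            return m

        def rotation_z(angle):
            c, s = np.cos(angle), np.sin(angle)
            m = np.eye(4)
            m[0, 0], m[0, 1], m[1, 0], m[1, 1] = c, -s, s, c
            return m

        B = translation(0.0, 1.0, 0.0)                        # made-up bind pose of joint j
        C = translation(0.0, 2.0, 0.0) @ rotation_z(np.pi/4)  # made-up keyframe pose of joint j

        v = np.array([0.5, 1.0, 0.0, 1.0])                    # a vertex, homogeneous coordinates
        v_skinned = C @ np.linalg.inv(B) @ v                  # v' = Cj * Bj^-1 * v
        print(v_skinned)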

    Read the article

  • How to create a column containing a string of stars to indicate levels of a factor in a data frame i

    - by PaulHurleyuk
    (second question today - must be a bad day) I have a dataframe with various columns, inculding a concentration column (numeric), a flag highlighting invalid results (boolean) and a description of the problem (character) dput(df) structure(list(x = c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10), rawconc = c(77.4, 52.6, 86.5, 44.5, 167, 16.2, 59.3, 123, 1.95, 181), reason = structure(c(NA, NA, 2L, NA, NA, NA, 2L, 1L, NA, NA), .Label = c("Fails Acceptance Criteria", "Poor Injection"), class = "factor"), flag = c("False", "False", "True", "False", "False", "False", "True", "True", "False", "False" )), .Names = c("x", "rawconc", "reason", "flag"), row.names = c(NA, -10L), class = "data.frame") I can create a column with the numeric level of the reason column df$level<-as.numeric(df$reason) df x rawconc reason flag level 1 1 77.40 <NA> False NA 2 2 52.60 <NA> False NA 3 3 86.50 Poor Injection True 2 4 4 44.50 <NA> False NA 5 5 167.00 <NA> False NA 6 6 16.20 <NA> False NA 7 7 59.30 Poor Injection True 2 8 8 123.00 Fails Acceptance Criteria True 1 9 9 1.95 <NA> False NA 10 10 181.00 <NA> False NA and here's what I want to do to create a column with 'level' many stars, but it fails df$stars<-paste(rep("*",df$level)sep="",collapse="") Error: unexpected symbol in "df$stars<-paste(rep("*",df$level)sep" df$stars<-paste(rep("*",df$level),sep="",collapse="") Error in rep("*", df$level) : invalid 'times' argument rep("*",df$level) Error in rep("*", df$level) : invalid 'times' argument df$stars<-paste(rep("*",pmax(df$level,0,na.rm=TRUE)),sep="",collapse="") Error in rep("*", pmax(df$level, 0, na.rm = TRUE)) : invalid 'times' argument It seems that rep needs to be fed one value at a time. I feel that this should be possible (and my gut says 'use lapply' but my apply fu is v. poor) ANy one want to try ?

    Read the article

  • Random String Generator creates same string on multiple calls

    - by rockinthesixstring
    Hi there. I've build a random string generator but I'm having a problem whereby if I call the function multiple times say in a Page_Load method, the function returns the same string twice. here's the code ''' <summary>' ''' Generates a Random String' ''' </summary>' ''' <param name="n">number of characters the method should generate</param>' ''' <param name="UseSpecial">should the method include special characters? IE: # ,$, !, etc.</param>' ''' <param name="SpecialOnly">should the method include only the special characters and excludes alpha numeric</param>' ''' <returns>a random string n characters long</returns>' Public Function GenerateRandom(ByVal n As Integer, Optional ByVal UseSpecial As Boolean = True, Optional ByVal SpecialOnly As Boolean = False) As String Dim chars As String() ' a character array to use when generating a random string' Dim ichars As Integer = 74 'number of characters to use out of the chars string' Dim schars As Integer = 0 ' number of characters to skip out of the characters string' chars = { _ "A", "B", "C", "D", "E", "F", _ "G", "H", "I", "J", "K", "L", _ "M", "N", "O", "P", "Q", "R", _ "S", "T", "U", "V", "W", "X", _ "Y", "Z", "0", "1", "2", "3", _ "4", "5", "6", "7", "8", "9", _ "a", "b", "c", "d", "e", "f", _ "g", "h", "i", "j", "k", "l", _ "m", "n", "o", "p", "q", "r", _ "s", "t", "u", "v", "w", "x", _ "y", "z", "!", "@", "#", "$", _ "%", "^", "&", "*", "(", ")", _ "-", "+"} If Not UseSpecial Then ichars = 62 ' only use the alpha numeric characters out of "char"' If SpecialOnly Then schars = 62 : ichars = 74 ' skip the alpha numeric characters out of "char"' Dim rnd As New Random() Dim random As String = String.Empty Dim i As Integer = 0 While i < n random += chars(rnd.[Next](schars, ichars)) System.Math.Max(System.Threading.Interlocked.Increment(i), i - 1) End While rnd = Nothing Return random End Function but if I call something like this Dim str1 As String = GenerateRandom(5) Dim str2 As String = GenerateRandom(5) the response will be something like this g*3Jq g*3Jq and the second time I call it, it will be 3QM0$ 3QM0$ What am I missing? I'd like every random string to be generated as unique.
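
    The symptom described (two identical strings from back-to-back calls) is what happens when each call constructs its own time-seeded generator: two constructions within the same timer tick start from the same seed. For comparison, a rough Python sketch of the same generator built around a single shared random instance:

        import random
        import string

        _rng = random.Random()  # one shared generator, created once

        def generate_random(n, use_special=True, special_only=False):
            alnum = string.ascii_uppercase + string.digits + string.ascii_lowercase
            special = "!@#$%^&*()-+"
            if special_only:
                chars = special
            elif use_special:
                chars = alnum + special
            else:
                chars = alnum
            return "".join(_rng.choice(chars) for _ in range(n))

        print(generate_random(5), generate_random(5))  # two different 5-character strings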

    Read the article

  • postgres - ERROR: operator does not exist

    - by cino21122
    Again, I have a function that works fine locally, but moving it online yields a big fat error... Taking a cue from a response in which someone had pointed out the number of arguments I was passing wasn't accurate, I double-checked in this situation to be certain that I am passing 5 arguments to the function itself... Query failed: ERROR: operator does not exist: point <@> point HINT: No operator matches the given name and argument type(s). You may need to add explicit type casts. The query is this: BEGIN; SELECT zip_proximity_sum('zc', (SELECT g.lat FROM geocoded g LEFT JOIN masterfile m ON g.recordid = m.id WHERE m.zip = '10050' ORDER BY m.id LIMIT 1), (SELECT g.lon FROM geocoded g LEFT JOIN masterfile m ON g.recordid = m.id WHERE m.zip = '10050' ORDER BY m.id LIMIT 1), (SELECT m.zip FROM geocoded g LEFT JOIN masterfile m ON g.recordid = m.id WHERE m.zip = '10050' ORDER BY m.id LIMIT 1) ,10); The PG function is this: CREATE OR REPLACE FUNCTION zip_proximity_sum(refcursor, numeric, numeric, character, numeric) RETURNS refcursor AS $BODY$ BEGIN OPEN $1 FOR SELECT r.zip, point($2,$3) <@> point(g.lat, g.lon) AS distance FROM geocoded g LEFT JOIN masterfile r ON g.recordid = r.id WHERE (geo_distance( point($2,$3),point(g.lat,g.lon)) < $5) ORDER BY r.zip, distance; RETURN $1; END; $BODY$ LANGUAGE 'plpgsql' VOLATILE COST 100;

    Read the article

  • how to implement a sparse_vector class

    - by Neil G
    I am implementing a templated sparse_vector class. It's like a vector, but it only stores elements that are different from their default constructed value. So, sparse_vector would store the index-value pairs for all indices whose value is not T(). I am basing my implementation on existing sparse vectors in numeric libraries-- though mine will handle non-numeric types T as well. I looked at boost::numeric::ublas::coordinate_vector and eigen::SparseVector. Both store: size_t* indices_; // a dynamic array T* values_; // a dynamic array int size_; int capacity_; Why don't they simply use vector<pair<size_t, T>> data_; My main question is what are the pros and cons of both systems, and which is ultimately better? The vector of pairs manages size_ and capacity_ for you, and simplifies the accompanying iterator classes; it also has one memory block instead of two, so it incurs half the reallocations, and might have better locality of reference. The other solution might search more quickly since the cache lines fill up with only index data during a search. There might also be some alignment advantages if T is an 8-byte type? It seems to me that vector of pairs is the better solution, yet both containers chose the other solution. Why?

    Read the article

  • authentication question (security code generation logic)

    - by Stick it to THE MAN
    I have a security number generator device, small enough to go on a key-ring, which has a six-digit LCD display and a button. After I have entered my account name and password on an online form, I press the button on the security device and enter the security code number which is displayed. I get a different number every time I press the button, and the number generator has a serial number on the back which I had to input during the account set-up procedure. I would like to incorporate similar functionality in my website. As far as I understand, these are the main components: (1) generate a unique N digit alphanumeric sequence during registration and assign it to the user (permanently); (2) allow the user to generate an N (or M?) digit alphanumeric sequence remotely - for now, I don't care about the hardware side, I am only interested in knowing how I may choose a suitable algorithm that will allow the user to generate an N (or M?) long alphanumeric sequence, presumably using his unique ID as a seed; (3) identify the user from the number generated in step 2 (which decryption method is the most robust to do this?). I have the following questions: Have I identified all the steps required in such an authentication system? If not, please point out what I have missed and why it is important. What are the most robust encryption/decryption algorithms I can use for steps 1 through 3 (preferably using 64 bits)?
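
    One common way to build this kind of scheme is an HMAC-based one-time password (HOTP, RFC 4226): the device and the server share a per-user secret (tied to the serial number at registration) plus a moving counter or clock, and both derive the same short code from it; the server verifies a submitted code by recomputing it for that user's secret rather than by decrypting anything. A rough Python sketch, with a made-up secret:

        import hashlib
        import hmac
        import struct

        def hotp(secret, counter, digits=6):
            mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
            offset = mac[-1] & 0x0F                             # dynamic truncation, RFC 4226
            code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
            return str(code % 10 ** digits).zfill(digits)

        user_secret = b"serial-1234567890"   # hypothetical per-user seed stored at registration
        print(hotp(user_secret, counter=1))  # both sides compute the same 6-digit code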

    Read the article

  • Problem convert column values from VARCHAR(n) to DECIMAL

    - by Kevin Babcock
    I have a SQL Server 2000 database with a column of type VARCHAR(255). All the data is either NULL, or numeric data with up to two points of precision (e.g. '11.85'). I tried to run the following T-SQL query but received the error 'Error converting data type varchar to numeric' SELECT CAST([MyColumn] AS DECIMAL) FROM [MyTable]; I tried a more specific cast, which also failed. SELECT CAST([MyColumn] AS DECIMAL(6,2)) FROM [MyTable]; I also tried the following to see if any data is non-numeric, and the only values returned were NULL. SELECT ISNUMERIC([MyColumn]), [MyColumn] FROM [MyTable] WHERE ISNUMERIC([MyColumn]) = 0; I tried to convert to other data types, such as FLOAT and MONEY, but only MONEY was successful. So I tried the following: SELECT CAST(CAST([MyColumn] AS MONEY) AS DECIMAL) FROM [MyTable]; ...which worked just fine. Any ideas why the original query failed? Will there be a problem if I first convert to MONEY and then to DECIMAL? Thanks!

    Read the article

  • Insert or Update using Oracle and PL/SQL

    - by Shane
    I have a PL/SQL function that performs an update/insert on an Oracle database that maintains a target total and returns the difference between the existing value and the new value. Here is the code I have so far: FUNCTION calcTargetTotal(accountId varchar2, newTotal numeric ) RETURN number is oldTotal numeric(20,6); difference numeric(20,6); begin difference := 0; begin select value into oldTotal from target_total WHERE account_id = accountId for update of value; if (oldTotal != newTotal) then update target_total set value = newTotal WHERE account_id = accountId difference := newTotal - oldTotal; end if; exception when NO_DATA_FOUND then begin difference := newTotal; insert into target_total ( account_id, value ) values ( accountId, newTotal ); -- sometimes a race condition occurs and this stmt fails -- in those cases try to update again exception when DUP_VAL_ON_INDEX then begin difference := 0; select value into oldTotal from target_total WHERE account_id = accountId for update of value; if (oldTotal != newTotal) then update target_total set value = newTotal WHERE account_id = accountId difference := newTotal - oldTotal; end if; end; end; end; return difference end calcTargetTotal; This works as expected in unit tests with multiple threads never failing. However when loaded on a live system we have seen this fail with a stack trace looking like this: ORA-01403: no data found ORA-00001: unique constraint () violated ORA-01403: no data found The line numbers (which I have removed since they are meaningless out of context) verify that the first update fails due to no data, the insert fail due to uniqueness, and the 2nd update is failing with no data, which should be impossible. From what I have read on other thread a MERGE statement is also not atomic and could suffer similar problems. Does anyone have any ideas how to prevent this from occurring?

    Read the article

  • decoding jquery json data in php

    - by Mac Taylor
    Hey guys, recently I made a script to move objects and save their order; now I have a little left to do to finish this script. Everything works fine, just one question: how can I save non-numeric data in my JSON script? This is my script: function updateWidgetData(){ var items=[]; $('.widget-title').each(function(){ var weightId=$(this).attr('id'); $('.column').each(function(){ var columnId=$(this).attr('id'); $('.widget', this).each(function(i){ var collapsed=0; if($(this).find('.widget-inside').css('display')=="none") collapsed=1; //Create Item object for current panel var item={ id: $(this).attr('id'), collapsed: collapsed, order : i, column: columnId, weight: weightId }; //Push item object into items array items.push(item); }); }); }); //Assign items array to sortorder JSON variable var sortorder={ items: items }; //Pass sortorder variable to server using ajax to save state $.post('updatePanels.php', 'data='+$.toJSON(sortorder), function(response){ if(response=="success") $("#console").html('<div class="success">Saved</div>').hide().fadeIn(1000); setTimeout(function(){ $('#console').fadeOut(1000); }, 2000); }); } and this is my PHP script: $data=json_decode($_POST["data"]); foreach($data->items as $item) { //Extract column number for panel $col_id=preg_replace('/[^\d\s]/', '', $item->column); //Extract id of the panel $widget_id=preg_replace('/[^\d\s]/', '', $item->id); $sql="UPDATE widgets SET column_id='$col_id', sort_no='".$item->order."', collapsed='".$item->collapsed."' WHERE id='".$widget_id."'"; mysql_query($sql) or die('Error updating widget DB'); } echo "success"; Everything works fine as long as I use numeric values for the columns' ids, but I need non-numeric values, for example id='columnr': I want to extract 'r' but I can't get it right. Any help please?
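    The two preg_replace('/[^\d\s]/', '', ...) calls strip every non-digit character, which is why a suffix like 'r' in 'columnr' disappears. A sketch of one alternative, assuming the ids always start with a known prefix such as 'column' or 'widget' (that prefix is an assumption based on the ids shown in the question):

    <?php
    // Keep whatever follows a known prefix instead of stripping all
    // non-digit characters.
    function extractSuffix($value, $prefix) {
        if (preg_match('/^' . preg_quote($prefix, '/') . '(.+)$/', $value, $m)) {
            return $m[1];              // 'columnr' -> 'r', 'column12' -> '12'
        }
        return $value;                 // fall back to the raw value
    }

    echo extractSuffix('columnr', 'column');   // prints: r
    echo extractSuffix('widget12', 'widget');  // prints: 12

    Inside the original loop this would replace the two preg_replace calls (e.g. $col_id = extractSuffix($item->column, 'column')). Since these values then reach the query unfiltered, run them through mysql_real_escape_string (or switch to parameterized queries) before building the UPDATE statement.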

    Read the article

  • Big Data – What is Big Data – 3 Vs of Big Data – Volume, Velocity and Variety – Day 2 of 21

    - by Pinal Dave
    Data is forever. Think about it – it is indeed true. Are you using any application, as-is, that was built 10 years ago? Are you using any piece of hardware that was built 10 years ago? The answer is most certainly No. However, if I ask you – are you using any data which was captured 50 years ago, the answer is most certainly Yes. For example, look at the history of our nation. I am from India and we have documented history which goes back over thousands of years. Well, just look at our birth date data – at least we are still using it today. Data never gets old and it is going to stay there forever. The applications which interpret and analyze data have changed, but the data has remained in its purest format in most cases. As organizations have grown, the data associated with them has also grown exponentially, and today there is a lot of complexity to their data. Most of the big organizations have data in multiple applications and in different formats. The data is also spread out so much that it is hard to categorize with a single algorithm or logic. The mobile revolution which we are experiencing right now has completely changed how we capture data and build intelligent systems. Big organizations are indeed facing challenges in keeping all their data on a platform which gives them a single, consistent view of their data. This unique challenge of making sense of all the data coming in from different sources, and deriving useful, actionable information out of it, is the revolution the Big Data world is facing. Defining Big Data The 3Vs that define Big Data are Variety, Velocity and Volume. Volume We currently see exponential growth in data storage, as data is now much more than text. We can find data in the form of videos, music and large images on our social media channels. It is very common for enterprises to have terabytes, even petabytes, of storage. As the data grows, the applications and architecture built to support it need to be re-evaluated quite often. Sometimes the same data is re-evaluated from multiple angles, and even though the original data is the same, the newfound intelligence creates an explosion of data. This big volume indeed represents Big Data. Velocity The growth of data and the explosion of social media have changed how we look at data. There was a time when we used to believe that yesterday's data was recent; as a matter of fact, newspapers still follow that logic. However, news channels and radio have changed how fast we receive the news. Today, people rely on social media to keep them updated with the latest happenings. On social media, a message only a few seconds old (a tweet, a status update, etc.) is often no longer of interest to users. They frequently discard old messages and pay attention to recent updates. The movement of data is now almost real time and the update window has shrunk to fractions of a second. This high-velocity data represents Big Data. Variety Data can be stored in multiple formats: for example a database, Excel, CSV, Access or, for that matter, a simple text file. Sometimes the data is not even in a traditional format as we assume; it may be in the form of video, SMS, PDF or something we have not thought about yet. It is up to the organization to arrange it and make it meaningful. This would be easy if all the data were in the same format, however that is rarely the case. The real world has data in many different formats, and that is the challenge we need to overcome with Big Data. This variety of data represents Big Data. Big Data in Simple Words Big Data is not just about lots of data; it is actually a concept providing an opportunity to find new insights into your existing data, as well as guidelines for capturing and analyzing your future data. It makes any business more agile and robust, so it can adapt to and overcome business challenges. Tomorrow In tomorrow's blog post we will discuss the Evolution of Big Data. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • TFS API-Process Template currently applied to the Team Project

    - by Tarun Arora
    Download Demo Solution - here In this blog post I’ll show you how to use the TFS API to get the name of the Process Template that is currently applied to the Team Project. You can also download the demo solution attached; I’ve tested this solution against TFS 2010 and TFS 2011. 1. Connecting to TFS Programmatically I have a blog post that shows you where to download the VS 2010 SP1 SDK and how to connect to TFS programmatically. private TfsTeamProjectCollection _tfs; private string _selectedTeamProject; TeamProjectPicker tfsPP = new TeamProjectPicker(TeamProjectPickerMode.SingleProject, false); tfsPP.ShowDialog(); this._tfs = tfsPP.SelectedTeamProjectCollection; this._selectedTeamProject = tfsPP.SelectedProjects[0].Name; 2. Programmatically get the Process Template details of the selected Team Project I’ll be making use of the VersionControlServer service to get the Team Project details and the ICommonStructureService to get the Project Properties. private ProjectProperty[] GetProcessTemplateDetailsForTheSelectedProject() { var vcs = _tfs.GetService<VersionControlServer>(); var ics = _tfs.GetService<ICommonStructureService>(); ProjectProperty[] ProjectProperties = null; var p = vcs.GetTeamProject(_selectedTeamProject); string ProjectName = string.Empty; string ProjectState = String.Empty; int templateId = 0; ProjectProperties = null; ics.GetProjectProperties(p.ArtifactUri.AbsoluteUri, out ProjectName, out ProjectState, out templateId, out ProjectProperties); return ProjectProperties; } 3. What’s the catch? The ProjectProperties will contain a property "Process Template" which has the name of the process template as its value. So, you will be able to use the below line of code to get the name of the process template. var processTemplateName = processTemplateDetails.Where(pt => pt.Name == "Process Template").Select(pt => pt.Value).FirstOrDefault(); However, if the process template does not contain the property "Process Template" then you will need to add it. So, the question becomes: how do I add the property to the Process Template? Download the Process Template from the Process Template Manager to your local machine. Once you have downloaded the Process Template, navigate to the Classification folder within the template. From the Classification folder open Classification.xml and add a new property <property name="Process Template" value="MSF for CMMI Process Improvement v5.0" /> 4.
Putting it all together… using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; using Microsoft.TeamFoundation.Client; using Microsoft.TeamFoundation.VersionControl.Client; using Microsoft.TeamFoundation.Server; using System.Diagnostics; using Microsoft.TeamFoundation.WorkItemTracking.Client; namespace TfsAPIDemoProcessTemplate { public partial class Form1 : Form { public Form1() { InitializeComponent(); } private TfsTeamProjectCollection _tfs; private string _selectedTeamProject; private void btnConnect_Click(object sender, EventArgs e) { TeamProjectPicker tfsPP = new TeamProjectPicker(TeamProjectPickerMode.SingleProject, false); tfsPP.ShowDialog(); this._tfs = tfsPP.SelectedTeamProjectCollection; this._selectedTeamProject = tfsPP.SelectedProjects[0].Name; var processTemplateDetails = GetProcessTemplateDetailsForTheSelectedProject(); listBox1.Items.Clear(); listBox1.Items.Add(String.Format("Team Project Selected => '{0}'", _selectedTeamProject)); listBox1.Items.Add(Environment.NewLine); var processTemplateName = processTemplateDetails.Where(pt => pt.Name == "Process Template") .Select(pt => pt.Value).FirstOrDefault(); if (!string.IsNullOrEmpty(processTemplateName)) { listBox1.Items.Add(Environment.NewLine); listBox1.Items.Add(String.Format("Process Template Name: {0}", processTemplateName)); } else { listBox1.Items.Add(String.Format("The Process Template does not have the 'Name' property set up")); listBox1.Items.Add(String.Format("***TIP: Download the Process Template and in Classification.xml add a new property Name, update the template then you will be able to see the Process Template Name***")); listBox1.Items.Add(String.Format(" - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -")); } } private ProjectProperty[] GetProcessTemplateDetailsForTheSelectedProject() { var vcs = _tfs.GetService<VersionControlServer>(); var ics = _tfs.GetService<ICommonStructureService>(); ProjectProperty[] ProjectProperties = null; var p = vcs.GetTeamProject(_selectedTeamProject); string ProjectName = string.Empty; string ProjectState = String.Empty; int templateId = 0; ProjectProperties = null; ics.GetProjectProperties(p.ArtifactUri.AbsoluteUri, out ProjectName, out ProjectState, out templateId, out ProjectProperties); return ProjectProperties; } } } Thank you for taking the time out and reading this blog post. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Have you come across a better way of doing this, please share your experience here. Questions/Feedback/Suggestions, etc please leave a comment. Thank You! Share this post : CodeProject

    Read the article

  • iPack -The iOS Application Packager

    - by user13277780
    iOS applications are distributed in .ipa archive files. These files are regular zip files which contain application resources and executables. To protect them from unauthorized modifications and to provide identification of their sources, the content of the archives is signed. The signature is included in the application executable of an .ipa archive and protects the executable file itself and the associated resource files. Apple provides native Mac OS tools for signing iOS executables (which are actually generic Mach-O code signing tools), but these tools are not generally available on other platforms. To provide a multi-platform development environment for JavaFX based iOS applications, we ported iOS signing and packaging to Java and created a dedicated ipack tool for it. The iPack tool can be used as the last step of creating an .ipa package on various operating systems. The prototype has been tested by creating a final distributable for a JavaFX application that runs on iPad, all done on Windows 7. Source Code The source code of the iPack tool is in the OpenJFX project repository. You can find it in: <openjfx root>/rt/tools/ios/Maven/ipack To build the iPack tool use: rt/tools/ios/Maven/ipack$ mvn package After building, you can run the tool: java -jar <path to ipack.jar> <arguments> Signing keystore The tool uses a Java key store to read the signing certificate and the associated private key. To prepare such a keystore, users can use keytool from the JDK. One possible scenario is to import an existing private key and the certificate from a key store used on Mac OS: To list the content of an existing key store and identify the source alias: keytool -list -keystore <src keystore>.p12 -storetype pkcs12 -storepass <src keystore password> To create a Java key store and import the private key with its certificate into the key store: keytool -importkeystore \ -destkeystore <dst keystore> -deststorepass <dst keystore password> \ -srckeystore <src keystore>.p12 -srcstorepass <src keystore password> -srcstoretype pkcs12 \ -srcalias <src alias> -destalias <dst alias> -destkeypass <dst key password> Another scenario would be to generate a private / public key pair directly in a Java key store and create a certificate request from it. After sending the request to Apple, one can then import the certificate response back into the Java key store and complete the signing certificate entry. In both scenarios the resulting alias in the Java key store will contain only a single (leaf) certificate. This can be verified with the following command: keytool -list -v -keystore <ipack keystore> -storepass <keystore password> When looking at the Certificate chain length entry, the number next to it is 1. When an executable file is signed on Mac OS, the resulting signature (in CMS format) includes the whole certificate chain up to the Apple Root CA. The ipack tool includes only the chain which is stored under the alias specified on the command line. So, to have the whole chain in the signature, we need to replace the single certificate entry under the alias with the corresponding full certificate chain. To do that we first need to create the chain in a separate file. It is easy to create such a chain when working with certificates in Base-64 encoded PEM format. A certificate chain can be created by concatenating the PEM certificates which should form the chain into a single file.
For iOS signing we need the following certificates in our chain: Apple Root CA Apple Worldwide Developer Relations CA Our signing leaf certificate To convert a certificate from the binary DER format (.der, .cer) to PEM format: keytool -importcert -noprompt -keystore temp.ks -storepass temppwd -alias tempcert -file <certificate>.cer keytool -exportcert -keystore temp.ks -storepass temppwd -alias tempcert -rfc -file <certificate>.pem To export the signing certificate into PEM format: keytool -exportcert -keystore <ipack keystore> -storepass <keystore password> -alias <signing alias> -rfc -file SigningCert.pem After constructing a chain from AppleIncRootCertificate.pem, AppleWWDRCA.pem and SigningCert.pem, it can be imported back into the keystore with: keytool -importcert -noprompt -keystore <ipack keystore> -storepass <keystore password> -alias <signing alias> -keypass <key password> -file SigningCertChain.pem To summarize, the following example shows the full certificate chain replacement process: keytool -importcert -noprompt -keystore temp.ks -storepass temppwd -alias tempcert1 -file AppleIncRootCertificate.cer keytool -exportcert -keystore temp.ks -storepass temppwd -alias tempcert1 -rfc -file AppleIncRootCertificate.pem keytool -importcert -noprompt -keystore temp.ks -storepass temppwd -alias tempcert2 -file AppleWWDRCA.cer keytool -exportcert -keystore temp.ks -storepass temppwd -alias tempcert2 -rfc -file AppleWWDRCA.pem keytool -exportcert -keystore ipack.ks -storepass keystorepwd -alias mycert -rfc -file SigningCert.pem cat SigningCert.pem AppleWWDRCA.pem AppleIncRootCertificate.pem >SigningCertChain.pem keytool -importcert -noprompt -keystore ipack.ks -storepass keystorepwd -alias mycert -keypass keypwd -file SigningCertChain.pem keytool -list -v -keystore ipack.ks -storepass keystorepwd Usage When the ipack tool is started with no arguments it prints the following usage information: Usage: ipack <archive> <signing opts> <application opts> [ <application opts> ... ] Signing options: -keystore <keystore> keystore to use for signing -storepass <password> keystore password -alias <alias> alias for the signing certificate chain and the associated private key -keypass <password> password for the private key Application options: -basedir <directory> base directory from which to derive relative paths -appdir <directory> directory with the application executable and resources -appname <file> name of the application executable -appid <id> application identifier Example: ipack MyApplication.ipa -keystore ipack.ks -storepass keystorepwd -alias mycert -keypass keypwd -basedir mysources/MyApplication/dist -appdir Payload/MyApplication.app -appname MyApplication -appid com.myorg.MyApplication

    Read the article

  • Mac OS X Server Configure DHCP Options 66 and 67

    - by Paul Adams
    I need to configure Mountain Lion (10.8.2) OS X Server BOOTP to provide DHCP options 66 and 67 to provide PXE booting for PCs on my network. I have tried following the bootpd MAN pages, but they are not specific enough. I have also read conflicting information on the net, but nothing definitive for Mountain Lion DHCP. From bootpd man page: bootpd has a built-in type conversion table for many more options, mostly those specified in RFC 2132, and will try to convert from whatever type the option appears in the property list to the binary, packet format. For example, if bootpd knows that the type of the option is an IP address or list of IP addresses, it converts from the string form of the IP address to the binary, network byte order numeric value. If the type of the option is a numeric value, it converts from string, integer, or boolean, to the proper sized, network byte-order numeric value. Regardless of whether bootpd knows the type of the option or not, you can always specify the DHCP option using the data property list type <key>dhcp_option_128</key> <data> AAqV1Tzo </data> My TFTP server is 172.16.152.20 and the bootfile is pxelinux.0 I have edited /etc/bootpd.plist and added the following to the subnet dict: <key>dhcp_option_66</key> <data> LW4gLWUgrBCYFAo= </data> <key>dhcp_option_67</key> <data> LW4gLWUgcHhlbGludXguMAo= </data> According to the man page, the data elements are supposed to be Base64 encoded, but no matter what I try, I cannot get PXE clients to boot. I have tried encoding 172.16.152.20 using various methods: echo "172.16.152.20" | openssl enc -base64 returns MTcyLjE2LjE1Mi4yMAo= DHCP Option Code Utility (http://mac.softpedia.com/get/Internet-Utilities/DHCP-Option-Code-Utility.shtml) generating a string from 172.16.152.20 yields: LW4gLWUgMTcyLjE2LjE1Mi4yMAo= (used in the above example) DHCP Option Code Utility generating an IP Addresss from 172.16.152.20 yields: LW4gLWUgrBCYFAo= Encoding pxelinux.0 with the above methods likewise yields different encodings. I have tried using all three methods of encoding the data elements, but nothing seems to work i.e. my PXE boot clients do not get directed to my TFTP server. Can anyone help? Regards, Paul Adams.
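    A hedged note on the encodings above: decoded, those <data> strings contain the literal text "-n -e " plus a trailing newline around the payload (they look like fragments of an echo command line), and bootpd presumably serves whatever bytes are in <data> verbatim. The value should be the Base64 of nothing but the raw option payload. Assuming the string form of option 66 and the standard base64 utility, the payloads could be produced like this:

    # Option 66 (TFTP server name) and option 67 (bootfile name) as plain strings,
    # with no trailing newline:
    printf '%s' '172.16.152.20' | base64     # -> MTcyLjE2LjE1Mi4yMA==
    printf '%s' 'pxelinux.0'    | base64     # -> cHhlbGludXguMA==

    # If the PXE clients expect option 66 as a raw 4-byte IP address instead of a
    # string, encode the four octets (172.16.152.20 = ac.10.98.14 in hex):
    printf 'ac109814' | xxd -r -p | base64   # -> rBCYFA==

    Which form of option 66 works depends on the PXE client firmware, so it is worth testing both; the string form is the one defined for "TFTP server name" in RFC 2132.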

    Read the article
