Search Results

Search found 25414 results on 1017 pages for 'default arguments'.


  • Where does a QuickTime-powered application store changes to the user interface?

    - by Luke
    I have downloaded an application written with the QuickTime library (for Windows 7). The application does not need an installation: just unzip it into a directory and run the program. It works, but I have a problem: the program lets the user change a lot of values through its interface, but has no option to reset them to their default values. What is more problematic is that when I exit the program and run it again, the interface still has the changed values. There is no file in the program directory that stores the changes made to the UI. I suspect that QuickTime records these changes somewhere, but I can't find the right file. I have even deleted the application and re-unzipped it to another location, but the UI still keeps the values I changed!

    Read the article

  • Changing location in Google Chrome when searching

    - by Alex
    I've recently moved to the Czech Republic from Scotland, and I can't find a way to permanently stop Google from automatically defaulting back to google.cz. I've checked that all my Google accounts and cookie-based settings (e.g. Advanced Search options) are set to English, but it's clearly doing an IP address lookup and disregarding everything else. The default search engine in Google Chrome (which switches to google.cz automatically) is:

        {google:baseURL}search?{google:RLZ}{google:acceptedSuggestion}{google:originalQueryForSuggestion}sourceid=chrome&ie={inputEncoding}&q=%s

    I've tried hardcoding it to:

        http://www.google.com/search?{google:RLZ}{google:acceptedSuggestion}{google:originalQueryForSuggestion}sourceid=chrome&ie={inputEncoding}&q=%s

    This kind of works, but not for inline searching; i.e. I always have to press Enter to get any results, which is a bit annoying since I've gotten used to AJAX-style searching. I can't be the only one with this issue? Any help is appreciated.

    Read the article

  • Anatomy of a .NET Assembly - Custom attribute encoding

    - by Simon Cooper
    In my previous post, I covered how field, method, and other types of signatures are encoded in a .NET assembly. Custom attribute signatures differ quite a bit from these, which consequently affects attribute specifications in C#.

    Custom attribute specifications

    In C#, you can apply a custom attribute to a type or type member, specifying a constructor as well as the values of fields or properties on the attribute type:

        public class ExampleAttribute : Attribute {
            public ExampleAttribute(int ctorArg1, string ctorArg2) { ... }
            public Type ExampleType { get; set; }
        }

        [Example(5, "6", ExampleType = typeof(string))]
        public class C { ... }

    How does this specification actually get encoded and stored in an assembly?

    Specification blob values

    Custom attribute specification signatures use the same building blocks as other types of signatures: the ELEMENT_TYPE structure. However, they differ significantly from other signatures in that the actual parameter values need to be stored along with type information.

    There are two types of specification arguments in a signature blob: fixed args and named args. Fixed args are the arguments to the attribute type constructor; named args are specified after the constructor arguments to provide a value for a field or property on the constructed attribute type (PropertyName = propValue).

    Values in an attribute blob are limited to one of the basic types (one of the number types, character, or boolean), a reference to a type, an enum (which, in .NET, has to use one of the integer types as a base representation), or arrays of any of those. Enums and the basic types are easy to store in a blob - you simply store the binary representation. Strings are stored starting with a compressed integer indicating the length of the string, followed by the UTF8 characters. Array values start with an integer indicating the number of elements in the array, followed by the item values concatenated together.

    Rather than using a coded token, Type values are stored using a string containing the type name and fully qualified assembly name (for example, MyNs.MyType, MyAssembly, Version=1.0.0.0, Culture=neutral, PublicKeyToken=0123456789abcdef). If the type is in the current assembly or mscorlib, just the type name can be used. This is probably done to prevent direct references between assemblies solely because of attribute specification arguments; assemblies can be loaded in the reflection-only context and attribute arguments still processed, without loading the entire assembly.

    Fixed and named arguments

    Each entry in the CustomAttribute metadata table contains a reference to the object the attribute is applied to, the attribute constructor, and the specification blob. The number and type of arguments to the constructor (the fixed args) can be worked out from the method signature referenced by the attribute constructor, so the fixed args can simply be concatenated together in the blob without any extra type information.

    Named args are different. These specify the value to assign to a field or property once the attribute type has been constructed. In the CLR, fields and properties can be overloaded just on their type; different fields and properties can have the same name. Therefore, to uniquely identify a field or property you need: whether it's a field or a property (indicated using the byte values 0x53 and 0x54, respectively), the field or property type, and the field or property name.

    After the fixed arg values is a 2-byte number specifying the number of named args in the blob. Each named argument has the above information concatenated together, mostly using the basic ELEMENT_TYPE values, in the same way as a method or field signature. A Type argument is represented using the byte 0x50, and an enum argument is represented using the byte 0x55 followed by a string specifying the name and assembly of the enum type. The named argument property information is followed by the argument value, using the same encoding as fixed args.

    Boxed objects

    This would all be very well, were it not for object and object[]. Arguments and properties of type object allow a value of any allowed argument type to be specified. As a result, more information needs to be stored in the blob to interpret the argument bytes as the correct type. So the argument value is simply prepended with the type of the value, by specifying the ELEMENT_TYPE or the name of the enum the value represents. For named arguments, a field or property of type object is represented using the byte 0x51, with the actual type specified in the argument value.

    Some examples...

    All specification blobs start with the 2-byte prolog 0x0001. As in my previous post in the series, names in capitals correspond to a particular byte value in the ELEMENT_TYPE structure. For strings, I'll simply give the string value rather than the length and UTF8 encoding that appear in the actual blob. I'll be using the following enum and attribute types to demonstrate specification encodings:

        class AttrAttribute : Attribute {
            public AttrAttribute() {}
            public AttrAttribute(Type[] tArray) {}
            public AttrAttribute(object o) {}
            public AttrAttribute(MyEnum e) {}
            public AttrAttribute(ushort x, int y) {}
            public AttrAttribute(string str, Type type1, Type type2) {}

            public int Prop1 { get; set; }
            public object Prop2 { get; set; }
            public object[] ObjectArray;
        }

        enum MyEnum : int {
            Val1 = 1,
            Val2 = 2
        }

    Now, some examples. Here, the specification binds to the (ushort, int) attribute constructor, with fixed args only. The specification blob starts off with a prolog, followed by the two constructor arguments, then the number of named arguments (zero):

        [Attr(42, 84)]

        0x0001
        0x002a
        0x00000054
        0x0000

    An example of string and type encoding:

        [Attr("MyString", typeof(Array), typeof(System.Windows.Forms.Form))]

        0x0001
        "MyString"
        "System.Array"
        "System.Windows.Forms.Form, System.Windows.Forms, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089"
        0x0000

    As you can see, the full assembly specification of a type is only needed if the type isn't in the current assembly or mscorlib. Note, however, that the C# compiler currently chooses to fully qualify mscorlib types anyway.

    An object argument (this binds to the object attribute constructor), and two named arguments (a null string is represented by 0xff and the empty string by 0x00):

        [Attr((ushort)40, Prop1 = 12, Prop2 = "")]

        0x0001
        U2 0x0028
        0x0002
        0x54 I4 "Prop1" 0x0000000c
        0x54 0x51 "Prop2" STRING 0x00

    Right, more complicated now. A type array as a fixed argument:

        [Attr(new[] { typeof(string), typeof(object) })]

        0x0001
        0x00000002  // the number of elements
        "System.String"
        "System.Object"
        0x0000

    An enum value, which is simply represented using the underlying value. The CLR works out that it's an enum using information in the attribute constructor signature:

        [Attr(MyEnum.Val1)]

        0x0001
        0x00000001
        0x0000

    And finally, a null array, and an object array as a named argument:

        [Attr((Type[])null, ObjectArray = new object[] { (byte)2, typeof(decimal), null, MyEnum.Val2 })]

        0x0001
        0xffffffff
        0x0001
        0x53 SZARRAY 0x51 "ObjectArray"
        0x00000004
        U1 0x02
        0x50 "System.Decimal"
        STRING 0xff
        0x55 "MyEnum" 0x00000002

    As you'll notice, a null object is encoded as a null string value, and a null array is represented using a length of -1 (0xffffffff).

    How does this affect C#?

    We can now explain why the limits on attribute arguments are so strict in C#. Attribute specification blobs are limited to basic numbers, enums, types, and arrays. As you can see, this is because the raw CLR encoding can only accommodate those types. Special byte patterns have to be used to indicate object, string, Type, or enum values in named arguments; you can't specify an arbitrary object type, as there isn't a generalised way of encoding the resulting value in the specification blob. In particular, decimal values can't be encoded, as decimal isn't a 'built-in' CLR type with a native representation (you'll notice that decimal constants in C# programs are compiled as several integer arguments to DecimalConstantAttribute). Jagged arrays also aren't natively supported, although you can get around this by using an array as the value of an object argument:

        [Attr(new object[] { new object[] { new Type[] { typeof(string) } }, 42 })]

    Finally...

    Phew! That was a bit longer than I thought it would be. Custom attribute encodings are complicated! Hopefully this series has been an informative look at what exactly goes on inside a .NET assembly. In the next blog posts, I'll be carrying on with the 'Inside Red Gate' series.
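    To make the fixed-arg layout concrete, here's a minimal C# sketch that decodes the [Attr(42, 84)] blob above with a BinaryReader (multi-byte blob values are stored little-endian, which is also BinaryReader's byte order; the class name is made up for illustration):

        using System;
        using System.IO;

        static class BlobDemo
        {
            static void Main()
            {
                byte[] blob =
                {
                    0x01, 0x00,             // prolog 0x0001
                    0x2a, 0x00,             // fixed arg 1: ushort 42
                    0x54, 0x00, 0x00, 0x00, // fixed arg 2: int 84
                    0x00, 0x00              // named-arg count: 0
                };

                using (var reader = new BinaryReader(new MemoryStream(blob)))
                {
                    ushort prolog = reader.ReadUInt16();
                    ushort arg1   = reader.ReadUInt16();
                    int    arg2   = reader.ReadInt32();
                    ushort named  = reader.ReadUInt16();

                    // prints: prolog=0x0001 arg1=42 arg2=84 namedArgs=0
                    Console.WriteLine("prolog=0x{0:x4} arg1={1} arg2={2} namedArgs={3}",
                                      prolog, arg1, arg2, named);
                }
            }
        }

    In the general case a reader would first resolve the constructor signature from the MethodDef or MemberRef referenced by the CustomAttribute row, to know which fixed-arg types to read, exactly as described above.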

    Read the article

  • Question About DateCreated and DateModified Columns - SQL Server

    - by user311509
    CREATE TABLE Customer (
        customerID int identity (500,20) CONSTRAINT . .
        dateCreated datetime DEFAULT GetDate() NOT NULL,
        dateModified datetime DEFAULT GetDate() NOT NULL
    );

    When I insert a record, dateCreated and dateModified get set to the default date/time. When I update the record, both dateModified and dateCreated remain as they are. What should I do? Obviously, I need the dateCreated value to stay as it was at the first insert, while dateModified changes whenever the record is modified. In other words, can you please write a quick sample trigger? I don't know much yet...
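    A minimal sketch of such a trigger (assuming the Customer table above with customerID as its key): an AFTER UPDATE trigger that refreshes dateModified and restores dateCreated from the deleted pseudo-table:

        -- Sketch only: keeps dateCreated fixed and bumps dateModified on every update.
        CREATE TRIGGER trg_Customer_Update
        ON Customer
        AFTER UPDATE
        AS
        BEGIN
            SET NOCOUNT ON;
            UPDATE c
            SET c.dateModified = GETDATE(),      -- always refresh the modification time
                c.dateCreated  = d.dateCreated   -- undo any accidental change to the creation time
            FROM Customer c
            JOIN inserted i ON c.customerID = i.customerID
            JOIN deleted  d ON c.customerID = d.customerID;
        END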

    Read the article

  • Stop then restart submit() after ajax check

    - by jeerose
    I asked a question [here] recently and it's just not providing me with an answer. Here's what I want to do (you can see my first attempt at the link above):

    1. The user submits the form.
    2. Stop the default submit action.
    3. Check via AJAX whether a similar entry exists in the database.
    4. If it does, display a notice asking whether they want to submit anyway, and give them an option to do so (re-enable the default action and submit).
    5. If it does not, re-enable the default action and let the form submit.

    I'm at a loss; see the sketch below. Any help is appreciated. Thanks, gang.
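    A minimal jQuery sketch of that flow (the /check-duplicate endpoint, its { exists: ... } response shape, and the confirm() prompt are assumptions for illustration):

        // Sketch: intercept the submit, ask the server, then submit natively if allowed.
        $('#myForm').on('submit', function (e) {
            e.preventDefault();                       // stop the default submit while we check
            var form = this;
            $.getJSON('/check-duplicate', $(form).serialize(), function (res) {
                if (!res.exists || confirm('A similar entry already exists. Submit anyway?')) {
                    form.submit();                    // the native DOM submit bypasses this jQuery handler
                }
            });
        });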

    Read the article

  • The difference between String.valueOf(char) and +

    - by Will Yu
    To show the default value of char, I wrote code like this:

        public class TestChar {
            static char a;

            public static void main(String[] args) {
                System.out.println("." + String.valueOf(a) + ".");
                System.out.println("the default char is " + a);
            }
        }

    But the console output is confusing: the first line is ". ." while the second is "the default char is []" (or something like that; I don't know how to describe the character). Why? Thanks for any help.
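    For context, a minimal check showing that the default value of a char field is the NUL character '\u0000', which is what both println calls print; only the console's rendering of NUL differs between the two lines:

        public class DefaultCharCheck {
            static char a; // char fields default to '\u0000' (NUL)

            public static void main(String[] args) {
                System.out.println((int) a);                     // 0
                System.out.println(a == '\u0000');               // true
                System.out.println(String.valueOf(a).length());  // 1: a one-char string containing NUL
            }
        }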

    Read the article

  • Change default mouse cursor on OS X Mavericks

    - by Ziarno
    Is there any way to change the default mouse cursor on OS X Mavericks? More precisely, I'd like to make it look like the Windows one, and even more precisely I want to move the 'point' where the cursor actually 'clicks': on Windows it's at the very tip of the arrow, even slightly outside it, while on the Mac it's a little inside the arrow. I've done some research, and the websites I found either tell me to get Mighty Mouse, which doesn't work on my system, or only tell me how to change the size of the cursor.

    Read the article

  • Default Text Color in Apple Mail.app

    - by Axeva
    Is there any way to set the default font color for new messages in Mail.app? It's trivial to set the actual font and text size, but I cannot get the application to change the text color; it always defaults to black. After five or six major revisions of OS X, surely someone has thought of this, right?

    Read the article

  • Why would sshd allow root logins by default?

    - by The Journeyman geek
    I'm currently working on hardening my servers against hacking; amongst other things, I'm getting a load of attempts to log on as root over SSH. While I've implemented fail2ban, I'm wondering why root logins are allowed by default in the first place. Even on non-sudo-based distros, I can always log on as a normal user and switch, so is there any clear advantage to allowing root logins over SSH, or is it just something no one bothers to change?

    Read the article

  • Domain points to the default static page on the server, but the settings look correct

    - by Cues
    I have edited my Apache vhost file in /etc/apache2/sites-enabled to add the following:

        <VirtualHost *:80>
            ServerName www.mysite.cn
            ServerAlias mysite.cn *.mysite.cn
            DocumentRoot /home/user/static/mysite/cn
        </VirtualHost>

    It still points to the default site on the server when I browse to mysite.cn, but when I enter anything along the lines of ww3.mysite.cn it points to the new, correct document root. Any clues as to what the problem could be? I am lost.

    Read the article

  • How to change the default runlevel of Ubuntu (Lucid)?

    - by Adnan
    I have Ubuntu Lucid on my home computer. Today I was experimenting with runlevels and couldn't figure out how to change the default runlevel of Ubuntu. I can do that using /etc/inittab on Debian 5.0.4, but that file doesn't exist on Ubuntu. I have searched the web but couldn't find the answer. Thanks in advance.

    Read the article

  • SQL Server 2008 R2 and copy-only default value in SQL Server Management Studio

    - by user102718
    We use Tivoli Storage Manager to take backups of the database, but sometimes our consultants need to take separate backup copies of the database using Management Studio. If they forget to tick the "copy-only" flag in Management Studio, they will break Tivoli's backups (we run our databases in FULL recovery mode). Is there a way to set the default value of the copy-only flag to true in Management Studio's "Back Up Database" window?
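    For reference, the scripted equivalent sidesteps the GUI default entirely; a minimal sketch, assuming a database named MyDb and an example backup path:

        -- A copy-only backup leaves the backup chain (differential base, log chain)
        -- untouched, so Tivoli's scheduled backups keep working. Names and path are examples.
        BACKUP DATABASE MyDb
        TO DISK = N'C:\Backups\MyDb_copyonly.bak'
        WITH COPY_ONLY, INIT;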

    Read the article

  • default gateway and tracert

    - by chappar
    when i do "route print" on my machine, it shows my default gateway as 10.225.150.1 but when i do tracert www.google.com, the first machine it reaches is 10.225.150.2. The route command does not show any entry for 10.225.150.2. So, why tracert is showing 10.225.150.2? my maching is on Windows XP.

    Read the article

  • How to set the default language in Notepad++

    - by AngryHacker
    I mostly use Notepad++ for dealing with XML files. It would be good if Notepad++ parsed and colorized my files as XML when I open them. Instead, I have to open the file and pick XML from the Language menu. Is there a way to tell Notepad++ that XML is the default language and to treat files accordingly?

    Read the article

  • Deleted user default database - SQL Server 2008

    - by RadiantHex
    Hi folks, I cannot connect to the database any more; I'm getting: "Cannot open user default database. Login failed." I deleted the database during a previous session and then tried to recreate it, but the recreate failed. Now I am stuck with this error. What can I do? Edit: I'm using Windows Authentication. Any ideas? Fixed: use the command

        sqlcmd -E -d master

    then type:

        ALTER LOGIN [Your Windows Login] WITH DEFAULT_DATABASE = master
        GO

    :)

    Read the article

  • Don't want Folx to become my default downloader

    - by Am1rr3zA
    Hi everyone, I installed the Folx download manager on my MacBook Pro, and now every time I download a link in Safari it forces me to download with Folx. What setting lets me choose the downloader (the default Safari downloader or Folx)? Also, can anyone recommend a better free download manager than Folx for OS X?

    Read the article

  • DHCP server with multiple interfaces on Ubuntu destroys the default gateway

    - by Henrik Kjus Alstad
    I use Ubuntu, and I have many interfaces: eth0, which is my internet connection and gets its info from a DHCP server totally outside my control, and eth1-eth4, for which I have set up my own DHCP server (ISC DHCP server). It seems to work, and I even get an IP address from the foreign DHCP server on the internet-facing interface. However, the gateway for eth0 seems to have broken after I installed my local DHCP server for eth1-eth4 (I think so because I get an IP for eth0 and I can ping other hosts on the local network, but I cannot reach the internet). My eth0-specific info in /etc/network/interfaces:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet dhcp

        auto eth1
        iface eth1 inet static
            address 10.0.1.1
            netmask 255.255.255.0
            network 10.0.1.0
            broadcast 10.0.1.255
            gateway 10.0.1.1
            mtu 8192

        auto eth2
        iface eth2 inet static
            address 10.0.2.1
            netmask 255.255.255.0
            network 10.0.2.0
            broadcast 10.0.2.255
            gateway 10.0.2.1
            mtu 8192

    My /etc/default/isc-dhcp-server:

        INTERFACES="eth1 eth2 eth3 eth4"

    So why does my local DHCP server mess up the gateway for eth0 when I tell it not to listen on eth0? Does anyone see the problem, or know what I can do to fix it? The problem does indeed seem to be the gateways. "netstat -nr" gives:

        0.0.0.0    10.X.X.X       0.0.0.0    UG    0 0 0    eth3

    It should have been:

        0.0.0.0    129.2XX.X.X    0.0.0.0    UG    0 0 0    eth0

    So for some reason, my local DHCP server overrides the gateway I get from the network's DHCP. Edit: dhcpd.conf looks like this (I included info only for the eth1 subnet):

        ddns-update-style none;
        not authoritative;

        subnet 10.0.1.0 netmask 255.255.255.0 {
            interface eth1;
            option domain-name "example.org";
            option domain-name-servers ns1.example.org, ns2.example.org;
            default-lease-time 600;
            max-lease-time 7200;
            range 10.0.1.10 10.0.1.100;

            host camera1_1 {
                hardware ethernet 00:30:53:11:24:6E;
                fixed-address 10.0.1.10;
            }
            host camera2_1 {
                hardware ethernet 00:30:53:10:16:70;
                fixed-address 10.0.1.11;
            }
        }

    Also, the gateway is set correctly if I run "/etc/init.d/networking restart" in a terminal, but that's not helpful: I need the correct gateway to be set during startup, and I'd rather find the source of the problem.

    Read the article

  • Removing default mysql on Slackware 13

    - by bullettime
    I was playing around with the default MySQL that comes with Slackware 13, and I think I broke it somehow. I don't want to fix it; I'd like to start from scratch, building from source and everything, but first I have to remove the broken installation. How can I do this?

    Read the article

  • Setting default working directory/drive in Emacs shell on Windows

    - by Victor K.
    Hello, how can I change the default working directory/drive for the shell in Emacs on Windows? Normally, the shell starts in the same directory as the file in the current buffer. However, when my current file is on the D: drive, it starts on C:. Manually changing the drive to D: in the shell brings me to my directory, of course, but I want to avoid this extra step. Is that possible?
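    One minimal Emacs Lisp sketch of an approach (the command name and path are examples): M-x shell inherits the buffer's default-directory, so binding it first controls where the shell starts:

        ;; Sketch: start a shell on a fixed drive/directory regardless of the current buffer.
        (defun my-shell-on-d ()
          "Open *shell* with d:/ as its working directory."
          (interactive)
          (let ((default-directory "d:/"))
            (shell)))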

    Read the article
