Search Results

Search found 12793 results on 512 pages for 'format specifiers'.


  • SEO to ensure visibility for a narrow, non-competitive, non-commercial site

    - by hen3ry
    I'm webmaster of a non-commercial site in English. A non-native-English speaker asked me why our site doesn't produce hits in Google searches she conducts for relevant keywords in her native language. I asked her for a list of keywords in her native language, and I naively tried inserting those into the META info in the page headers and waited a couple of weeks. No help. A little searching informed me that Google doesn't use the META info, and has not done so for a very long time. D'oh! To be entirely concrete, suppose the StackExchange folks want Russian speakers to find this site, Pro Webmasters. The direct translation in Russian of "webmaster" --thanks, Google Translator-- is: "вебмастер". (Not sure this will render properly, but that's not essential to my question.) Assuming Pro Webmasters has a common template for all pages it generates, inserting "вебмастер" into the Keywords META for that template won't help, it seems. What could StackExchange do to make this site visible to users searching with the Russian keyword "вебмастер"? Pretty much all the advice I've seen boils down to this, if I understand correctly: use the desired search term often (but not too often) among site content, and the problem will be solved. That's great, but I don't think sprinkling "вебмастер" visibly all over Pro Webmasters is going to fly. Just for completeness, crawlers must be long immune to the invisible-to-visitors schemes, e.g., formatting "вебмастер" in a tiny text size in a color the same as an existing background, e.g. white-over-white, or putting that text inside a div styled ' style="visibility: hidden" '. Probably some other equivalents. I can only think of one slightly effective method, along these lines: place an unobtrusive link on the common template to a page titled "for international users", and on that page list the desired synonyms for "webmaster" in various languages. A test case --admittedly, just one-- using my site implies that a Google search for "international users" вебмастер will produce a hit for this page, and thus make the site minimally visible, despite the fact that the page will almost never be visited. At the moment, anyway. Note: All the SEO discussions I have found so far are about competitive and --almost certainly-- commercial sites. To repeat: my site is non-commercial, and it is about an obscure, narrow topic that is of interest to only a small number of people worldwide. This isn't about clawing our way to the top of competitive rankings, just making this content minimally visible to interested non-native-English speakers. Ideas? TIA

    Read the article

  • Wrapping REST based Web Service

    - by PaulPerry
    I am designing a system that will be running online under Microsoft Windows Azure. One component is a REST based web service which will really be a wrapper (using the proxy pattern) that calls the REST web services of a business partner, which has to do with BLOB storage (note: we are not using Azure storage). The majority of the functionality will be taking a request, calling our partner web service, receiving the response and then passing that back to the client. There are a number of reasons for doing this, but one of the big ones is that we are going to support three clients: our desktop application (Win and Mac), mobile apps (iOS), and a web front end. Having a single API which we then send to our partner protects us if that partner ever changes. I want our service to support both JSON and XML for the data transfer format, JSON for web and probably XML for the desktop and mobile (we already have an XML parser in those products). Our partner also supports both of these formats. I was planning on using ASP.NET MVC 4 with the Web API. As I design this, the thing that concerns me is the static type checking of C#. What if the partner adds or removes elements from the data? We can probably defensively code for that, but I still feel some concern. Also, we have to do a fair amount of tedious coding to set up our API and then to turn around and call our partner's API. There probably is not much choice on it though. But, in the back of my mind I wonder if maybe a more dynamic language would be a better choice. I want to reach out and see if anybody has had to do this before, what technology solutions they have used (I am not attached to this one; these days Azure can host other technologies), and if anybody who has done something like this can point out any issues that came up. Thanks! Researching the issue seems to only find solutions which focus on connecting a SOAP web service over a proxy server, and not what I am referring to here. Note: Cross posted (by suggestion) from http://stackoverflow.com/questions/11906802/wrapping-rest-based-web-service Thank you!
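
    A minimal sketch of that proxy idea in ASP.NET Web API, assuming a hypothetical partner base URL and route (the names here are placeholders, not part of the original question):

        using System;
        using System.Net.Http;
        using System.Threading.Tasks;
        using System.Web.Http;

        public class BlobProxyController : ApiController
        {
            // Shared HttpClient pointed at the partner's REST endpoint (placeholder URL).
            private static readonly HttpClient Partner = new HttpClient
            {
                BaseAddress = new Uri("https://partner.example.com/api/")
            };

            // GET api/blobproxy/{id} - forwards the call and relays the partner's
            // response (status code, headers and body) back to the caller.
            public async Task<HttpResponseMessage> Get(string id)
            {
                return await Partner.GetAsync("blobs/" + Uri.EscapeDataString(id));
            }
        }

    Because Web API's content negotiation sits in front of actions that return CLR objects, returning typed models instead of the raw HttpResponseMessage would give the JSON-for-web / XML-for-desktop split described above for free.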

    Read the article

  • How to handle this unfortunately non hypothetical situation with end-users?

    - by User Smith
    I work in a medium sized company but with a very small IT force. Last year (2011), I wrote an application that is very popular with a large group of end-users. We hit a deadline at the end of last year and some functionality (I will call it funcA from now on) was not added into the application that was wanted at the very end. So, this application has been running in live/production since the end of 2011, I might add without issue. Yesterday, a whole group of end-users started complaining that funcA, which was never in the application, is no longer working. Our priority at this company is that if an application is broken it must be fixed first prior to prioritized projects. I have compared code and queries and there is no difference since 2011, which is proofA. I then was able to get one of the end-users to admit that it never worked (proofB), but since then that end-user has gone back and said that it was working previously......I believe the horde of end-users has assimilated her. I have also reviewed my notes for this project, which have requirements and daily updates regarding the project and specifically state, "funcA not achieved due to time constraints", proofC. I have spoken with many of them and I can see where they could be confused as they are very far from a programming background, but I also know they are intelligent enough to act in a group in order to bypass project prioritization orders in order to get functionality that they want to make their job easier. The worst part is that now groupthink is setting in and my boss and the head of IT are actually starting to believe them, even though there are no code or query changes. As far as reviewing the state of the logic, it is very cut and dry to the point of: if 1 = 1, funcA will not work. So, this is the end of the description of my scenario, but I am trying not to get severely dinged on my performance metrics due to this, which would essentially have me moved to fixing a production problem that doesn't exist and that will probably take over 1 month. I am looking for direct answers to this question. This question is not for rants, polling, or discussions as this is not the format for StackExchange. Please don't downvote me too terribly; it is pretty common on this specific site of Stack Exchange. I am looking for honest answers to this situation and I couldn't find a forum more appropriate.

    Read the article

  • Why doesn't my texture display with this GLSL shader?

    - by Chewy Gumball
    I am trying to display a DXT1 compressed texture on a quad using a VBO and shaders, but I have been unable to get it working. All I get is a black square. I know my texture is uploaded properly because when I use immediate mode without shaders the texture displays fine but I will include that part just in case. Also, when I change the gl_FragColor to something like vec4 (0.0, 1.0, 1.0, 1.0) then I get a nice blue quad so I know that my shader is able to set the colour. It appears to be either the texture is not being bound correctly in the shader or the texture coordinates are not being picked up. However, I can't find the error! What am I doing wrong? I am using OpenTK in C# (not xna). Vertex Shader: void main() { gl_TexCoord[0] = gl_MultiTexCoord0; // Set the position of the current vertex gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; } Fragment Shader: uniform sampler2D diffuseTexture; void main() { // Set the output color of our current pixel gl_FragColor = texture2D(diffuseTexture, gl_TexCoord[0].st); //gl_FragColor = vec4 (0.0,1.0,1.0,1.0); } Drawing Code: int vb, eb; GL.GenBuffers(1, out vb); GL.GenBuffers(1, out eb); // Position Texture float[] verts = { 0.1f, 0.1f, 0.0f, 0.0f, 0.0f, 1.9f, 0.1f, 0.0f, 1.0f, 0.0f, 1.9f, 1.9f, 0.0f, 1.0f, 1.0f, 0.1f, 1.9f, 0.0f, 0.0f, 1.0f }; uint[] indices = { 0, 1, 2, 0, 2, 3 }; //upload data to the VBO GL.BindBuffer(BufferTarget.ArrayBuffer, vb); GL.BindBuffer(BufferTarget.ElementArrayBuffer, eb); GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(verts.Length * sizeof(float)), verts, BufferUsageHint.StaticDraw); GL.BufferData(BufferTarget.ElementArrayBuffer, (IntPtr)(indices.Length * sizeof(uint)), indices, BufferUsageHint.StaticDraw); //Upload texture int buffer = GL.GenTexture(); GL.BindTexture(TextureTarget.Texture2D, buffer); GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapS, (float)TextureWrapMode.Repeat); GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapT, (float)TextureWrapMode.Repeat); GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (float)TextureMagFilter.Linear); GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (float)TextureMinFilter.Linear); GL.TexEnv(TextureEnvTarget.TextureEnv, TextureEnvParameter.TextureEnvMode, (float)TextureEnvMode.Modulate); GL.CompressedTexImage2D(TextureTarget.Texture2D, 0, texture.format, texture.width, texture.height, 0, texture.data.Length, texture.data); //Draw GL.UseProgram(shaderProgram); GL.EnableClientState(ArrayCap.VertexArray); GL.EnableClientState(ArrayCap.TextureCoordArray); GL.VertexPointer(3, VertexPointerType.Float, 5 * sizeof(float), 0); GL.TexCoordPointer(2, TexCoordPointerType.Float, 5 * sizeof(float), 3); GL.ActiveTexture(TextureUnit.Texture0); GL.Uniform1(GL.GetUniformLocation(shaderProgram, "diffuseTexture"), 0); GL.DrawElements(BeginMode.Triangles, indices.Length, DrawElementsType.UnsignedInt, 0);
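
    One detail worth double-checking in the drawing code above: when a VBO is bound, the last argument of GL.VertexPointer and GL.TexCoordPointer is a byte offset into the buffer rather than a float index, so passing 3 would point the texture coordinates into the middle of the position data. A sketch of the byte-offset form (assuming OpenTK's integer-offset overloads):

        // Stride is 5 floats per vertex; offsets into the bound VBO are in bytes.
        int stride = 5 * sizeof(float);
        GL.VertexPointer(3, VertexPointerType.Float, stride, 0);                      // positions start at byte 0
        GL.TexCoordPointer(2, TexCoordPointerType.Float, stride, 3 * sizeof(float));  // texcoords start after 3 floats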

    Read the article

  • Unity Dash and top toolbar won't open after updating to 12.10

    - by pgrytdal
    Today, I updated to Ubuntu 12.10. After re-starting, like the updater suggested, the toolbar on the top of the screen and the Dash won't load. I seem to be missing other features as well, like Alt+Tab to switch windows, etc. I am able to access the Terminal by pressing Ctrl+Alt+T, which is how I was able to access Firefox. How do I fix this problem? Edit: 2:10 PM on 10/19/12 As Chris Carter suggested, I'm including the results of the terminal command lspci (Sorry... I don't know how to format between backticks): 00:00.0 Host bridge: Advanced Micro Devices [AMD] RS780 Host Bridge 00:01.0 PCI bridge: Acer Incorporated [ALI] AMD RS780/RS880 PCI to PCI bridge (int gfx) 00:04.0 PCI bridge: Advanced Micro Devices [AMD] RS780/RS880 PCI to PCI bridge (PCIE port 0) 00:06.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (PCIE port 2) 00:07.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (PCIE port 3) 00:11.0 SATA controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 SATA Controller [AHCI mode] 00:12.0 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB OHCI0 Controller 00:12.1 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0 USB OHCI1 Controller 00:12.2 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB EHCI Controller 00:13.0 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB OHCI0 Controller 00:13.2 USB controller: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 USB EHCI Controller 00:14.0 SMBus: Advanced Micro Devices [AMD] nee ATI SBx00 SMBus Controller (rev 3a) 00:14.2 Audio device: Advanced Micro Devices [AMD] nee ATI SBx00 Azalia (Intel HDA) 00:14.3 ISA bridge: Advanced Micro Devices [AMD] nee ATI SB7x0/SB8x0/SB9x0 LPC host controller 00:14.4 PCI bridge: Advanced Micro Devices [AMD] nee ATI SBx00 PCI to PCI Bridge 00:18.0 Host bridge: Advanced Micro Devices [AMD] Family 11h Processor HyperTransport Configuration (rev 40) 00:18.1 Host bridge: Advanced Micro Devices [AMD] Family 11h Processor Address Map 00:18.2 Host bridge: Advanced Micro Devices [AMD] Family 11h Processor DRAM Controller 00:18.3 Host bridge: Advanced Micro Devices [AMD] Family 11h Processor Miscellaneous Control 00:18.4 Host bridge: Advanced Micro Devices [AMD] Family 11h Processor Link Control 01:05.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI RS780M/RS780MN [Mobility Radeon HD 3200 Graphics] 01:05.1 Audio device: Advanced Micro Devices [AMD] nee ATI RS780 HDMI Audio [Radeon HD 3000-3300 Series] 03:00.0 Ethernet controller: Broadcom Corporation NetLink BCM5784M Gigabit Ethernet PCIe (rev 10) 09:00.0 Network controller: Atheros Communications Inc. AR928X Wireless Network Adapter (PCI-Express) (rev 01)

    Read the article

  • New Worklist features on 12.1.3

    - by Vijay Shanmugam
    The following new Worklist features are available on E-Business Suite 12.1.3 via Patch 13646173. Ability to view comments on top of a notification: If an action is performed on a notification, such as Reassign, Request for Information or Provide Information, the recipient of the notification will see who performed the last action and the associated comment on top of the notification. Reassigning a request for information notification: If an approver requests more information on a notification from its submitter, the submitter now has two options: Answer Request for More Information, or Transfer Request for More Information. If the submitter thinks the requested information can be provided by another user, he/she can transfer the request to the other user. Please note that only Transfer is supported for Request for More Information. Once transferred, the submitter cannot access the notification and provide the requested information. Use actual sent date when reassigning a notification: The Sent field in the notification header always showed the date on which the notification was first created. If the notification was later reassigned, the Sent date was not updated to show the last action date. This caused problems in the following scenario: an approval notification was sent to JACK on 01-JAN-2012; JACK waited for 10 days before reassigning to JILL on 10-JAN-2012; JILL does not see the notification as sent on 10-JAN-2012, instead she sees it as sent on 01-JAN-2012. Although the notification was originally created on 01-JAN-2012, it was sent to JILL only on 10-JAN-2012. The enhancement now shows the correct sent date in the Worklist and Notification Details page. Figure 1 - Depicts all the above 3 features. Related Action History for response required notification: So far it was possible to embed the Action History of a response-required notification into another FYI notification using the #RELATED_HISTORY attribute (please refer to the Workflow Developer Guide for details about this attribute). The enhancement now enables developers to embed the Action History of one response-required notification into another response-required notification. To embed the Action History of one response-required notification into another, create the message attribute #RELATED_HISTORY. Set a value for this attribute at run-time in the following format: {TITLE}[ITEM_TYPE:ITEM_KEY]PROCESS_NAME:ACTIVITY_LABEL_NAME. The TITLE, ITEM_TYPE and ITEM_KEY are optional values. TITLE is used as the Related Action History header title. If TITLE is not present, then a default title "Related Action History" is shown. If ITEM_TYPE is present and ITEM_KEY is not, for example {TITLE}[ITEM_TYPE]PROCESS_NAME:ACTIVITY_LABEL_NAME, the Related Action History is populated from the parent item type of the current item. If both ITEM_TYPE and ITEM_KEY are present, for example {TITLE}[ITEM_TYPE:ITEM_KEY]PROCESS_NAME:ACTIVITY_LABEL_NAME, the Related Action History is populated from that specific instance activity. Figure 2 - Depicts Related Action History feature.

    Read the article

  • With AMD style modules in JavaScript is there any benefit to namespaces?

    - by gman
    Coming from C++ originally, and seeing lots of Java programmers doing the same, we brought namespaces to JavaScript. See Google's Closure Library as an example, where they have a main namespace, goog, and under that many more namespaces like goog.async, goog.graphics. But now, having learned the AMD style of requiring modules, it seems like namespaces are kind of pointless in JavaScript. Not only pointless but even arguably an anti-pattern. What is AMD? It's a way of defining and including modules that removes all direct dependencies. Effectively you do this // some/module.js define([ 'name/of/needed/module', 'name/of/someother/needed/module', ], function( RefToNeededModule, RefToSomeOtherNeededModule) { ...code... return object or function }); This format lets the AMD support code know that this module needs name/of/needed/module.js and name/of/someother/needed/module.js loaded. The AMD code can load all the modules and then, assuming no circular dependencies, call the define function on each module in the correct order, record the object/function returned by the module as it calls them, and then call any other modules' define function with references to those modules. This seems to remove any need for namespaces. In your own code you can call the reference to any other module anything you want. For example, if you had 2 string libraries, even if they define similar functions, as long as they follow the AMD pattern you can easily use both in the same module. No need for namespaces to solve that. It also means there are no hard-coded dependencies. For example, in Google's Closure any module could directly reference another module with something like var value = goog.math.someMathFunc(otherValue) and if you're unlucky it will magically work, whereas with AMD style you'd have to explicitly include the math library, otherwise the module wouldn't have a reference to it, since there are no globals with AMD. On top of that, dependency injection for testing becomes easy. None of the code in the AMD module references things by namespace, so there are no hard-coded namespace paths, and you can easily mock classes at testing time. Is there any other point to namespaces, or is that something that C++ / Java programmers are bringing to JavaScript that arguably doesn't really belong?

    Read the article

  • Using the @ in SQL Azure Connections

    - by BuckWoody
    The other day I was working with a client on an application they were changing to a hybrid architecture – some data on-premise and other data in SQL Azure and Windows Azure Blob storage. I had them make a couple of corrections - the first was that all communications to SQL Azure need to be encrypted. It’s a simple addition to the connection string, depending on the library you use. Which brought up another interesting point. They had been using something that looked like this, using the .NET provider: Server=tcp:[serverName].database.windows.net;Database=myDataBase; User ID=LoginName;Password=myPassword; Trusted_Connection=False;Encrypt=True; This includes most of the formatting needed for SQL Azure. It specifies TCP as the transport mechanism, the database name is included, Trusted_Connection is off, and encryption is on. But it needed one more change: Server=tcp:[serverName].database.windows.net;Database=myDataBase; User ID=[LoginName]@[serverName];Password=myPassword; Trusted_Connection=False;Encrypt=True; Notice the difference? It’s the User ID parameter. It includes the @ symbol and the name of the server – not the whole DNS name, just the server name itself. The developers were a bit surprised, since it had been working with the first format that just used the user name. Why did both work, and why is one better than the other? It has to do with the connection library you use. For most libraries, the user name is enough. But for some libraries (subject to change so I don’t list them here) the server name parameter isn’t sent in the way the load balancer understands, so you need to include the server name right in the login, so the system can parse it correctly. Keep in mind, the string limit for that is 128 characters – so take the @ symbol and the server name into consideration for user names. The user connection info is detailed here: http://msdn.microsoft.com/en-us/library/ee336268.aspx Upshot? Include the @servername on your connection string just to be safe. And plan for that extra space…  
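
    For reference, a sketch of assembling that second (working) connection string in C# with SqlConnectionStringBuilder (the server and login names below are placeholders, not real values):

        using System.Data.SqlClient;

        class AzureConnectionExample
        {
            static string Build()
            {
                var builder = new SqlConnectionStringBuilder
                {
                    DataSource = "tcp:myserver.database.windows.net",  // [serverName].database.windows.net
                    InitialCatalog = "myDataBase",
                    UserID = "LoginName@myserver",                      // note the @servername suffix
                    Password = "myPassword",
                    IntegratedSecurity = false,                         // Trusted_Connection=False
                    Encrypt = true
                };
                return builder.ConnectionString;
            }
        }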

    Read the article

  • Should EICAR be updated to test the revision of Antivirus system?

    - by makerofthings7
    I'm posting this here since programmers write viruses and AV software. They also have the best knowledge of heuristics and how AV systems work (cloaking etc). The EICAR test file was used to functionally test an antivirus system. As it stands today almost every AV system will flag EICAR as being a "test" virus. For more information on this historic test virus please click here. Currently the EICAR test file is only good for testing the presence of an AV solution, but it doesn't check for engine file or DAT file up-to-dateness. In other words, why do a functional test of a system that could have definition files that are more than 10 years old? With the increase of zero-day threats it doesn't make much sense to functionally test your system using EICAR. That being said, I think EICAR needs to be updated/modified to be an effective test that works in conjunction with an AV management solution. This question is about real-world testing, without using live viruses... which is the intent of the original EICAR. That being said, I'm proposing a new EICAR file format with the appendage of an XML blob that will conditionally cause the antivirus engine to respond. X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-EXTENDED-ANTIVIRUS-TEST-FILE!$H+H* <?xml version="1.0"?> <engine-valid-from>2010-1-1Z</engine-valid-from> <signature-valid-from>2010-1-1Z</signature-valid-from> <authkey>MyTestKeyHere</authkey> In this sample, the antivirus engine would only alert on the EICAR file if both the signature and the engine file are equal to or newer than the valid-from dates. Also, there is a passcode that will limit the usage of EICAR to the system administrator. If you have a background in "Test Driven Design" (TDD) for software, you may get that all I'm doing is applying the principles of TDD to my infrastructure. Based on your experience and contacts, how can I make this idea happen?
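
    To make the date check in the proposal concrete, here is a rough sketch of what an engine-side validation could look like. The wrapper root element and ISO-8601 date format are assumptions of the sketch, since the blob above has no single root and uses a loose date format:

        using System;
        using System.Xml.Linq;

        static class ExtendedEicar
        {
            // Returns true if the AV engine should raise an alert for the extended
            // EICAR file, i.e. only when both the engine and the signatures are at
            // least as new as the dates embedded in the test file.
            public static bool ShouldAlert(XElement blob, DateTime engineDate, DateTime signatureDate)
            {
                DateTime engineValidFrom = DateTime.Parse(blob.Element("engine-valid-from").Value);
                DateTime signatureValidFrom = DateTime.Parse(blob.Element("signature-valid-from").Value);
                return engineDate >= engineValidFrom && signatureDate >= signatureValidFrom;
            }
        }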

    Read the article

  • Reading a ZFS USB drive with Mac OS X Mountain Lion

    - by Karim Berrah
    The problem: I'm using a MacBook, mainly with Solaris 11, but sometimes with Mac OS X (ML). The only missing thing is that Mac OS X can't read my external ZFS based USB drive, where I store all my data. So, I decided to look for a solution. Possible solution: I decided to use VirtualBox with a Solaris 11 VM as a passthrough to my data. Here are the required steps: Installing a Solaris 11 VM: Install VirtualBox on your Mac OS X, add the extension pack (needed for USB). Plug your ZFS based USB drive into your Mac, ignore it when asked to initialize it. Create a VM for Solaris (bridged network), and before installing it, create a USB filter (in the settings of your VBox VM, go to Ports, then USB, then add a new USB filter from the attached device "grey usb-connector logo with green plus sign"). Install a Solaris 11 VM, boot it, and install the Guest Additions. Check with "ifconfig -a" the IP address of your Solaris VM. Creating a path to your ZFS USB drive: In Mac OS X, use the "Disk Utility" to unmount the USB attached drive, and unplug the USB device. Switch back to VirtualBox, select the top of the window where your Solaris 11 is running, plug in your ZFS USB drive, and select "ignore" if Mac OS invites you to initialize the disk. In the VirtualBox VM menu, go to "Devices" then "USB Devices" and select your USB device from the drop-down menu. Connecting your Solaris VM to the USB drive: Inside Solaris, you can now check that your device is accessible by using the "format" CLI command. If not, repeat the previous steps. Now, with root privilege, force a zpool import -f myusbdevicepoolname because this pool was created on another system. Check that you see your new pool with "zpool status". Share your pool with NFS: share -F nfs /myusbdevicepoolname. Accessing the USB ZFS drive from Mac OS X: This is the easiest step: access an NFS share from Mac OS. Create a "ZFSdrive" folder on your Mac OS desktop, then from a terminal under Mac OS: mount -t nfs IPaddressOfMySolarisVM:/myusbdevicepoolname /Users/yourusername/Desktop/ZFSdrive et voila! You can access your data, on a ZFS USB drive, directly from your Mountain Lion desktop. You can play with the share rights in order to alter any read/write rights as needed. You can also activate compression and encryption inside the Solaris 11 VM ...

    Read the article

  • Fail to start windows after Ubuntu 11.10 install

    - by user49995
    Computer: HP Pavilion dv7-6140eo OS: Originally Win7 I recently decided to try out Ubuntu, and I decided to dual-boot it with Windows 7. First I googled some how-tos, then I downloaded Ubuntu onto a memory stick and made a second partition (I originally only had one partition, which I shrunk, and used the unallocated space to install onto during the Ubuntu install). During the install I set the format type to ext4 (or something, it was the default option), chose the "in the beginning" option and set the last option (the mount point) as "/". The install was successful. Although, when I restarted my computer I wasn't able to choose which operating system to start; it went right into Windows. After showing the Windows logo for half a second before rebooting, I get a blue screen (see bottom of the page). Trying to fix it, I deleted the newly made partition I had just installed Ubuntu onto (seeing it wasn't working either). This made no difference. I proceeded with installing Ubuntu again, so I would at least have a functioning computer, and now Ubuntu works fine (on it now). The only difference on start-up is that I get a GRUB window asking me to choose between several options including Linux and Windows 7 (loader). Now, if I choose Windows 7, I get the message "Windows was unable to start. A recent software or hardware change might be the cause". It recommends me to choose the first option of the two it provides: to start the start-up repair tool, the second option being starting Windows normally. If I start Windows normally, the same thing happens as earlier. My computer does not have a Windows installation CD. Although, it has (at least it used to, if I haven't screwed that up too) a 17 GB recovery partition. In addition I made an image of the computer onto an external hard drive when I first got it. Though, I have no idea how to use either. If anyone has any idea how I can make Windows work again or reinstall it (I already backed up my files) it would be greatly appreciated. I would still prefer to dual boot between the two functioning operating systems, but I will settle for a functioning Windows 7. Thanks a lot for any replies. Blue screen: A problem has been detected and Windows has been shut down to prevent damage to your computer. If this is the first time you've seen this Stop error screen, restart your computer. If this screen appears again, follow these steps: Check for viruses on your computer. Remove any newly installed hard drives or hard drive controllers. Check your hard drive to make sure it is properly configured and terminated. Run CHKDSK /F to check for hard drive corruption, and then restart your computer. Technical information: **STOP: 0x0000007B (0xFFFFF880009A97E8,0xFFFFFFFFC0000034, 0x0000000000000000,0x0000000000000000

    Read the article

  • New ZFS Encryption features in Solaris 11.1

    - by darrenm
    Solaris 11.1 brings a few small but significant improvements to ZFS dataset encryption.  There is a new read-only property 'keychangedate' that shows the date and time of the last wrapping key change (basically the last time 'zfs key -c' was run on the dataset); this is similar to the 'rekeydate' property that shows the last time we added a new data encryption key. $ zfs get creation,keychangedate,rekeydate rpool/export/home/bob NAME PROPERTY VALUE SOURCE rpool/export/home/bob creation Mon Mar 21 11:05 2011 - rpool/export/home/bob keychangedate Fri Oct 26 11:50 2012 local rpool/export/home/bob rekeydate Tue Oct 30 9:53 2012 local The above example shows that we have changed both the wrapping key and added new data encryption keys since the filesystem was initially created.  If we haven't changed a wrapping key then it will be the same as the creation date.  It should be obvious, but for filesystems that were created prior to Solaris 11.1 we don't have this data, so it will be displayed as '-' instead. Another change that I made was to relax the restriction that the size of the wrapping key needed to match the size of the data encryption key (i.e. the size given in the encryption property).  In Solaris 11 Express and Solaris 11, if you set encryption=aes-256-ccm we required that the wrapping key be 256 bits in length.  This restriction was unnecessary and made it impossible to select encryption property values with key lengths 128 and 192 when the wrapping key was stored in the Oracle Key Manager.  This is because currently the Oracle Key Manager stores AES 256 bit keys only.  Now with Solaris 11.1 this restriction has been removed. There is still one case where the wrapping key size and data encryption key size will always match, and that is where the keysource property sets the format to be 'passphrase', since this is a key generated internally to libzfs and, to preserve compatibility on upgrade from older releases, the code will always generate a wrapping key (using PKCS#5 PBKDF2 as before) that matches the key length size of the encryption property. The pam_zfs_key module has been updated so that it allows you to specify encryption=off. There were also some bugs fixed, including not attempting to load keys for datasets that are delegated to zones, and some other fixes to error paths to ensure that we could support Zones on Shared Storage where all the datasets in the ZFS pool were encrypted, which I discussed in my previous blog entry. If there are features you would like to see for ZFS encryption please let me know (direct email or comments on this blog are fine, or if you have a support contract have your support rep log an enhancement request).

    Read the article

  • what's wrong with my sitemap?

    - by cadrian
    Google Webmaster Tools says that my sitemap is in a wrong format. I don't see my mistake, I think I followed every guideline provided by Google. Can someone help me? <?xml version="1.0" encoding="UTF-8"?> <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"> <url> <loc>http://www.vocalcontraste.fr/</loc> </url> <url> <loc>http://www.vocalcontraste.fr/presentation/</loc> <lastmod>2012-01-30T21:49:06+01:00</lastmod> </url> <url> <loc>http://www.vocalcontraste.fr/les-concerts/</loc> <lastmod>2012-12-13T21:55:00+01:00</lastmod> </url> <url> <loc>http://www.vocalcontraste.fr/ecoutez-contraste/</loc> <lastmod>2012-07-13T18:19:45+01:00</lastmod> </url> <url> <loc>http://www.vocalcontraste.fr/repertoire/</loc> <lastmod>2012-07-13T17:30:14+01:00</lastmod> </url> <url> <loc>http://www.vocalcontraste.fr/la-presse/</loc> <lastmod>2012-07-11T08:17:48+01:00</lastmod> </url> <url> <loc>http://www.vocalcontraste.fr/recrutement/</loc> <lastmod>2012-02-01T22:22:03+01:00</lastmod> </url> <url> <loc>http://www.vocalcontraste.fr/nos-partenaires/</loc> <lastmod>2012-07-11T07:43:31+01:00</lastmod> </url> <url> <loc>http://www.vocalcontraste.fr/contactez-nous/</loc> <lastmod>2012-02-02T19:01:32+01:00</lastmod> </url> </urlset>

    Read the article

  • How to Add a Business Card Image to a Signature in Outlook 2013 Without the vCard (.vcf) File

    - by Lori Kaufman
    When you add a business card to a signature, an image of the business card is inserted into the signature and the vCard (.vcf) file is attached. If you don’t want to attach the vCard file, you can insert the image only into your signature. To insert only the image of your business card without the .vcf file, click People on the Navigation Bar at the bottom of the Outlook window. To get a business card image we can use, we must view the contacts in any form other than People, so we can open the full contact editing window. To do this, click on a different view in the Current View section of the Home tab. We chose to view our contacts in the Business Card format. Double-click on your contact in the current view. The full contact editing window displays with an image of the business card on the right. Right-click on the business card image and select Copy Image from the popup menu. To close the contact editing window, click the File tab and click Close in the menu list on the left. NOTE: You can also click the X in the upper, right corner of the contact editing window to close it. To open the signature editor, click the File tab. Click Options in the menu list on the left side of the Account Information screen. On the Outlook Options dialog box, click Mail in the list of options on the left side of the dialog box. On the Mail screen, click Signatures in the Compose messages section. NOTE: You can also access the Signatures and Stationery dialog box from the Message window for new emails and drafts. Click New Email on the Home tab or double-click an email in the Drafts folder to access the Message window. For more information, see our article about assigning a default signature. In the signature editor, right-click and select Paste from the popup menu. The image is inserted into the signature. You can also use this method to copy a business card image for use in other documents and programs. It’s also possible to insert the vCard (.vcf) file into a signature without the image. We’ll cover that topic tomorrow.     

    Read the article

  • Suggestions for Summer Intern Application Assignments

    - by orangepips
    As part of our application process we want prospective college interns to complete an assignment on their own - either programming or analytical - to give us something tangible to evaluate, such as code or a flowchart. I have two ideas for these assignments, one programming and one analytical, and I am interested in gathering feedback about them. Programming Assignment: Generate a month's calendar for a given date. The first row should indicate the days of the week (e.g. Sunday - Saturday). Each subsequent row should contain a week's days. The date supplied should be highlighted (e.g. bolded). I am thinking we'll probably prescribe the output format even more strictly - probably down to what the HTML source should look like, including CSS classes. The thinking is that this forces applicants to actually do some work rather than merely copying a solution from the internet. Analytical Assignment: Diagram or describe in prose a system for managing a set of traffic lights for traffic at a four-way intersection. Each direction (i.e. North, South, East and West) has two lanes (i.e. right and left). The left lane is turn-only and has a green arrow light to indicate right of way. The system is able to detect if lanes have cars in them and change the lights accordingly. I would expect a flow chart or some prose describing a finite state machine that deals with each contingency. This would hopefully provide some indication of the applicant's ability to reason through a logic problem of sorts and articulate an approach for solving it. Areas Seeking Feedback: Is it unreasonable to ask this of applicants? If not, is it better to request before or after a phone screen? Are these questions too hard or easy for a collegiate audience? Any suggestions for alternate questions? Do these seem like good tools for analyzing people who would be part of a software development life cycle? Programming language suggestions - I'm thinking Java, Python and/or C# (we're actually a ColdFusion shop).
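
    For illustration, a rough C# sketch of the kind of answer the programming assignment is after. The HTML layout and CSS class names here are placeholders, not part of any actual assignment spec:

        using System;
        using System.Globalization;
        using System.Text;

        static class CalendarAssignment
        {
            // Builds an HTML table for the month containing 'date',
            // with the supplied day highlighted via a CSS class.
            public static string MonthCalendarHtml(DateTime date)
            {
                var sb = new StringBuilder();
                sb.AppendLine("<table class=\"calendar\">");

                // Header row: Sunday through Saturday.
                sb.Append("<tr>");
                foreach (string day in CultureInfo.InvariantCulture.DateTimeFormat.DayNames)
                    sb.AppendFormat("<th>{0}</th>", day);
                sb.AppendLine("</tr>");

                var first = new DateTime(date.Year, date.Month, 1);
                int daysInMonth = DateTime.DaysInMonth(date.Year, date.Month);
                int cell = 0;

                sb.Append("<tr>");
                // Pad the first week up to the weekday the month starts on.
                for (; cell < (int)first.DayOfWeek; cell++)
                    sb.Append("<td></td>");

                for (int day = 1; day <= daysInMonth; day++, cell++)
                {
                    if (cell > 0 && cell % 7 == 0)
                        sb.Append("</tr><tr>");  // start a new week row
                    string css = day == date.Day ? " class=\"highlight\"" : "";
                    sb.AppendFormat("<td{0}>{1}</td>", css, day);
                }
                sb.AppendLine("</tr>");
                sb.AppendLine("</table>");
                return sb.ToString();
            }
        }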

    Read the article

  • Interaction of a GUI-based App and Windows Service

    - by psubsee2003
    I am working on a personal project that will be designed to help manage my media library, specifically recordings created by Windows Media Center. So I am going to have the following parts to this application: A Windows Service that monitors the recording folder. Once a new recording is completed that meets specific criteria, it will call several 3rd party CLI applications to remove the commercials and re-encode the video into a more hard-drive friendly format. A controller GUI to modify settings of the service, specifically to add new shows to watch for and to modify parameters for the CLI applications. A standalone (GUI-based) desktop application that can perform many of the same functions as the Windows Service, except manually on specific files instead of automatically based on specific criteria. (It should be mentioned that I have limited experience with an application of this complexity, and I have absolutely zero experience with Windows Services.) Since the 1st and 3rd bullets share similar functionality, my design plan is to pull the common functionality into a separate library shared by both applications, but these 2 components do not need to interact otherwise. The 2nd and 3rd bullets seem to share some common functionality: both will have a GUI, and both will have to help define similar parameters (one to send to the service and the other to send directly to the CLI applications), so I can see some advantage to combining them into the same application. On the other hand, the standalone application (bullet #3) really does not need to interact with the service at all, except for possibly sharing a few common default parameters that can easily be put into an XML file in a common location, so it seems to make more sense to just keep everything separate. The controller GUI (2nd bullet) is where I am stuck at the moment. Do I just roll this functionality (allowing user interaction with the service to update settings and criteria) into the standalone application? Or would it be a better design decision to keep them separate? Specifically, I'm worried about adding the complexity of communicating with the Windows Service to the standalone application when it doesn't need it. Is WCF the right approach to allow the controller GUI to interact with the Windows Service? Or is there a better alternative? At the moment, I don't envision a need for a significant amount of interaction, maybe just adding a new task once in a while and occasionally tweaking a parameter, but when something is changed, I do expect the Windows Service to immediately use the new settings.
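
    If WCF over a local named pipe ends up being the route, the plumbing for the small amount of interaction described above can be quite thin. A sketch, with made-up contract and endpoint names:

        using System.ServiceModel;

        [ServiceContract]
        public interface IRecorderControl
        {
            [OperationContract]
            void AddWatchedShow(string showName);

            [OperationContract]
            void SetEncodingParameter(string name, string value);
        }

        // Hosted inside the Windows Service (e.g. in OnStart):
        //   var host = new ServiceHost(typeof(RecorderControl));
        //   host.AddServiceEndpoint(typeof(IRecorderControl),
        //       new NetNamedPipeBinding(), "net.pipe://localhost/MediaManager/control");
        //   host.Open();

        // Called from the controller GUI:
        //   var factory = new ChannelFactory<IRecorderControl>(
        //       new NetNamedPipeBinding(), "net.pipe://localhost/MediaManager/control");
        //   IRecorderControl proxy = factory.CreateChannel();
        //   proxy.AddWatchedShow("Some Show");

    Keeping the contract in the shared library mentioned above means the service and the GUI compile against the same interface, and the service can pick up a change immediately when an operation is invoked.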

    Read the article

  • 5.1 surround sound

    - by rocker9455
    Ok, So i've always had trouble with enabling 5.1 in ubuntu. Running 'alsamixer': I have: Master, Heaphones, PCM, Front, Front Mi, Front Mi, Surround, Center All are at 100% Card:HDA Intel Chip:Realtek ALC888 (This is my onboard sound, Its a dell studio, with 7.1 integrated sound) Running "speaker-test -c6 -twav" I only get the front 2 speakers (Right/Left) making any noise. The others make no noise at all. I have no other sound card to use as all my PCI slots are used up. Daemon.conf: ; daemonize = no ; fail = yes ; allow-module-loading = yes ; allow-exit = yes ; use-pid-file = yes ; system-instance = no ; enable-shm = yes ; shm-size-bytes = 0 # setting this 0 will use the system-default, usually 64 MiB ; lock-memory = no ; cpu-limit = no ; high-priority = yes ; nice-level = -11 ; realtime-scheduling = yes ; realtime-priority = 5 ; exit-idle-time = 20 ; scache-idle-time = 20 ; dl-search-path = (depends on architecture) ; load-default-script-file = yes ; default-script-file = ; log-target = auto ; log-level = notice ; log-meta = no ; log-time = no ; log-backtrace = 0 resample-method = speex-float-1 ; enable-remixing = yes ; enable-lfe-remixing = no flat-volumes = no ; rlimit-fsize = -1 ; rlimit-data = -1 ; rlimit-stack = -1 ; rlimit-core = -1 ; rlimit-as = -1 ; rlimit-rss = -1 ; rlimit-nproc = -1 ; rlimit-nofile = 256 ; rlimit-memlock = -1 ; rlimit-locks = -1 ; rlimit-sigpending = -1 ; rlimit-msgqueue = -1 ; rlimit-nice = 31 ; rlimit-rtprio = 9 ; rlimit-rttime = 1000000 ; default-sample-format = s16le ; default-sample-rate = 44100 ; default-sample-channels = 6 ; default-channel-map = front-left,front-right default-fragments = 8 default-fragment-size-msec = 10

    Read the article

  • problems programmatically creating UIView on iPad App

    - by user3871
    I have been struggling with this problem for a few days. My iPad app is designed to be a portrait game. To satisfy Apple's expectation, I also support landscape mode. When it goes into landscape mode, the game goes into a letterbox format with black borders on the sides. My problem is I am creating the UIWindow and UIView programmatically. For some unknown reason, the touch controls are "locked" in to think I'm always in landscape mode. And even though visually in portrait mode everything looks correct, the top and bottom of the screen do not respond to touch. To summarize how I am setting this up, let me provide the skeletal framework of what I'm doing: in main.cpp: int retVal = UIApplicationMain(argc, argv, nil, @"derbyPoker_ipadAppDelegate"); In the delegate, I am doing this: - (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions { CGRect screenBounds = [[UIScreen mainScreen] bounds]; CGFloat scale = [[ UIScreen mainScreen] scale ]; m_device_width = screenBounds.size.width; m_device_height = screenBounds.size.height; m_device_scale = scale; // Everything is built assuming 640x960 window = [[ UIWindow alloc ] initWithFrame:[[UIScreen mainScreen] bounds]]; viewController = [ glView new ]; [self doStateChange:[blitz class]]; return YES; } The last bit of code sets up the UIView... - (void) doStateChange: (Class) state{ viewController.view = [[state alloc] initWithFrame:CGRectMake(0, 0, m_device_width, m_device_height) andManager:self]; viewController.view.contentMode = UIViewContentModeScaleAspectFit; viewController.view.autoresizesSubviews = YES; [window addSubview:viewController.view]; [window makeKeyAndVisible]; } The problem seems to be related to the line viewController.view.contentMode = UIViewContentModeScaleAspectFit; If I remove that line, touch works correctly in portrait mode. But the negative is that when I'm in landscape mode, the game stretches incorrectly. So that's not an option. The frustrating thing is, when I originally had this set up with a NIB file, it worked fine. I have read through the docs about UIWindow, UIViewController and UIView and have tried about everything, to no avail. Any help would be greatly appreciated.

    Read the article

  • Solaris 11

    - by user9154181
    Oracle has a strict policy about not discussing product features until they appear in shipping product. Now that Solaris 11 is publicly available, it is time to catch up. I will shortly be posting articles on a variety of new developments in the Solaris linkers and related bits: 64-bit Archives: After 40+ years of Unix, the archive file format has run out of room. The ar and link-editor (ld) commands have been enhanced to allow archives to grow past their previous 32-bit limits. Guidance: The link-editor is now willing and able to tell you how to alter your link lines in order to build better objects. Stub Objects: This is one of the bigger projects I've undertaken since joining the Solaris group. Stub objects are shared objects, built entirely from mapfiles, that supply the same linking interface as the real object, while containing no code or data. You can link to them, but cannot use them at runtime. It was pretty simple to add this ability to the link-editor, but the changes to the OSnet in order to apply them to building Solaris were massive. I discuss how we came to invent stub objects, how we apply them to build the OSnet in a more parallel and scalable manner, and the follow-on opportunities that have emerged from the new stub proto area we created to hold them. The elffile Utility: A new standard Solaris utility, elffile is a variant of the file utility, focused exclusively on linker-related files. elffile is of particular value for examining archives, as it allows you to find out what is inside them without having to first extract the archive members into temporary files. This release has been a long time coming. I joined the Solaris group in late 2005, and this will be my first FCS. From a user perspective, Solaris 11 is probably the biggest change to Solaris since Solaris 2.0. Solaris 11 polishes the ground-breaking features from Solaris 10 (DTrace, FMA, ZFS, Zones), and uses them to add a powerful new packaging system, numerous other enhancements and features, along with a huge modernization effort. I'm excited to see it go out into the world. I hope you enjoy using it as much as we did creating it. Software is never done. On to the next one...

    Read the article

  • Partial Shader Signatures HLSL D3D11 C++

    - by ThePhD
    I had been debugging a problem I was having in a single shader file with 2 functions in it. I'm using DirectX 11, vs_5_0 and ps_5_0. I have stripped it down to its basic components to understand what was going wrong with the shaders, because the different named components of the Pixel and Vertex shaders were swapping the data being input: void QuadVertex ( inout float4 position : SV_Position, inout float4 color : COLOR0, inout float2 tex : TEXCOORD0 ) { // ViewProject is a 4x4 matrix, // just included here to show the simple passthrough of the data position = mul(position, ViewProjection); } And a Pixel Shader: float4 QuadPixel ( float4 color : COLOR0, float2 tex : TEXCOORD0 ) : SV_Target0 { // Color is filled with position data and tex is // filled with color values from the Vertex Shader return color; } The ID3D11InputLayout and associated C++ code correctly compiles the shaders and sets them up with some simple primitive data: data[0].Position.x = 0.0f * 210; data[0].Position.y = 1.0f * 160; data[0].Position.z = 0.0f; data[1].Position.x = 0.0f * 210; data[1].Position.y = 0.0f * 160; data[1].Position.z = 0.0f; data[2].Position.x = 1.0f * 210; data[2].Position.y = 1.0f * 160; data[2].Position.z = 0.0f; data[0].Colour = Colors::Red; data[1].Colour = Colors::Red; data[2].Colour = Colors::Red; data[0].Texture = Vector2::Zero; data[1].Texture = Vector2::Zero; data[2].Texture = Vector2::Zero; When used with the shader, the float4 color always ended up with the position data, and the float2 tex always ended up with the color data. After a moment, I figured out that the shader's input and output signatures needed to be in the correct order and the correct format and be laid out in the exact order of the output from the Vertex Shader, regardless of the semantics: float4 QuadPixel ( float4 pos : SV_Position, float4 color : COLOR0, float2 tex : TEXCOORD0 ) : SV_Target0 { return color; } After finding this out, My question is: Why don't the semantics map the appropriate components when going from Vertex Shader to Pixel Shader? Is there any way that I can make it so certain semantics are always mapped to other semantics, or do I always have to follow the rigid Shader Signature (in this case, Position, Color, and Texture) ? As a side note for why I'm asking: I know that when using XNA, my shader signatures for functions could differ in position and even drop items from Vertex Shader to Pixel Shader function parameters, having only the COLOR0 and TEXCOORD0 components being used (and it would still match up correctly). However, I also know that XNA relied on DX9 (and maybe a little DX10) implementation, and that maybe this kind of flexibility no longer exists in DX11?

    Read the article

  • Ray Picking Problems

    - by A Name I Haven't Decided On
    I've read so many answers on here about how to do Ray Picking, that I thought I had the idea of it down. But when I try to implement it in my game, I get garbage. I'm working with LWJGL. Here's the code: public static Ray getPick(int mouseX, int mouseY){ glPushMatrix(); //Setting up the Mouse Clip Vector4f mouseClip = new Vector4f((float)mouseX * 2 / 960f - 1, 1 - (float)mouseY * 2 / 640f ,0 ,1); //Loading Matrices FloatBuffer modMatrix = BufferUtils.createFloatBuffer(16); FloatBuffer projMatrix = BufferUtils.createFloatBuffer(16); glGetFloat(GL_MODELVIEW_MATRIX, modMatrix); glGetFloat(GL_PROJECTION_MATRIX, projMatrix); //Assigning Matrices Matrix4f proj = new Matrix4f(); Matrix4f model = new Matrix4f(); model.load(modMatrix); proj.load(projMatrix); //Multiplying the Projection Matrix by the Model View Matrix Matrix4f tempView = new Matrix4f(); Matrix4f.mul(proj, model, tempView); tempView.invert(); //Getting the Camera Position in World Space. The 4th Column of the Model View Matrix. model.invert(); Point cameraPos = new Point(model.m30, model.m31, model.m32); //Theoretically getting the vector the Picking Ray goes Vector4f rayVector = new Vector4f(); Matrix4f.transform(tempView, mouseClip, rayVector); rayVector.translate((float)-cameraPos.getX(),(float) -cameraPos.getY(),(float) -cameraPos.getZ(), 0f); rayVector.normalise(); glPopMatrix(); //This Basically Spits out a value that changes as the Camera moves. //When the Mouse moves, the values change around 0.001 points from screen edge to edge. System.out.format("Vector: %f %f %f%n", rayVector.x, rayVector.y, rayVector.z); //return new Ray(cameraPos, rayVector); return null; } I don't really know why this isn't working. I was hoping some more experienced eyes might be able to help me out. I can get the camera position like a champ, it's the vector the rays going in that I can't seem to get right. Thanks.

    Read the article

  • Multi-Part Map Troubleshooting

    - by Michael Stephenson
    Scenario: I came across a nice little one with multi-part maps the other day. I had an orchestration where I needed to combine 4 input messages into one output message, like in the below table: Input Messages: Company Details, Member Details, Event Message, Member Search; Output Message: Member Import. I thought my orchestration was working fine, but for some reason when I was trying to send my message it had no content under the root node, like below: <ns0:ImportMemberChange xmlns:ns0="http://---------------/"></ns0:ImportMemberChange> My map is displayed in the below picture. I knew that the member search message may not have any elements under it, but its root element would always exist. The rest of the messages were expected to be fully populated. I tried a number of different things, and when testing my map outside of the orchestration it always worked fine. The Eureka Moment: The eureka moment came when I was looking at the XSLT produced by the map. Even though I'd tried swapping the order of the messages in the input of the map, you can see in the below picture that the first part of the processing of the message (with the red circle around it) is doing a for-each over the GetCompanyDetailsResult element within the GetCompanyDetailsResponse message. This is because the processing is driven by the output message format, and the first element to output is the OrganisationID, which comes from the GetCompanyDetailsResponse message. At this point I could focus my attention on this message, as the XSLT shows that if this XPath statement doesn't return an element from the GetCompanyDetailsResponse message then the whole body of the output message will not be produced and the output from the map would look like the message I was getting. <ns0:ImportMemberChange xmlns:ns0="http://---------------/"></ns0:ImportMemberChange> I was quickly able to prove this in my map test, which showed this was a likely candidate for the problem. I revisited the orchestration, focusing on the creation of the GetCompanyDetailsResponse message, and there was actually a bug in the orchestration which resulted in the message being incorrectly created; once this was fixed everything worked as expected. Conclusion: Originally I thought it was a problem with the map itself, and looking online there wasn't really much in the way of content around troubleshooting for multi-part map problems, so I thought I'd write this up. I guess technically it isn't a multi-part map problem, but I spent a good couple of hours the other day thinking it was.

    Read the article

  • LINQ to Twitter v2.1.09 Released

    - by Joe Mayo
    Originally posted on: http://geekswithblogs.net/WinAZ/archive/2013/10/15/linq-to-twitter-v2.1.09-released.aspx Today, I released LINQ to Twitter v2.1.09. Here are the important new changes. Bug Fixes: This is primarily a bug fix release. Most notably, there were authentication problems in WinRT apps. These are now fixed. New Features: One new feature is the addition of ApplicationOnlyAuthentication for WinRT. It is fully async. Here's how it works: var auth = new WinRtApplicationOnlyAuthorizer { Credentials = new InMemoryCredentials { ConsumerKey = "", ConsumerSecret = "" } }; if (auth == null || !auth.IsAuthorized) { await auth.AuthorizeAsync(); } var twitterCtx = new TwitterContext(auth); (from search in twitterCtx.Search where search.Type == SearchType.Search && search.Query == SearchTextBox.Text select search) .MaterializedAsyncCallback( async response => await Dispatcher.RunAsync( CoreDispatcherPriority.Normal, async () => { Search searchResponse = response.State.Single(); string message = string.Format( "Search returned {0} statuses", searchResponse.Statuses.Count); await new MessageDialog(message, "Search Complete").ShowAsync(); })); It's called the WinRtApplicationOnlyAuthorizer. You only need two tokens, ConsumerKey and ConsumerSecret, which come from your Twitter API application settings page. Note: You need a Twitter Application, which you can create at https://dev.twitter.com/. The MaterializedAsyncCallback materializes your query and handles the response. I put everything together in a lambda for demonstration purposes, but you can always replace the callback with a handler of type Action<TwitterAsyncResponse<IEnumerable<T>>>, where T is Search for this example. On the Horizon: The next version of LINQ to Twitter is in development. I discussed it at LINQ to Twitter Async. This isn't complete, but you can download the source code at the LINQ to Twitter site on CodePlex. I've completed all the spikes for what I thought would be the hard parts and now have prototypes of queries and commands working. This would be a good time to provide feedback if there are features in the current version that you think could be improved. The current driving forces for the next version will be async and PCL. @JoeMayo

    Read the article

  • Our own Daily WTF

    - by Dennis Vroegop
    Originally posted on: http://geekswithblogs.net/dvroegop/archive/2014/08/20/our-own-daily-wtf.aspx If you're a developer, you've probably heard of the website The Daily WTF. If you haven't, head on over to http://www.thedailywtf.com and read. And laugh. I'll wait. Read it? Good. If you're a bit like me, you've probably been wondering how on earth some people ever get hired as a software engineer. Most of the stories there seem too weird to be true: no developer would write software like that, right? And then you run into a little nugget of code one of your co-workers wrote. And then you realize: "Hey, it happens everywhere!" Look at this piece of art I found in our codebase recently: public static decimal ToDecimal(this string input) {     System.Globalization.CultureInfo cultureInfo = System.Globalization.CultureInfo.InstalledUICulture;     var numberFormatInfo = (System.Globalization.NumberFormatInfo)cultureInfo.NumberFormat.Clone();     int dotIndex = input.IndexOf(".");     int commaIndex = input.IndexOf(",");     if (dotIndex > commaIndex)         numberFormatInfo.NumberDecimalSeparator = ".";     else if (commaIndex > dotIndex)         numberFormatInfo.NumberDecimalSeparator = ",";     decimal result;     if (decimal.TryParse(input, System.Globalization.NumberStyles.Float, numberFormatInfo, out result))         return result;     else         throw new Exception(string.Format("Invalid input for decimal parsing: {0}. Decimal separator: {1}.", input, numberFormatInfo.NumberDecimalSeparator)); }  Me and a colleague have been looking long and hard at this, and what we concluded was the following: Apparently, we don't trust our users to be able to correctly set the culture in Windows. Users aren't able to determine if they should tell Windows to use a decimal point or a comma to display numbers. So what we've done here is make sure that whatever the user enters, we'll translate that into whatever the user WANTS to enter instead of what he actually did. So if you set your locale to US, since you're a US citizen, but you want to enter the number 12.34 in the Dutch style (because, you know, the Dutch are way cooler with numbers) and you enter 12,34, we will understand this and respect your wishes! Of course, if you change your mind and in the next input field you decide to use the decimal dot again, that's fine with us as well. We will do the hard work. Now, I am all for smart software. Software that can handle all sorts of input the user can think of. But this feels a little, uhm, I don't know... wrong. Or am I too old fashioned?
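
    For contrast, the "old fashioned" version the post is implicitly arguing for (trust whatever culture the user has actually configured) is only a couple of lines. A sketch, not taken from the codebase in question:

        using System.Globalization;

        public static class StringParsing
        {
            // Respect the configured culture instead of guessing the separator per call.
            public static decimal ToDecimal(this string input)
            {
                return decimal.Parse(input, NumberStyles.Float, CultureInfo.CurrentCulture);
            }
        }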

    Read the article

  • core.* files eating up server space (~50MB)

    - by skytreader
    I'm renting server space from someone and, upon logging into my control panel after quite some time, noticed an abnormal spike (~50MB) in the disk usage. Upon investigating, I found a lot of core.* files scattered around my public_html directory. Each one is more than 5MB in size but no more than 6MB. The * part is all numbers (in programming regex, that should be core\.\d+). I downloaded one and checked the contents. There were a lot of balderdash characters (NUL mostly, but also a scattering of ETB, ETX, STX) but there's this block of readable text which says: This text is part of the internal format of your mail folder, and is not a real message. It is created automatically by the mail system software. If deleted, important folder data will be lost, and it will be re-created with the data reset to initial values. Pretty self-explanatory. A few blocks above the text are some more readable messages that look like logs but are sandwiched in between non-printable characters. I've extracted some below. Scan not valid for mh mailboxes Bogus character 0x%x in news state Can't rewrite news state %.80s Error closing backup news state %.80s No state for newsgroup %.80s found Now, a few concerns: Am I under attack? The messages seem to be about my webmail but I don't use my personal webmail that much---only for a vanity email address and an inbox for an outdated comments system. However, lately, I seem to notice a spike in the spam for my vanity mail. (Note: the comments system is covered by a captcha but every now and then some get through. My vanity email has a spam filter but it isn't as good as I'd like.) Next, if this is a feature, can I turn it off? Is it advisable to? I've only 150MB so you see why I'm fretting over a 50MB spike. Some final details: my only server-side scripts are in PHP. The directory which accumulated the most number of these core files is the one containing the Wordpress-managed subdomain of my site. I manage my server through cPanel. Lastly, I decided to delete these files and after some checking nothing seems amiss in my websites or my mail. They are indeed the ones responsible for the ~50MB spike, as my disk space usage is back to expected.

    Read the article
