Search Results

Search found 29820 results on 1193 pages for 'default implementation'.


  • How to find out Vim's currently mapped commands

    - by Boldewyn
    I'm using Vim under Debian, Win Vista and WinXP (the latter two with Cygwin). To handle tabs more easily, I mapped <C-Left> and <C-Right> to :tab(prev|next). This mapping works like a charm on the Debian machine. On the Windows machines, however, pressing <C-Left> deletes 5 lines, as far as I can tell, and meddles with the cursor position, while <C-Right> does this too and additionally enters Insert mode.

    Question: To put it in a nutshell, how can I find out why Vim behaves as it does? Is there a way to backtrace the active commands and keystrokes? Could a plugin be the culprit? (I didn't install one; perhaps one is included by default in the Cygwin distro.) If so, how can I find it?

    Additional diagnosis:

    - This behaviour occurs regardless of any existing ~/.vimrc file (it is therefore not related to my above-mentioned mappings) and is not inherited from some /etc/vim/vimrc, since that doesn't exist in the default Cygwin installation.
    - :verbose map doesn't yield any new insights. Either nothing or my mentioned mappings appear, depending on the existence of the .vimrc file.
    - :help <C-Left> suggests that the default would be a simple cursor movement, which is apparently not the case.

    Vim's version under Cygwin:

        VIM - Vi IMproved 7.2 (2008 Aug 9, compiled Feb 11 2010 17:36:58)
        Included patches: 1-264
        Compiled by http://cygwin.com/
        Huge version without GUI. Features included (+) or not (-):
        +arabic +autocmd -balloon_eval -browse ++builtin_terms +byte_offset +cindent
        -clientserver -clipboard +cmdline_compl +cmdline_hist +cmdline_info +comments
        +cryptv +cscope +cursorshape +dialog_con +diff +digraphs -dnd -ebcdic
        +emacs_tags +eval +ex_extra +extra_search +farsi +file_in_path +find_in_path
        +float +folding -footer +fork() -gettext -hangul_input +iconv +insert_expand
        +jumplist +keymap +langmap +libcall +linebreak +lispindent +listcmds +localmap
        +menu +mksession +modify_fname +mouse -mouseshape +mouse_dec -mouse_gpm
        -mouse_jsbterm +mouse_netterm -mouse_sysmouse +mouse_xterm +multi_byte
        +multi_lang -mzscheme -netbeans_intg -osfiletype +path_extra -perl +postscript
        +printer +profile -python +quickfix +reltime +rightleft -ruby +scrollbind
        +signs +smartindent -sniff +statusline -sun_workshop +syntax +tag_binary
        +tag_old_static -tag_any_white -tcl +terminfo +termresponse +textobjects
        +title -toolbar +user_commands +vertsplit +virtualedit +visual +visualextra
        +viminfo +vreplace +wildignore +wildmenu +windows +writebackup -X11 -xfontset
        -xim -xsmp -xterm_clipboard -xterm_save
        system vimrc file: "$VIM/vimrc"
        user vimrc file: "$HOME/.vimrc"
        user exrc file: "$HOME/.exrc"
        fall-back for $VIM: "/usr/share/vim"
        Compilation: gcc -c -I. -Iproto -DHAVE_CONFIG_H -g -O2 -D_FORTIFY_SOURCE=1
        Linking: gcc -L/usr/local/lib -o vim.exe -lm -lncurses -liconv

  • What's wrong with my ext4 partition?

    - by bumbling fool
    What is wrong with this picture? Top is the output from "df -h", bottom is GParted. I suspect I'm missing a lot of free space. No problems other than that (yet). Can somebody suggest the best (non-destructive) way to correct this?

    sudo dumpe2fs -h /dev/sda3 (source: http://pastebin.com/nAvrdT4E):

        Filesystem volume name:   <none>
        Last mounted on:          /
        Filesystem UUID:          9f6eff64-60d7-4eec-81d5-1e8acd818b38
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    (none)
        Filesystem state:         clean
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              1602496
        Block count:              6406144
        Reserved block count:     320306
        Free blocks:              4842284
        Free inodes:              1361222
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Reserved GDT blocks:      1022
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         8176
        Inode blocks per group:   511
        RAID stride:              32692
        Flex block group size:    16
        Filesystem created:       Sun Nov 8 18:18:13 2009
        Last mount time:          Tue Mar 1 01:04:27 2011
        Last write time:          Mon Feb 28 04:27:34 2011
        Mount count:              16
        Maximum mount count:      28
        Last checked:             Thu Feb 24 06:23:39 2011
        Check interval:           15552000 (6 months)
        Next check after:         Tue Aug 23 07:23:39 2011
        Lifetime writes:          227 GB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        First orphan inode:       268015
        Default directory hash:   half_md4
        Directory Hash Seed:      cc101517-e617-482b-a883-a72919419c84
        Journal backup:           inode blocks
        Journal features:         journal_incompat_revoke
        Journal size:             128M
        Journal length:           32768
        Journal sequence:         0x001d3000
        Journal start:            7787

    fdisk and parted output per requests: http://pastebin.com/EGVH7Ken

  • Ubuntu 12.04 connected to wireless network but internet not working

    - by A.J.
    I can connect to my house's wireless network just fine, but when I'm connected I can't browse the web. Firefox starts connecting to a site and then just poops out. This doesn't happen on my roommates' computers (running Windows) or on our 3DSes, so I know it's just my laptop. I already tried:

        sudo dhclient -r
        sudo dhclient
        sudo ifconfig eth0 down
        sudo ifconfig eth0 up

    Results of a few commands I was asked to run in comments:

        $ ping -c 2 4.2.2.2
        PING 4.2.2.2 (4.2.2.2) 56(84) bytes of data.
        ^C
        --- 4.2.2.2 ping statistics ---
        2 packets transmitted, 0 received, 100% packet loss, time 1007ms

        $ ping -c 2 google.com
        PING google.com (173.194.33.38) 56(84) bytes of data.
        --- google.com ping statistics ---
        2 packets transmitted, 0 received, 100% packet loss, time 1006ms

        $ nm-tool
        NetworkManager Tool
        State: connected (global)

        - Device: eth0 --------------------------------------------------------
          Type:             Wired
          Driver:           atl1c
          State:            unavailable
          Default:          no
          HW Address:       88:AE:1D:6B:4E:E7
          Capabilities:
            Carrier Detect: yes
            Speed:          100 Mb/s
          Wired Properties
            Carrier:        off

        - Device: wlan0 [JUSTICE] ---------------------------------------------
          Type:             802.11 WiFi
          Driver:           ath9k
          State:            connected
          Default:          yes
          HW Address:       1C:65:9D:65:C6:31
          Capabilities:
            Speed:          1 Mb/s
          Wireless Properties
            WEP Encryption:  yes
            WPA Encryption:  yes
            WPA2 Encryption: yes
          Wireless Access Points (* = current AP)
            HOME-9B18:               Infra, 00:26:F3:53:9B:18, Freq 2412 MHz, Rate 54 Mb/s, Strength 34 WPA WPA2
            cougdad48 Network:       Infra, 60:33:4B:E4:C4:5D, Freq 2437 MHz, Rate 54 Mb/s, Strength 22 WPA2
            cougdad48 Guest Network: Infra, 66:33:4B:E4:C4:5D, Freq 2437 MHz, Rate 54 Mb/s, Strength 20 WPA2
            belkin.ade:              Infra, 94:44:52:FF:8A:DE, Freq 2457 MHz, Rate 54 Mb/s, Strength 20 WPA WPA2
            *JUSTICE:                Infra, 00:24:01:7B:9F:7E, Freq 2462 MHz, Rate 54 Mb/s, Strength 88 WEP
            CenturyLink:             Infra, B2:B2:DC:8E:E2:58, Freq 2462 MHz, Rate 54 Mb/s, Strength 17 WPA WPA2
          IPv4 Settings:
            Address:        192.168.0.11
            Prefix:         24 (255.255.255.0)
            Gateway:        192.168.0.1
            DNS:            192.168.0.1

    (JUSTICE is my home's network.)

        $ ping -c 2 198.168.0.1
        PING 198.168.0.1 (198.168.0.1) 56(84) bytes of data.
        --- 198.168.0.1 ping statistics ---
        2 packets transmitted, 0 received, 100% packet loss, time 1007ms

  • The technology behind 22cans' Curiosity

    - by Cameron Scully
    I don't have a lot of experience with mobile apps and I definitely don't know much about MMOs, but I was wondering what the basic architecture of a game like that would be (understandably some don't consider it a game, but it must use some game theory and implementation). Mainly, how are they able to send/receive real-time feedback of the cube being chipped away by thousands of players on their mobile devices? How is the data of the cube's millions of pieces stored and accessed so quickly? Thanks

  • Do I need to uninstall lxde before installing kde-standard?

    - by A Roy
    I have Ubuntu 12.04 (upgraded from 10.04) and since I disliked the default desktop, I installed LXDE (sudo apt-get install lxde). This was good except that occasionally there would be trouble with Firefox (blinking on the panel), so that finally I had to close it, and then an error message from Ubuntu was issued. I had asked about it before but there was no useful response, so now I want to move to another desktop which will hopefully not create the problem I have now. My question is: should I first uninstall LXDE and then install KDE (sudo apt-get install kde-standard), or is it enough to install KDE without uninstalling LXDE? In case it is necessary to uninstall, should I use the command sudo apt-get remove lxde, or is there a better command for it? You may also help me with the choice of desktop. I installed LXDE since it is simple and lightweight. I am assuming that KDE will not be as simple, but hopefully it will not create problems like the above. But I hate it if it takes too long to log in or to launch a program like Firefox, and also there should not be icons fixed on the left part of the terminal (I hate to keep icons on the desktop since they are distracting). Some of these issues were present with the default Ubuntu 12.04. So is my choice of kde-standard appropriate, or are there better desktop alternatives?

  • Exchange 2007 Email Address Policies

    - by Ryan Migita
    We have recently upgraded to Exchange 2007 (from 2003) and have noticed the change from recipient policies to email address policies. We have two separate domains we receive email for (let's call them domaina.com and domainb.com); both have email address policies, and both policies are unapplied. In our Exchange 2003 environment, domaina.com was the default email address when we created new mailboxes; due to the migration, domainb.com is now the default (and its email address policy has a higher priority). Now, when we create a new mailbox (or edit existing ones), the primary email address becomes domainb.com. The question is: is this as simple as putting the email address policies in the correct order? Do I have to apply both policies? What effect will the above changes have on existing mailboxes? Since we do not have any conditions set on the policies, I assume that prior to making these changes I should force all domainb.com mailboxes to not automatically update email addresses based on policy? Thanks in advance!

  • Why do we need to put private members in headers?

    - by Simon
    Private variables are a way to hide complexity and implementation details from the user of a class. This is a rather nice feature. But I do not understand why in C++ we need to put them in the header of a class. I see some annoying downsides to this: it clutters the header for the user, and it forces recompilation of all client libraries whenever the internals are modified. Is there a conceptual reason behind this requirement? Is it only to ease the work of the compiler?
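
    (One conceptual reason: clients that declare the class by value need its size, so the compiler must see every member. As a minimal, hedged sketch of the common workaround, the pimpl idiom, which moves private members out of the header at the cost of one indirection - all names here are invented for illustration:)

        // widget.h -- no private members exposed; only a forward declaration
        #include <memory>

        class Widget {
        public:
            Widget();
            ~Widget();                     // defined in the .cpp, where Impl is complete
            int value() const;
        private:
            struct Impl;                   // forward-declared implementation type
            std::unique_ptr<Impl> pimpl_;  // clients never see Impl's layout
        };

        // widget.cpp -- internals can change without touching the header
        struct Widget::Impl {
            int value = 42;                // private state lives here now
        };

        Widget::Widget() : pimpl_(new Impl()) {}
        Widget::~Widget() = default;
        int Widget::value() const { return pimpl_->value; }

    (With this layout, changing Impl recompiles only widget.cpp, not every client that includes widget.h.)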

  • SNMPD timeout yet netcat shows port as open

    - by Kirill Gordeenko
    SNMPD config (I have this config working on a different server):

        com2sec readonly  default  public
        group   MyROGroup v1       readonly
        group   MyROGroup v2c      readonly
        group   MyROGroup usm      readonly
        view    all       included .1  80
        access  MyROGroup ""  any  noauth  exact  all  none  none
        syslocation <LOCATION>
        syscontact <CONTACT>

    When I check the port from a remote machine:

        » nc -zvu xx.xx.xx.xx 161
        Connection to xx.xx.xx.xx 161 port [udp/snmp] succeeded!

    This also works locally (I get all the right stats):

        snmpwalk -v 2c -c public localhost

    Yet when I try the same command locally or remotely with the external IP:

        Timeout: No Response from xx.xx.xx.xx

    IPTables are disabled on both machines. /etc/sysconfig/snmpd looks like this:

        OPTIONS="-Lsd -Lf /dev/null -p /var/run/snmpd.pid" -a

    and /etc/default/snmpd is empty.

  • Apt-Get "sources.list needs at least 1 source-URI" error when building dependencies for Xen

    - by Entity_Razer
    What I'm trying to do is install Xen in a test environment. I am trying to run the apt-get build-dep xen-3.3 command, but it keeps throwing an error which, literally translated from Dutch (I installed the Debian OS in Dutch), says:

        E: your sources.list (/etc/apt/sources.list) has to contain at least 1 source-URI

    I've googled it, but I can't seem to find a definitive, solid answer on how to fix this. By default a source-URI (read the man page of apt-get) needs to be something along the lines of:

        deb ftp://ftp.debian.org/debian stable contrib

    Now I've got 2 HTTP sources (the default Debian ones) up and running so far, and they've been working flawlessly for the better part of a few days now. Only now it's starting to act up. Anyone able to help me out? Much obliged!

  • Mock RequireJS define dependencies with config.map

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2014/08/18/mock-requirejs-define-dependencies-with-config.map.aspx

    I had a module dependency that I'm pulling down with RequireJS that I needed to use and write tests against. In this case, I don't care about the actual implementation of the module (it's simple enough that I'm just avoiding some AJAX calls). EDIT: make sure you look at the bottom example after the edit before using the config.map approach. I found that there is an easier way.

    I did not want to change the constructor of the consumer, as I had a chain of changes that would have to be made, and that would have been too invasive for this task. I found a question on StackOverflow with a short but helpful answer from "Artem Oboturov". We can use the config.map from RequireJS to achieve this. Here is some code:

    A module example ("usefulModule" in Common/Modules/usefulModule.js):

        define([], function() {
            "use strict";

            var testMethod = function() {
                // ...
            };

            // add more functionality of the module

            return {
                testMethod: testMethod
            };
        });

    A consumer of usefulModule example:

        define([
            "Common/Modules/usefulModule"
        ], function(usefulModule) {
            "use strict";

            var consumerModule = function() {
                var self = this;
                // add functionality of the module
            };
        });

    Using config.map in the html of the test runner page (and in your Karma config -- I'm still trying to figure this out; see the sketch at the end of this post):

        map: {
            '*': {
                // replace usefulModule with a mock
                'Common/Modules/usefulModule': '/Tests/Specs/Common/usefulModuleMock.js'
            }
        }

    With the new mapping, Require will load usefulModuleMock.js from Tests/Specs/Common instead of the real implementation. Some of the answers on StackOverflow mentioned Squire.js, which looked interesting, but I wasn't ready to introduce a new library at this time. That's all you need to be able to mock a dependency in RequireJS. However, there are many good cases when you should pass it in through the constructor instead of this approach.

    EDIT: After all that, here's another, probably better way:

    The consumer class, updated:

        define([
            "Common/Modules/usefulModule"
        ], function(UsefulModule) {
            "use strict";

            var consumerModule = function() {
                var self = this;
                self.usefulModule = new UsefulModule();
                // add functionality of the module
            };
        });

    Jasmine test:

        define([
            "consumerModule",
            "/UnitTests/Specs/Common/Mocks/usefulModuleMock.js"
        ], function(consumerModule, UsefulModuleMock) {
            describe("when mocking out the module", function() {
                it("should probably just override the property", function() {
                    var consumer = new consumerModule();
                    consumer.usefulModule = new UsefulModuleMock();
                });
            });
        });

    Thanks for letting me think out loud :-).
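
    (Since the post leaves the Karma side open, here is a minimal, hedged sketch of how the same mapping could be wired into a karma-requirejs test-main.js. The module paths are the post's; everything else follows the standard karma-requirejs template and is an assumption, not the author's code:)

        // test-main.js -- loaded by Karma before the tests run
        var allTestFiles = [];
        var TEST_REGEXP = /Spec\.js$/i;

        // collect every spec file Karma has served
        Object.keys(window.__karma__.files).forEach(function(file) {
            if (TEST_REGEXP.test(file)) {
                allTestFiles.push(file);
            }
        });

        require.config({
            // Karma serves files under /base
            baseUrl: '/base',

            // same swap as in the test-runner html: mock instead of real module
            map: {
                '*': {
                    'Common/Modules/usefulModule': 'Tests/Specs/Common/usefulModuleMock'
                }
            },

            // load the specs, then tell Karma to start
            deps: allTestFiles,
            callback: window.__karma__.start
        });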

  • Why is my root filesystem always scanned at boot?

    - by luri
    I always have a pause at boot saying my filesystems are being checked (with a "press C to cancel" note, too). Actually (seeing boot.log) I think it's the / fs, which is located at /dev/sdb5. Several questions altogether here (hope this does not break any rule):

    - Is this normal? Can I (or even should I) prevent this anyhow?
    - According to boot.log (below) the fs does not seem to be 'clean', or at least it's in a state or condition that makes fsck always scan it for errors for a while (just a few seconds). How can I fix it?

    Edit: This is my boot.log:

        fsck from util-linux-ng 2.17.2
        udevd[515]: can not read '/etc/udev/rules.d/z80_user.rules'
        /dev/sdb5: 249045/32841728 files (0.3% non-contiguous), 20488485/131338752 blocks
        init: ureadahead-other main process (1111) terminated with status 4
        init: ureadahead-other main process (1116) terminated with status 4
        Password:
         * Starting AppArmor profiles
           Skipping profile in /etc/apparmor.d/disable: usr.bin.firefox   [ OK ]
         * Setting sensors limits                                         [ OK ]

    And these are the dumpe2fs results for the filesystem being checked (well, the relevant part of the log):

        Filesystem volume name:   <none>
        Last mounted on:          /
        Filesystem UUID:          42509bf9-f3e6-460a-8947-ec0f5c1fbcc8
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    (none)
        Filesystem state:         clean
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              32841728
        Block count:              131338752
        Reserved block count:     6566937
        Free blocks:              110850356
        Free inodes:              32592701
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Reserved GDT blocks:      992
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         8192
        Inode blocks per group:   512
        Flex block group size:    16
        Filesystem created:       Fri Dec 10 19:44:15 2010
        Last mount time:          Mon Feb 14 17:00:02 2011
        Last write time:          Mon Feb 14 16:59:45 2011
        Mount count:              1
        Maximum mount count:      33
        Last checked:             Mon Feb 14 16:59:45 2011
        Check interval:           15552000 (6 months)
        Next check after:         Sat Aug 13 17:59:45 2011
        Lifetime writes:          331 GB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        First orphan inode:       28049496
        Default directory hash:   half_md4
        Directory Hash Seed:      d3d24459-514b-4413-b840-e970b766095b
        Journal backup:           inode blocks
        Journal features:         journal_incompat_revoke
        Journal size:             128M
        Journal length:           32768
        Journal sequence:         0x0005e0c4
        Journal start:            1

    This is the relevant (at least I think this is the fs being checked) line in fstab:

        # Entry for /dev/sdb5 :
        UUID=42509bf9-f3e6-460a-8947-ec0f5c1fbcc8  /  ext4  errors=remount-ro  0  1

  • XNA Moddable Game - Architecture Design and Reflection

    - by David K
    I've decided to embark on an XNA moddable game project of a simple rogue style. For all purposes of this question, I'm not going to be using a scripting engine, but rather allow modders to directly compile assemblies that are loaded by the game at run time. I know about the security problems this may raise.

    So in order to expose the moddable content, I have gone about creating a generic project in XNA called MyModel. This contains a number of interfaces that all inherit from IPlugin, such as IGameSystem, IRenderingSystem, IHud, IInputSystem, etc. Then I've created another project called MyRogueModel. This references the MyModel project, and holds interfaces such as IMonster, IPlayer, IDungeonGenerator, IInventorySystem - more rogue-specific interfaces, but again, all interfaces in this project inherit from IPlugin.

    Then finally, I've created another project called MyRogueGame, which references both the MyModel and MyRogueModel projects. This project will be the game that you run and play. Here I have put the actual implementation of the Monster, DungeonGenerator, InputSystem and RenderingSystem classes. This project will also scan the mods directory during run time and load any IPlugins it finds using reflection, overriding anything it finds from the default. For example, if it finds a new implementation of the DungeonGenerator it will use that one instead (a sketch of such a scan follows below).

    Now my question is: in order to get this far, I have effectively 2 projects that contain nothing but interfaces... which seems a little strange? For people to create mods for the game, I would give them both the MyModel and MyRogueModel assemblies to reference. I'm not sure whether this is the right way to do it, but my reasoning goes as follows: if I write 1 input system, I can use it in any game I write. If I create 3 rogue-like games, and a modder writes 1 rendering system, that modder could use the rendering system for all 3 games, because it all comes from the MyModel project.

    I come from a more web-based C# role, so having empty interface projects doesn't seem wrong; it's just something I haven't done before. Before I embark on something that might be crazy, I'd just like to know whether this is a foolish idea and whether there's a better (or established) design principle I should be following?
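
    (For reference, a minimal, hedged sketch of the reflection-based plugin scan described above. The IPlugin name comes from the post; the class and directory layout are assumptions, not the poster's code:)

        // Scans a mods directory and instantiates every concrete IPlugin found.
        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Reflection;

        public static class ModLoader
        {
            public static List<IPlugin> LoadPlugins(string modsDirectory)
            {
                var plugins = new List<IPlugin>();
                foreach (string dll in Directory.GetFiles(modsDirectory, "*.dll"))
                {
                    Assembly assembly = Assembly.LoadFrom(dll);
                    foreach (Type type in assembly.GetTypes())
                    {
                        // only concrete classes that implement IPlugin
                        if (typeof(IPlugin).IsAssignableFrom(type)
                            && !type.IsAbstract && !type.IsInterface)
                        {
                            plugins.Add((IPlugin)Activator.CreateInstance(type));
                        }
                    }
                }
                return plugins;
            }
        }

    (A loaded DungeonGenerator implementation, for example, could then replace the default one, keyed by the interface it implements.)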

  • Converting raw data type to enumerated type

    - by Jim Lahman
    There are times when an enumerated type is preferred over using the raw data type. An example is when we need to check the health of x-ray gauges in use on a production line. Rather than using a raw scheme like 0, 1 and 2, we can use an enumerated type:

        /// <summary>
        /// POR Healthy status indicator
        /// </summary>
        /// <remarks>The healthy status is for each POR x-ray gauge; each has its own status.</remarks>
        [Flags]
        public enum POR_HEALTH : short
        {
            /// <summary>
            /// POR1 healthy status indicator
            /// </summary>
            POR1 = 0,
            /// <summary>
            /// POR2 healthy status indicator
            /// </summary>
            POR2 = 1,
            /// <summary>
            /// Both POR1 and POR2 healthy status indicator
            /// </summary>
            BOTH = 2
        }

    By using the [Flags] attribute, we are treating the enumerated type as a bit mask. We can then use bitwise operations such as AND, OR, NOT, etc. Now, when we want to check the health of a specific gauge, we would rather use the name of the gauge than its numeric identity; it makes for better reading and programming practice.

    To translate the numeric identity to the enumerated value, we use the Parse method of the Enum class:

        POR_HEALTH GaugeHealth = (POR_HEALTH)Enum.Parse(typeof(POR_HEALTH), XrayMsg.Gauge_ID.ToString());

    The Parse method creates an instance of the enumerated type. Now, we can use the name of the gauge rather than the numeric identity:

        if (GaugeHealth == POR_HEALTH.POR1 || GaugeHealth == POR_HEALTH.BOTH)
        {
            XrayHealthyTag.Name = Properties.Settings.Default.POR1XRayHealthyTag;
        }
        else if (GaugeHealth == POR_HEALTH.POR2)
        {
            XrayHealthyTag.Name = Properties.Settings.Default.POR2XRayHealthyTag;
        }
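
    (One caveat worth noting: for the bitwise operations that [Flags] implies to actually work, each member needs a distinct power-of-two value - a zero-valued member can never be detected with AND. A hedged variant of the example, not from the original post, that would support real bitwise tests:)

        [Flags]
        public enum POR_HEALTH : short
        {
            NONE = 0,
            POR1 = 1,           // bit 0
            POR2 = 2,           // bit 1
            BOTH = POR1 | POR2  // both bits set
        }

        // Combining and testing flags with bitwise operators:
        POR_HEALTH health = POR_HEALTH.POR1 | POR_HEALTH.POR2;  // == BOTH
        bool por1Healthy = (health & POR_HEALTH.POR1) != 0;     // true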

  • Auditing database source code changes

    - by John Paul Cook
    Auditing changes to database source code can be easily implemented with a database trigger. Here's a simple implementation of stored procedure auditing using an audit table and a database trigger. It assumes that a schema named Audit already exists.

        CREATE TABLE Audit.AuditStoredProcedures (
            DatabaseName sysname,
            ObjectName   sysname,
            LoginName    sysname,
            ChangeDate   datetime,
            EventType    sysname,
            EventDataXml xml
        );

    Notice the EventDataXml column. Using an nvarchar column to store the source text...(read more)
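
    (The post is truncated before the trigger itself appears. Purely as a hedged sketch of what such a DDL trigger might look like - this is not the author's code, and every name beyond the table above is an assumption:)

        CREATE TRIGGER AuditStoredProcedureChanges
        ON DATABASE
        FOR CREATE_PROCEDURE, ALTER_PROCEDURE, DROP_PROCEDURE
        AS
        BEGIN
            -- EVENTDATA() returns the XML describing the DDL event
            DECLARE @EventData xml;
            SET @EventData = EVENTDATA();

            INSERT INTO Audit.AuditStoredProcedures
                (DatabaseName, ObjectName, LoginName, ChangeDate, EventType, EventDataXml)
            VALUES
                (DB_NAME(),
                 @EventData.value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(128)'),
                 ORIGINAL_LOGIN(),
                 GETDATE(),
                 @EventData.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(128)'),
                 @EventData);
        END;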

  • eAccelerator settings for PHP/CentOS/Apache

    - by bobbyh
    I have eAccelerator installed on a server running WordPress with PHP/Apache on CentOS. I am occasionally getting persistent "white pages", which presumably are PHP fatal errors (although these errors don't appear in my error_log). These "white pages" are sprinkled here and there throughout the site. They persist until I go to my eAccelerator control.php page and clear/clean/purge my caches, which suggests to me that I've configured eAccelerator improperly.

    Here are my current /etc/php.ini settings:

    - memory_limit = 128M
    - eaccelerator.shm_size = "64", where shm_size is "the amount of shared memory eAccelerator should allocate to cache PHP scripts" (see http://eaccelerator.net/wiki/Settings)
    - eaccelerator.shm_max = "0", where shm_max is "the maximum size a user can put in shared memory with functions like eaccelerator_put ... The default value is '0' which disables the limit"
    - eaccelerator.shm_ttl = "0" - "When eAccelerator doesn't have enough free shared memory to cache a new script it will remove all scripts from shared memory cache that haven't been accessed in at least shm_ttl seconds. By default this value is set to '0' which means that eAccelerator won't try to remove any old scripts from shared memory."
    - eaccelerator.shm_prune_period = "0" - "When eAccelerator doesn't have enough free shared memory to cache a script it tries to remove old scripts if the previous try was made more then 'shm_prune_period' seconds ago. Default value is '0' which means that eAccelerator won't try to remove any old script from shared memory."
    - eaccelerator.keys = "shm_only" - "These settings control the places eAccelerator may cache user content. ... 'shm_only' cache[s] data in shared memory"

    On my phpinfo page, it says:

    - memory_limit: 128M
    - Version: 0.9.5.3
    - Caching Enabled: true

    On my eAccelerator control.php page, it says:

    - 64 MB of total RAM available
    - Memory usage 77.70% (49.73MB / 64.00MB)
    - 27.6 MB is used by cached scripts in the PHP opcode cache (I added up the file sizes myself)
    - 22.1 MB is used by the cache keys, which are populated by the WordPress object cache

    My questions are:

    1. Is it true that there is only 36.4 MB of room in the eAccelerator cache for total "cache keys" (64 MB of total RAM minus whatever is taken by cached scripts, which is 27.6 MB at the moment)?
    2. What happens if my app tries to write more than 22.1 MB of cache keys to the eAccelerator memory cache? Does this cause eAccelerator to go crazy, like I've seen?
    3. If I change eaccelerator.shm_max to be equal to (say) 32 MB, would that avoid this problem?
    4. Do I also need to change shm_ttl and shm_prune_period to make eAccelerator respect the MB limit set by shm_max?

    Thanks! :-)

  • How does dependency injection increase coupling?

    - by B?????
    Reading the wiki page on dependency injection, the disadvantages section says this:

        Dependency injection increases coupling by requiring the user of a subsystem to provide for the needs of that subsystem.

    with a link to an article against DI.

    What DI does is make a class use an interface instead of the concrete implementation. That should mean decreased coupling, no? So, what am I missing? How is dependency injection increasing coupling between classes?
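
    (For concreteness, a minimal, hedged C# illustration of the trade-off being asked about - all names are invented: the consumer no longer depends on a concrete class, but whoever constructs it must now supply the dependency.)

        // The consumer depends only on an abstraction...
        public interface IMessageSink
        {
            void Send(string message);
        }

        public class Notifier
        {
            private readonly IMessageSink _sink;

            // ...but the caller is now responsible for providing one.
            public Notifier(IMessageSink sink)
            {
                _sink = sink;
            }

            public void Notify(string text) => _sink.Send(text);
        }

        public class EmailSink : IMessageSink
        {
            public void Send(string message) { /* send an email */ }
        }

        // Composition root: the "user of the subsystem" wires it up.
        // var notifier = new Notifier(new EmailSink());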

  • Partner Enablement Oracle University update (November)

    - by swalker
    Executive overview of Oracle Fusion Applications in 1 day from your desktop

    Designed from the ground up using the latest technology advances and incorporating the best practices gathered from Oracle's thousands of customers, Oracle Fusion Applications are 100% open-standards-based business applications that set a new standard for the way we innovate, work, and adopt technology. Learn more about them: Oracle University has scheduled a 1-day executive overview as a Live Virtual Class on the following dates:

    - 1 December
    - 2 December

    Your OPN discount applies to the standard price shown on the website. New In Class and Online dates will be shared on education.oracle.com. Book online or contact your local Oracle University representative for scheduling requests and more information.

    Two new intensive OPN Only Boot Camps

    The following OPN Only Boot Camps have just been made available:

    - 3-day intensive technical training, Oracle Exadata 11g: prepares you to become an Oracle Exadata 11g Certified Implementation Specialist. Currently scheduled in Germany and the United Kingdom; can be organized in any country. Live Virtual Class dates: 15-17 Feb. 2012 & 16-18 May 2012.
    - 5-day intensive training, Oracle BI Enterprise Edition 11g Implementation. Currently scheduled in Sweden; can be organized in any country.

    See the full OPN Only Boot Camp schedule.

    Certification news: Java SE 7

    Be among the first to earn the Java SE 7 certification. The following exams have recently become available for beta testing:

    - 1Z1-805 Upgrade to Java SE 7 Programmer (beta until 17 Dec. 2011) - Oracle Certified Professional, Java SE 7 Programmer
    - 1Z1-803 Java SE 7 Programmer I (beta until 17 Dec. 2011) - Oracle Certified Associate, Java SE 7 Programmer

    A beta exam gives you two distinct advantages: you will be among the first to earn the certification, and you benefit from a reduced fee. Beta exams can be taken at any Pearson VUE test center.

    New courses

    Among this month's Oracle University news, you will find: new courses - click here to learn more.

    Do your partners want the point of view of the Oracle University experts? Advise them to consult the Oracle University Technology Newsletters and Applications Newsletters.

    Stay connected to Oracle University: OracleMix, Twitter, LinkedIn, Facebook.

  • How to recover from a custom terminal setup

    - by linq
    I installed Ubuntu 12.04 and I added "Terminal" to the launcher bar. The default size of the terminal is a bit small; after some googling, here is how I made the disastrous change:

    - At the HUD menu, I typed "preference" and saw the option Edit > Preference, as I expected.
    - After I clicked the option (I forget the exact steps), I somehow came to a configuration panel where there is a checkbox for a terminal "custom" command, and I checked it.
    - Now an input box was enabled, and I typed: gnome-terminal --geometry 160x50

    Now the problem comes: whenever I click the terminal button on the launcher bar, new terminal windows pop up endlessly from everywhere, and I cannot do anything any more except log out or shut down. The weird thing is, after I come back, I cannot get that Edit > Preference any more when I type "preference" in the HUD; I tried "Edit" and "preference". To be clear, I can still use the computer as long as I do not click that terminal button, but I really want to use the terminal to run commands. Help is appreciated greatly!

    Update - OK, figured it out. Go to /home/jibin/.gconf/apps/gnome-terminal/profiles/Default and open the xml file there; I could see the options I changed in the GUI. Remove them and the terminal runs fine. The Preference HUD entry is back as well; it is actually Profile Preferences and is only available when a terminal is open. Continuing my ubuntu-ing...

  • JustCode – Color Identifier Basics

    Color identifiers make it easier for developers to quickly read and understand the code on screen. One of the many features provided by JustCode is the ability to colorize additional items that Visual Studio does not colorize by default. The colorization of items such as methods, properties, events, variables, and method parameters can easily be tweaked to your specific needs.

    Enable / Disable Color Identifiers

    In the recent release, we turned this option on by default. You can enable/disable code colorization by going to the JustCode menu in Visual Studio, selecting Options, then selecting the General section in the left window. When you scroll down in the General settings you will see a check box that says Color Identifiers. Check or uncheck this box to enable/disable code colorization.

    (Image: Color identifiers option)

    Adjust Color Identifiers

    Adjusting the color identifiers in JustCode is done in the same place ...

  • Oracle is Sponsoring LinuxCon Europe 2012

    - by Zeynep Koch
    Architecture is amazing in Barcelona, but you will also be impressed with the Oracle Linux sessions at LinuxCon Europe. Oracle is one of the key sponsors of LinuxCon Europe and we have great sessions to show you why Oracle Linux is best for your "IT architecture"! We also have a booth where you can pick up the latest Oracle Linux and Oracle VM DVD Kit and the Virtualization for Dummies booklet. Don't forget to visit us at technology showcase Booth #19.

    Oracle sessions at LinuxCon Europe 2012:

    1. OCFS2: Status and Overview - Lenz Grimmer, Oracle
       Wednesday November 7, 2012, 10:40am - 11:25am. Venue: Diamant
       OCFS2, Oracle's general-purpose shared-disk cluster file system for Linux, has come a long way since its development started in 2003. Distributed under the GPL and part of the mainline Linux kernel, it is also included in Oracle Linux and plays a vital role in products like Oracle VM, Oracle RAC or E-Business Suite. This presentation will provide a general technical overview as well as an update on the latest developments. Attendees will learn about the features and improvements that set OCFS2 apart from other Linux-based cluster file systems, including:
       - Heartbeat implementation: global vs. local heartbeats
       - Storage optimizations: extent-based allocations, hole punching, reflinks

    2. Status of Linux Tracing - Elena Zannoni, Oracle
       Wednesday November 7, 2012, 11:35am - 12:20pm. Venue: Diamant
       There have been many developments recently in the Linux tracing area. The tracing infrastructure in the kernel is getting more robust, with the recent introduction of uprobes to allow the implementation of user-space tracing, and new features of perf. There are many tracing tools to choose from, including the newest kid on the block, DTrace for Linux. This talk will take the audience through the main tracing facilities available today, whether more tightly integrated with the kernel code or maintained stand-alone.

    3. MySQL Security Model and Pluggable Authentication - Kristofer Pettersson, Oracle
       Wednesday November 7, 2012, 1:50pm - 2:35pm. Venue: Diamant
       With an increasing security awareness among web and cloud developers, knowing how to secure your database from unauthorized or malicious access has become important. This talk explains the MySQL security model, pluggable authentication, new auditing features and rounds off with some pointers on how to securely integrate your database into your Linux web stack.

    We look forward to seeing you in Barcelona, Spain on November 5-9, 2012. Register today!

  • Always use one slow connection in preference to a "faster" one

    - by billc.cn
    In Windows, there's this automatic metric feature where the route metric is selected according to the declared speed of the link. I now have a gigabit LAN routed to a 2MB DSL service, and an HSDPA mobile broadband connection. The former is always chosen for Internet packets even though the latter is actually faster. I tried setting the mobile broadband interface's metric to 1 and raising its priority in the advanced settings of the adapter settings, but this does not seem to affect the metric of the default route. The default route through the Ethernet interface always has a lower metric than the one through the mobile broadband interface. Am I missing something here?

  • How to use TFS as a query tracking system?

    - by deostroll
    We already use TFS for managing defects in code, etc. We additionally need a way to "understand the domain & requirements of the products". Normally, without TFS, we exchange emails with the consultants and have the questions/queries answered. If it is a feature implementation, we sometimes "find" conflicts in the implementation itself. And when that happens, the user story is modified and the corresponding enhancement/bug is raised in TFS. Sometimes it is critical that we can come back to decisions we made or questions we wanted answers to. Hence we need to be able to track how that "requirement idea" or that "query in concern" evolved. So how can we use TFS to track all of this? Do we raise an "issue" item for this? Or do we raise a "bug" item?

    The main things we'd ideally look for in a query tracking system are as follows:

    - Area: can be a module, submodule, or domain. Sometimes this may be "General" - to address domain-related stuff - or more granular, to address modules and sub-modules. Take the case of the latter: if we were tracking this in Excel sheets, we'd just write module1,submodule2, i.e. in a comma-separated fashion. The thing I would like here is to be able to search for all queries relating to submodule2 sometime in the future.
    - Responses: a record of conversations between the consultant and any other stakeholder. For a simple case, it would just be paragraphs. Each para would start with a name and date enclosed in brackets and the response following that; each para would be like a thread - much like a forum thread.
    - Action taken: we'd want to know how the query was closed, what input was given, what changes took place because of that, etc.

    These are the fields I think I would need in such a system, apart from some obvious ones like status, addressed to, resolved by, etc. I am open to any other fields which are sort of important.

    To summarise my question: how can we manage "queries" in the system? Where should we ideally store the data pertaining to those three fields I have mentioned above (e.g., is it wise to store responses in the history tag, assuming we are opening a bug for the query)?

  • MySQL keeps crashing the server OS. Please help adjust my.ini!

    - by TruMan1
    I have MySQL 5.0 installed on a Windows 2008 machine (3GB RAM). My server crashes on a regular basis (almost once a day) with this error:

        Changed limits: max_open_files: 2048  max_connections: 800  table_cache: 619

    I did not use the heavy InnoDB .ini file, although I am rethinking whether I should have. I am worried that big configuration changes will make my current sites stop working. What should I do?

    Here are my current ini settings:

        default-character-set=latin1
        default-storage-engine=INNODB
        max_connections=800
        query_cache_size=84M
        table_cache=1520
        tmp_table_size=30M
        thread_cache_size=38
        myisam_max_sort_file_size=100G
        myisam_sort_buffer_size=30M
        key_buffer_size=129M
        read_buffer_size=64K
        read_rnd_buffer_size=256K
        sort_buffer_size=256K
        innodb_additional_mem_pool_size=6M
        innodb_flush_log_at_trx_commit=1
        innodb_log_buffer_size=3M
        innodb_buffer_pool_size=250M
        innodb_log_file_size=50M
        innodb_thread_concurrency=10

  • CMT Blog: Virtual IO gets better for LDoms

    - by uwes
    As we all know, virtual IO is of great use in the IT environments of today, but when it comes to performance we often have to pay the price. In his blog entry Improved vDisk Performance for LDoms, Stefan Hinker explains how the new implementation of the vdisk/vds software stack in Solaris 11.1 SRU 19 (and a Solaris 10 patch shortly afterwards) significantly improves the latency and throughput of virtual disk IO.

  • How to configure Chrome to open magnet URLs with Deluge?

    - by michael_n
    After upgrading to Ubuntu 11.04 (Natty) from 10.10, I can no longer open magnet (torrent) links in Chromium and have Deluge automatically open and accept the URL. (Edit: currently ".torrent" files are not a problem; magnet URLs, e.g. of the form "magnet:?xt=urn:...", are now the only problem. Not sure if something updated...?) Rather, now only Transmission will automatically open torrents, magnet links, etc. There doesn't seem to be a way to set Deluge as the default torrent client. (And there also doesn't seem to be a "default application" setting for the bittorrent client to replace Transmission with Deluge.)

    Notes:

    - I found some old threads on this issue, and only one or two newer ones. The newer threads seem to suggest xdg-open is to blame. But not many people seem to be running into this problem, so... maybe it's just me?
    - I'm not using Firefox, so manually setting apps for mime-types or extensions doesn't work (that's not an option in Chrome/Chromium, afaik -- you have to rely on the OS).
    - I uninstalled Transmission, and then basically nothing happened when clicking on torrent/magnet links.
    - Running from the shell also opens Transmission (not Deluge): xdg-open "magnet:?xt=urn:bt..&tr=http://tracker.....com/announce"

    My current url handlers are:

        $ gconftool -a /desktop/gnome/url-handlers/magnet
        command = deluge "%s"
        needs_terminal = false
        enabled = true

    The only work-around I have (which does work) is to rename /usr/bin/transmission-gtk{,.bak} and create my own /usr/bin/transmission-gtk:

        $ cat /usr/bin/transmission-gtk
        #!/bin/bash
        deluge "$@"

    Anyone else run into this, or know of a bug or workaround?
