Search Results

Search found 197 results on 8 pages for 'annoyance'.

Page 6/8 | < Previous Page | 2 3 4 5 6 7 8  | Next Page >

  • Why did mislav-will_paginate start adding so much garbage to urls between rails 2.3.2 and 2.3.5?

    - by user30997
    I've used will_paginate in a number of projects now, but when I moved one of them to Rails 2.3.5, clicking on any of the pagination links (page number, next, prev, etc.) went from producing nice URLs like this: http://foo.com/user/1/date/2005_01_31/phone/555-6161 to this: http://foo.com/?options[]=user&options[]=date&options[]=2005_01_31&options[]=phone&options[]=555-6161 I have a route that looks like this, which is probably the source of the 'options' keyword: map.connect '/browse/*options', :controller=>'assets', :action=>'browse' It's enough of an annoyance that I'm willing to roll my own paginator to get around this if there isn't a way to get back to where I was before. Is there a way to get will_paginate to turn array-style routes into sane URLs again? Thanks.
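
    One direction worth sketching (not from the original question) is to declare the route with named segments instead of a glob, so that url_for (which will_paginate uses to build its links) has named parameters to regenerate a clean path from. A minimal sketch in Rails 2.x routing syntax, inside the routes.draw block; the segment names are assumptions based on the URLs above:

        # config/routes.rb -- sketch only; segment names are guesses
        map.connect '/user/:user_id/date/:date/phone/:phone',
                    :controller => 'assets',
                    :action     => 'browse'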

    Read the article

  • Why does a function that takes IEnumerable<interface> not accept IEnumerable<class>?

    - by Matt Whitfield
    Say, for instance, I have a class: public class MyFoo : IMyBar { ... } Then, I would want to use the following code: List<MyFoo> classList = new List<MyFoo>(); classList.Add(new MyFoo(1)); classList.Add(new MyFoo(2)); classList.Add(new MyFoo(3)); List<IMyBar> interfaceList = new List<IMyBar>(classList); But this produces the error: Argument '1': cannot convert from 'IEnumerable<MyFoo>' to 'IEnumerable<IMyBar>'. Why is this? Since MyFoo implements IMyBar, one would expect that an IEnumerable of MyFoo could be treated as an IEnumerable of IMyBar. A mundane real-world analogy would be producing a list of cars and then being told that it isn't a list of vehicles. It's only a minor annoyance, but if anyone can shed some light on this, I would be much obliged.
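
    For what it's worth, a commonly cited workaround is Enumerable.Cast<T>() (assuming .NET 3.5 with LINQ available; in C# 4.0, IEnumerable<T> became covariant and the original code compiles as-is). A minimal, self-contained sketch with stand-in types:

        using System.Collections.Generic;
        using System.Linq;

        public interface IMyBar { }
        public class MyFoo : IMyBar { public MyFoo(int n) { } }

        class Demo
        {
            static void Main()
            {
                var classList = new List<MyFoo> { new MyFoo(1), new MyFoo(2), new MyFoo(3) };

                // Cast<IMyBar>() yields an IEnumerable<IMyBar> view over the same
                // elements, which the List<IMyBar> constructor will accept.
                List<IMyBar> interfaceList = new List<IMyBar>(classList.Cast<IMyBar>());
            }
        }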

    Read the article

  • Need post-it notes that don't fall off the whiteboard after a week.

    - by jdv
    In my company, we plan our development work with scrum. We track progress using post-it stickies on a big whiteboard, and it works great. It is my understanding that this is fairly standard. We are just one location, so we don't need or want to do this electronically. But to our (and the Q/A rep's) annoyance, the sticky notes begin to fall off the whiteboard after a week or two, or even sooner if you stick them on top of each other. I've experimented with extra tape on the stickies. That helped, but it also ruins the whiteboard. So I am looking for a pragmatic and preferably low-cost alternative. Are some post-it brands better than others? Or do you have another solution for a scrum board that does not suffer from this?

    Read the article

  • Can't read query string if default index file name is omitted?

    - by Mike
    Is there an issue with IIS or ASP Classic where Request.ServerVariables("QUERY_STRING") returns blank if no default file name is given in the URL? On my local developer machine, I can do http://localhost/xslt/?opcs/abc which returns "opcs/abc". However, on our ancient web server, it returns nothing. I have to explicitly give it the default file name in the URL, like so: http://localhost/xslt/default.asp?opcs/abc While nothing too major, it is a bit of an annoyance. One way I can think of to remedy the problem is to have JavaScript read the URL and return everything after the ?. Unfortunately, I do not know what version of IIS or ASP we are using. Thank you.
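
    As a sketch of the JavaScript fallback mentioned above (client-side only, so it assumes the value is needed in the browser rather than in the ASP code itself):

        // Everything after the "?" in the current URL, e.g. "opcs/abc"
        var query = window.location.search.substring(1);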

    Read the article

  • Compile C++ program on Mac to run on Linux

    - by mav
    I have an application that I wrote in C++/SDL, using the FMOD library. The app is portable and compiles without any code change on Mac and on Linux. But one annoyance is that when I want to ship the Linux version, I have to run my Linux box, copy the source code over there (on a USB drive, because I have no network there, it's an old laptop), compile it, then copy it back over USB to my Mac and upload it. My question is - is there a better way of doing it? Ideally, could I compile the app to run on Linux directly from Xcode, where I compile it for Mac?
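
    For illustration only, the usual alternative to building on a separate Linux box is cross-compiling from the Mac. A sketch under heavy assumptions: a Linux cross-toolchain is installed (the toolchain prefix below is hypothetical), and Linux builds of SDL and FMOD live in a local sysroot.

        # Sketch only -- toolchain prefix, sysroot path and library names are assumptions.
        CROSS=x86_64-unknown-linux-gnu
        SYSROOT=$HOME/linux-sysroot

        ${CROSS}-g++ --sysroot="$SYSROOT" -o myapp-linux main.cpp \
            -I"$SYSROOT/usr/include/SDL" \
            -lSDL -lfmodex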

    Read the article

  • .NET values lookup

    - by Maciej
    Hi, I have a feeling I'm missing something obvious. This is a UDP receiver application. It holds a collection of valid UDP sender IPs - only senders with an IP on that list will be considered. Since that list must be checked on every packet and UDP traffic arrives so rapidly, that lookup must be as fast as possible. A Dictionary would be a good choice, but it is a key-value structure, and what I actually need here is a dictionary-like (hash-lookup) key-only structure. Is there something like that? It's a small annoyance rather than a bug, and I can still use a Dictionary, but still. Thanks, M.
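
    A minimal sketch of the key-only hash structure being described, assuming .NET 3.5 or later where HashSet<T> is available (the type and member names below are illustrative, not from the question):

        using System.Collections.Generic;
        using System.Net;

        class SenderAllowlist
        {
            // Hash-based set of allowed sender addresses; Contains() is O(1) on average.
            private readonly HashSet<IPAddress> allowed = new HashSet<IPAddress>
            {
                IPAddress.Parse("10.0.0.5"),
                IPAddress.Parse("10.0.0.6")
            };

            public bool IsAllowed(IPEndPoint sender)
            {
                return allowed.Contains(sender.Address);
            }
        }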

    Read the article

  • FreeRTOS, Eclipse IDE, and Syntax Errors

    - by MSumulong
    I have a slight annoyance when dealing with FreeRTOS code in Eclipse, and I'm not sure if it's just me or if other people have this issue too, but I see a lot of syntax errors highlighted in my code even though it compiles and executes fine. The syntax errors seem to be caused by FreeRTOS-specific code like: signed portCHAR *x; or vSemaphoreCreateBinary (semaphore); or signed portBASE_TYPE gpsTaskStart (void) { return xTaskCreate (vGPSTask, (const signed portCHAR * const) "GPS", configMINIMAL_STACK_SIZE, NULL, (tskIDLE_PRIORITY + 1), &taskHandles [TASKHANDLE_GPS]); } I was wondering if there was a way to configure Eclipse to parse this syntax properly.

    Read the article

  • Shortcuts and MSI updates

    - by Filip Navara
    We have an installer for an application that is compiled using WiX, and each version is updated using a new setup package. The installer creates an advertised shortcut in the Start menu, and users often copy this shortcut to the desktop or another location. During an application update a major upgrade is performed and the old shortcuts are removed, which causes the ones copied by users to disappear. This is a major annoyance to the users. Is there a way to update advertised shortcuts when doing an MSI major upgrade (i.e. a different product code)? Or is there a way to allow minor updates by just running the setup.msi file (without passing a REINSTALLMODE option on the command line)? Or is the only way to solve this problem to use non-advertised shortcuts?
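
    For reference, the non-advertised option mentioned at the end is usually authored with a registry value as the component KeyPath. A sketch only, in WiX 3 syntax, with hypothetical Ids, GUID and paths (a ProgramMenuFolder directory reference and a RemoveFolder element would normally accompany this):

        <Component Id="StartMenuShortcutComponent" Guid="PUT-GUID-HERE">
          <Shortcut Id="AppStartMenuShortcut"
                    Name="My Application"
                    Directory="ProgramMenuFolder"
                    Target="[INSTALLDIR]MyApp.exe"
                    WorkingDirectory="INSTALLDIR" />
          <RegistryValue Root="HKCU" Key="Software\MyCompany\MyApp"
                         Name="shortcutInstalled" Type="integer" Value="1" KeyPath="yes" />
        </Component>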

    Read the article

  • Django dictionary in templates: Grab key from another objects attribute

    - by Jordan Messina
    I have a dictionary called number_devices that I'm passing to a template; the dictionary keys are the ids of a list of objects I'm also passing to the template (called implementations). I'm iterating over the list of objects and then trying to use the object.id to get a value out of the dict, like so: {% for implementation in implementations %} {{ number_devices.implementation.id }} {% endfor %} Unfortunately, number_devices.implementation is evaluated first and then .id is looked up on the result, which obviously returns and displays nothing. I can't use parentheses like {{ number_devices.(implementation.id) }} because I get a parse error. How do I get around this annoyance in Django templates? Thanks for any help!
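
    The usual workaround (not part of the original question) is a small custom template filter, since the Django template language has no syntax for variable dictionary keys. A minimal sketch, assuming an app with a templatetags package and a hypothetical module name:

        # myapp/templatetags/dict_extras.py
        from django import template

        register = template.Library()

        @register.filter
        def get_item(dictionary, key):
            """Return dictionary[key], or None if the key is missing."""
            return dictionary.get(key)

    which would then be used in the template as {% load dict_extras %} ... {{ number_devices|get_item:implementation.id }}.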

    Read the article

  • Automatically "upgrade" user settings from previous version of app.config file?

    - by SqlRyan
    Every time I compile my app and the version number changes (I have an auto-incrementing build number), I lose the user-configured app.config settings, since they're stored in the AppData folder for a specific version. Essentially, every release of my application starts from scratch as far as user settings go. While this is a mild annoyance in development, it raises the question as I approach deployment/release - if I use the app.config to store my user settings, will the user's personalized settings be hosed every time they install a patch that changes the version number of my app? If so, is there an easy way to "upgrade" the settings from the previous release? I know that using HKCU in the registry is another option, but I like the ease of the My.Settings namespace, and I'd like to stay with app.config. Another SO question asks something similar, though the answer doesn't seem that clear. Will setting my MSI so it asks the user to upgrade be enough to preserve these user-level settings?
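
    For what it's worth, the pattern usually suggested for this (a sketch only, assuming a user-scoped Boolean setting named UpgradeRequired that defaults to True) is to call Upgrade() once after each version change, for example at application startup:

        ' Sketch only: UpgradeRequired is an assumed user-scoped Boolean setting
        ' with a default value of True.
        If My.Settings.UpgradeRequired Then
            My.Settings.Upgrade()            ' pull values from the previous version's user.config
            My.Settings.UpgradeRequired = False
            My.Settings.Save()
        End If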

    Read the article

  • Can Visual Studio exclude certain folders when searching for header files?

    - by identitycrisisuk
    I'm having trouble with a library that we are using, which has two copies of the header files that are needed - one which we are modifying and building from, and another which is automatically created during the build process. I don't fully know why, and I don't really want to change this, but it causes a bit of annoyance when, on random occasions, the Go To Definition function takes you to the auto-created header instead of the one used for the build. Usually you can spot it, but sometimes you don't, and you make changes to the auto-created one, which are then overwritten or sometimes stay around for a while, so that something works on your machine but breaks on other people's. I don't know if there is any way around this, as the auto-created folder is in the additional include directories of some of the projects in the solution, but I just thought I would ask if there was any good way of reducing the chance of this annoying situation cropping up.

    Read the article

  • How to disambiguate subdirs with the same name in the Projects list?

    - by jlstrecker
    My Qt project has 2 subdirs/subprojects with the same name. Their directories are myproject/node and myproject/compiler/test/node. The problem (or annoyance) is that, in the Projects list in Qt, both subdirs are listed as "node". So you have to open them up to figure out which is which. myproject.pro is like this:

        TEMPLATE = subdirs
        QMAKE_CLEAN = Makefile
        SUBDIRS += \
            compiler_test_node \
            node \
            ...
        compiler_test_node.subdir = compiler/test/node
        node.depends = compiler_vuo_compile
        ...

    Without renaming the myproject/compiler/test/node directory, is there a way to make it show up with a different name in the Projects list?

    Read the article

  • Eclipse and SVN: Missing .project file.

    - by DHC
    Hi, I'm working on a uni project with a few other people using SVN. Much to my annoyance the .project file was removed from the repository since "it contains platform specific information". However, this has obviously broken my setup in Eclipse, giving me the error: Problems occurred opening the selected resources. The project description file (.project) for '___' is missing. This file contains important information about the project. The project will not function properly until this file is restored. Any suggestions? Thank you.
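
    For reference, if recreating the file by hand turns out to be the quickest fix, a minimal .project is just a small XML document. A sketch only; the builder and nature entries are project-type-specific assumptions (the examples in the comments are for a Java project), and the name must match the project:

        <?xml version="1.0" encoding="UTF-8"?>
        <projectDescription>
            <name>myproject</name>
            <comment></comment>
            <projects></projects>
            <buildSpec>
                <!-- project-type-specific builders go here,
                     e.g. org.eclipse.jdt.core.javabuilder for a Java project -->
            </buildSpec>
            <natures>
                <!-- project-type-specific natures go here,
                     e.g. org.eclipse.jdt.core.javanature for a Java project -->
            </natures>
        </projectDescription>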

    Read the article

  • Common files in output directories in a C# program

    - by Net Citizen
    My VS2008 solution has the following setup:

        Program1
        Program2
        Common.dll (used and referenced by both Program1 and Program2)

    In debug mode I like to set my output directory to Program Files\Productname, because some code will get the exe path for various reasons. My problem is that Program1, when compiled, will give an error that it could not copy Common.dll if Program2 is running. And vice versa. The annoyance here is that I don't even make changes to Common.dll that often, but 100% of the time it will try to copy it, not only when there are changes. I end up having to close all the programs, then build, and then start them again. So my question is, how can I have VS2008 copy Common.dll only if there are changes inside the Common.dll project?

    Read the article

  • Avoiding Flicker with JQuery Tabs

    - by Damon
    I am a huge fan of jQuery because it seems like every time I want to do something it has a plugin that already does it.  Adding a tabbed interface to a web page was always quite an annoyance, but jQuery UI offers a pretty decent tabs solution (click here to see it).  If you read through the documentation, you'll find that you can create a tabbed interface by calling the tabs() method on an element containing an unordered list.  The only problem that I've experienced with the method is that on slower machines you can see the unordered list render out in its original state before being updated into the final tabbed interface.  A quick way to fix that issue is to set the CSS display property of the element to none, then call the show() method directly after calling the tabs() method.  This keeps the element completely hidden while jQuery sets up the tabs interface and eliminates the flicker.

        <SCRIPT type="text/javascript">
             $(function()
             {
                  $("#tabs").tabs();
                  $("#tabs").show();
             });
        </SCRIPT>

        <div id="tabs" style="display:none;">
            <ul>
                <li><a href="#tabs-1">First Tab</a></li>
                <li><a href="#tabs-2">Second Tab</a></li>
                <li><a href="#tabs-3">Third Tab</a></li>
            </ul>
            ...
        </div>

    Read the article

  • Inverted schedctl usage in the JVM

    - by Dave
    The schedctl facility in Solaris allows a thread to request that the kernel defer involuntary preemption for a brief period. The mechanism is strictly advisory - the kernel can opt to ignore the request. Schedctl is typically used to bracket lock critical sections. That, in turn, can avoid convoying -- threads piling up on a critical section behind a preempted lock-holder -- and other lock-related performance pathologies. If you're interested, see the man pages for schedctl_start() and schedctl_stop() and the schedctl.h include file. The implementation is very efficient. schedctl_start(), which asks that preemption be deferred, simply stores into a thread-specific structure -- the schedctl block -- that the kernel maps into user-space. Similarly, schedctl_stop() clears the flag set by schedctl_start() and then checks a "preemption pending" flag in the block. Normally this will be false, but if it is set, schedctl_stop() will yield to politely grant the CPU to other threads. Note that you can't abuse this facility for long-term preemption avoidance as the deferral is brief. If your thread exceeds the grace period the kernel will preempt it and transiently degrade its effective scheduling priority. Further reading: US05937187 and various papers by Andy Tucker.

    We'll now switch topics to the implementation of the "synchronized" locking construct in the HotSpot JVM. If a lock is contended then on multiprocessor systems we'll spin briefly to try to avoid context switching. Context switching is wasted work and inflicts various cache and TLB penalties on the threads involved. If context switching were "free" then we'd never spin to avoid switching, but that's not the case. We use an adaptive spin-then-park strategy. One potentially undesirable outcome is that we can be preempted while spinning. When our spinning thread is finally rescheduled the lock may or may not be available. If not, we'll spin and then potentially park (block) again, thus suffering a 2nd context switch. Recall that the reason we spin is to avoid context switching.

    To avoid this scenario I've found it useful to enable schedctl to request deferral while spinning. But while spinning I've arranged for the code to periodically check or poll the "preemption pending" flag. If that's found set we simply abandon our spinning attempt and park immediately. This avoids the double context-switch scenario above. One annoyance is that the schedctl blocks for the threads in a given process are tightly packed on special pages mapped from kernel space into user-land. As such, writes to the schedctl blocks can cause false sharing on other adjacent blocks. Hopefully the kernel folks will make changes to avoid this by padding and aligning the blocks to ensure that one cache line underlies at most one schedctl block at any one time.
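
    To make the usage pattern concrete, here is a minimal sketch of how schedctl typically brackets a critical section on Solaris. The lock type and the acquire/release helpers are hypothetical stand-ins, not part of schedctl or HotSpot:

        #include <schedctl.h>

        static __thread schedctl_t *sc;   /* one schedctl block per thread; needs compiler TLS support */

        void thread_init(void)
        {
            sc = schedctl_init();         /* map this thread's schedctl block */
        }

        void locked_region(lock_t *m)     /* lock_t, acquire(), release() are hypothetical */
        {
            schedctl_start(sc);           /* advisory: please defer preemption */
            acquire(m);
            /* ... short critical section ... */
            release(m);
            schedctl_stop(sc);            /* re-enable preemption; may yield if one is pending */
        }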

    Read the article

  • Reporting on common code smells : A POC

    - by Dave Ballantyne
    Over the past few blog entries, I've been looking at parsing TSQL scripts in a variety of ways for a variety of tasks.  In my last entry, 'How to prevent 'Select *' : The elegant way', I looked at parsing SQL to report upon uses of SELECT *.  The obvious question leading on from this is, "Great, what about other code smells?"  Well, using the language service parser to do that was turning out to be a bit of a hard job; sure, I was getting tokens, but no real context. I wasn't even being told when an end of statement had been reached.

    One of the other parsing options available from Microsoft is exposed in the assembly 'Microsoft.SqlServer.TransactSql.ScriptDom', which is, I believe, installed with the client development tools with SQL Server. It is much more feature-rich than the original parser I had used and breaks a TSQL script into intuitive classes for analysis.

    So, what sort of smells can I now find using it?  Well, for an opening gambit, quite a nice little list:

        • Use of NOLOCK
        • Set of READ UNCOMMITTED
        • Use of SELECT *
        • Insert without column references
        • Explicit datatype conversion on Sargs
        • Cross server selects
        • Non use of two-part naming convention
        • Table and Query hint usage
        • Changes in set options
        • Use of single line comments
        • Use of ordinal column positions in ORDER BY clause

    Now, let's not argue the point that "It depends" applies to some of these as smells, but as an academic exercise it is quite interesting.  The code is available from this link: https://www.dropbox.com/s/rfk32sou4fzl2cw/TSQLDomTest.zip  All the usual disclaimers apply to this code; I cannot be held responsible for anything ranging from mild annoyance through to universe destruction due to the use of this code or examples.

    The zip file contains a powershell script and my test cases.  The assembly used requires .Net 4 to run, which means that you will need PowerShell 3 (though I'm running through PowerGUI and all works OK).  The code searches for all .sql files in the folder hierarchy of the working path (you can override this if you want by simply changing the $Folder variable) and processes each in turn for the smells.  Feedback is not great at the moment; all it does is output to an xml file (Smells.xml) the offset position and a description of the smell found.

    Right now, I am interested in your feedback.  What do you think?  Is this (or should it be) more than an academic exercise?  Can tooling such as this be used as some form of code quality measure?  Does it work?  Do you have a case listed above which is not being reported?  Do you have a case that you would love to be reported?  Let me know, please: mailto: [email protected]. Thanks
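
    To give a flavour of the ScriptDom approach described above (an illustrative sketch, not the script from the zip file, and written in C# rather than PowerShell for brevity), a visitor that reports SELECT * usages might look something like this; the parser class version depends on which ScriptDom assembly is installed:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using Microsoft.SqlServer.TransactSql.ScriptDom;

        // Reports each SELECT * via the ScriptDom visitor pattern.
        class SelectStarVisitor : TSqlFragmentVisitor
        {
            public override void Visit(SelectStarExpression node)
            {
                Console.WriteLine("SELECT * at line {0}, column {1}",
                                  node.StartLine, node.StartColumn);
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                var parser = new TSql110Parser(true);   // initial QUOTED_IDENTIFIER setting
                IList<ParseError> errors;
                using (var reader = new StreamReader(args[0]))
                {
                    TSqlFragment fragment = parser.Parse(reader, out errors);
                    fragment.Accept(new SelectStarVisitor());
                }
            }
        }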

    Read the article

  • Automatically change resolution when not in dock

    - by jwir3
    I have Ubuntu 11.04 (yep, I know it's old news) on my Lenovo W520. At home, I have a dock with dual monitors. I have a pretty decent setup - things work almost perfectly (hence the reason I'm reluctant to upgrade... that and I'm not 100% sold on Unity). Anyway, the only annoyance I have is that when I'm traveling, I use the laptop screen. When I un-dock the laptop, I need to manually go into nvidia x-server settings and change the resolution from 'Auto' to 1920x1200, or it will think I have two screens, and my mouse pointer will be able to go way off the left side of the screen. This isn't a big deal, but I need to do it every time I restart the x-server (so if I reboot, or have to kill it, etc.). What would be really nice is if there was a way for it to automatically detect whether or not there are external monitors (which it seems to do already), and switch into the mode I select, depending on which monitors are connected. Is there any way to accomplish this? I've posted my xorg.conf file for reference.

        # nvidia-settings: X configuration file generated by nvidia-settings
        # nvidia-settings: version 270.29 (buildd@allspice) Fri Feb 25 14:42:07 UTC 2011
        # nvidia-xconfig: X configuration file generated by nvidia-xconfig
        # nvidia-xconfig: version 275.19 ([email protected]) Tue Jul 12 18:35:38 PDT 2011

        #Section "Monitor"
        #    Identifier  "Monitor1"
        #    VendorName  "Lenovo"
        #    ModelName   "ThinkpadLCD"
        #    #HorizSync   28.0 - 33.0
        #    #VertRefresh 43.0 - 72.0
        #    #Option      "DPMS"
        #EndSection

        Section "ServerLayout"
            Identifier     "Layout0"
            Screen      0  "Screen0" 0 0
            InputDevice    "Keyboard0" "CoreKeyboard"
            InputDevice    "Mouse0" "CorePointer"
            Option         "Xinerama" "0"
        EndSection

        Section "InputDevice"
            Identifier     "Mouse0"
            Driver         "mouse"
            Option         "Protocol" "auto"
            Option         "Device" "/dev/psaux"
            Option         "Emulate3Buttons" "no"
            Option         "ZAxisMapping" "4 5"
        EndSection

        Section "InputDevice"
            # generated from default
            Identifier     "Keyboard0"
            Driver         "kbd"
        EndSection

        Section "Monitor"
            Identifier     "Monitor0"
            VendorName     "Unknown"
            ModelName      "DELL U2410"
            HorizSync       30.0 - 81.0
            VertRefresh     56.0 - 76.0
            Option         "DPMS"
        EndSection

        Section "Device"
            Identifier     "Device0"
            Driver         "nvidia"
            VendorName     "NVIDIA Corporation"
            BoardName      "Quadro 1000M"
            Option         "RegistryDwords" "EnableBrightnessControl=1"
        EndSection

        Section "Screen"
            # Removed Option "metamodes" "DFP-0: nvidia-auto-select +0+120, DFP-6: nvidia-auto-select +1920+0"
            # Removed Option "metamodes" "DFP-0: nvidia-auto-select +0+120, DFP-5: nvidia-auto-select +1920+0"
            # Removed Option "metamodes" "DFP-0: nvidia-auto-select +1920+419, DFP-5: nvidia-auto-select +3840+0, DFP-6: nvidia-auto-select +0+0"
            # Removed Option "metamodes" "DFP-5: nvidia-auto-select +0+0, DFP-6: 1920x1200 +1920+0"
            Identifier     "Screen0"
            Device         "Device0"
            Monitor        "Monitor0"
            DefaultDepth    24
            Option         "NoLogo" "True"
            Option         "TwinViewXineramaInfoOrder" "DFP-0"
            Option         "TwinView" "1"
            Option         "metamodes" "DFP-5: nvidia-auto-select +1920+0, DFP-6: 1920x1200 +0+0"
            SubSection     "Display"
                Depth       24
            EndSubSection
        EndSection

    Read the article

  • smtp.gmail.com from bash gives "Error in certificate: Peer's certificate issuer is not recognized."

    - by ndasusers
    I needed my script to email the admin if there is a problem, and the company only uses Gmail. Following a few posts' instructions I was able to set up mailx using a .mailrc file. There was first an error about nss-config-dir; I solved that by copying some .db files from a Firefox directory to ./certs and pointing to it in .mailrc. A mail was sent. However, the error above came up. By some miracle, there was a Google certificate in the .db. It showed up with this command:

        ~]$ certutil -L -d certs

        Certificate Nickname                                Trust Attributes
                                                            SSL,S/MIME,JAR/XPI
        GeoTrust SSL CA                                     ,,
        VeriSign Class 3 Secure Server CA - G3              ,,
        Microsoft Internet Authority                        ,,
        VeriSign Class 3 Extended Validation SSL CA         ,,
        Akamai Subordinate CA 3                             ,,
        MSIT Machine Auth CA 2                              ,,
        Google Internet Authority                           ,,

    Most likely, it can be ignored, because the mail worked anyway. Finally, after pulling some hair and much googling, I found out how to rid myself of the annoyance. First, export the existing certificate to an ASCII file:

        ~]$ certutil -L -n 'Google Internet Authority' -d certs -a > google.cert.asc

    Now re-import that file, and mark it as trusted for SSL certificates, like so:

        ~]$ certutil -A -t "C,," -n 'Google Internet Authority' -d certs -i google.cert.asc

    After this, listing shows it trusted:

        ~]$ certutil -L -d certs

        Certificate Nickname                                Trust Attributes
                                                            SSL,S/MIME,JAR/XPI
        ...
        Google Internet Authority                           C,,

    And mailx sends out with no hitch:

        ~]$ /bin/mailx -A gmail -s "Whadda ya no" [email protected]
        ho ho ho
        EOT
        ~]$

    I hope it is helpful to someone looking to be done with the error. Also, I am curious about something. How could I get this certificate if it were not in the Mozilla database by chance? Is there, for instance, something like this?

        ~]$ certutil -A -t "C,," \
              -n 'gmail.com' \
              -d certs \
              -i 'http://google.com/cert/this...'
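
    On that last question: certutil expects a local file for -i rather than a URL, but one possible approach (a sketch only; the host, port, nickname and file handling are assumptions) is to pull the chain over TLS with openssl and then import the issuer certificate into the NSS database:

        # Grab the presented certificate chain from the SMTP-over-SSL port.
        # The chain may need to be split into individual PEM blocks before import.
        echo -n \
          | openssl s_client -connect smtp.gmail.com:465 -showcerts 2>/dev/null \
          | sed -n '/BEGIN CERTIFICATE/,/END CERTIFICATE/p' > chain.pem

        certutil -A -t "C,," -n 'Google Internet Authority' -d certs -i chain.pem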

    Read the article

  • Networking "chokes" on Windows 7 64 bit

    - by Rohit Nair
    I've been having this problem for some months now, and I have been unable to figure out a solution, or even the cause. At random points throughout the day, my internet connectivity "freezes". I don't get disconnected from my local wireless network. My router doesn't get disconnected from the world. However, for some reason, my computer stops receiving packets. If I'm playing an MMO (World of Warcraft, in this case, but it has happened with Eve Online as well), all activity just freezes. If I try to browse, Opera, Firefox and IE all stall at "Waiting for google.com..." or whatever the hostname may be. Inspection with a packet sniffer seems to reveal that there are no incoming packets. Here's the interesting part. Disconnecting from my wireless network and reconnecting fixes the issue. Obviously this led me to conclude that it was a problem with my router or wireless card. However, I have tweaked all the settings on my router that I could think of, including things like QoS, AP Isolation, etc., with no change. My wireless card doesn't really have that many options, and I have uninstalled and reinstalled drivers a few times without any change. Windows Firewall on/off doesn't make a difference. Anyone have any suggestions for debugging this? It's becoming an annoyance.

    Read the article

  • Windows 7: Image thumbnails fail to appear

    - by Fopedush
    All right, I've got a pretty strange one here. Since I installed Windows 7 on this machine some time ago, image thumbnails have never worked properly. For the vast majority of images, they completely fail to show up, showing the icon of the default image viewing application instead. Please note that the “Always show icons, never thumbnails” option in folder options is not checked. I've taken a screenshot demonstrating the problem, here: Sometimes, a few image thumbnails will show up correctly, maybe about one in ten, with the rest failing as well. Another screenshot, with a handful of thumbnails visible, can be seen here: Windows explorer does not appear to make any effort to populate the missing thumbnails, and there is no appreciable CPU usage. I can leave a window with missing thumbnails open all night and they will still never appear. Newly created images never generate thumbnails, only images that have been on the system since day one will occasionally show them. This leads me to believe that explorer is set to show thumbnails, but whatever process is supposed to be in charge of actually generating them has failed somehow. In previous versions of windows, explorer.exe itself was responsible for thumbnail generation – has this changed? Any suggestions at all – even if you aren't sure that they will work – are welcome. I'd hate to have to wipe and reinstall on this machine for such a minor annoyance.

    Read the article

  • Why does just splitting an Ethernet cable not work?

    - by Sin Jeong-hun
    I thought Ethernet is logically a one-line communication bus (for argument's sake, I am excluding hubs). All machines attached to the bus hear the same signals, and the machines themselves try to avoid collisions by randomly backing off. http://computer.howstuffworks.com/ethernet6.htm If so, why would splitting one Ethernet line from my home router into two and connecting two computers not work? Why do I have to add a switch to it?

    What the Internet said would not work:

        [4 port home router] ------[one Ethernet cable]-----[simple splitter]======[two computers]

    What the Internet said I should do:

        [4 port home router] ------[one Ethernet cable]-----[switch]======[two computers]

    Is this because of signal degradation (reduced electric current)? Thank you for all the answers! The reason why I did not just use the two ports of my home router is... The 4-port gigabit router is in my room, and I had put a computer in another room (also my room, though). Since a wired network is far more reliable and secure, I had bought a long Ethernet cable and connected the computer to the router. Now I was thinking about adding another computer to that room. I could buy another long Ethernet cable, but then there would be two cables between the rooms. The one line is already a minor annoyance, so I wondered if I could share the one line between the two computers in that room. A switch would work, but it requires power and is a little bit pricey. That is why I wondered why it would not work to simply split the physical Ethernet cable. Apparently I do not completely understand how Ethernet and a switch work. I just have some bit of knowledge I heard in my college class.

    Read the article

  • Login with Enterprise Principal Name using sssd AD backend in Ubuntu 14.04 LTS

    - by Vinícius Ferrão
    I'm running sssd version 1.11 with the AD backend in Ubuntu 14.04 LTS (1.11.5-1ubuntu3) to authenticate users from Active Directory running on Windows Server 2012 R2, and I'm trying to achieve logins with the User Principal Name for all users of the domain. But the UPNs are always Enterprise Principal Names. Let me illustrate the problem with my user account:

        Domain: local.example.com
        sAMAccountName: ferrao
        UPN: [email protected] (there's no local in the UPN)

    I can successfully log in with the sAMAccountName attribute, which is fine, but I can't log in with [email protected], which is my UPN. The optimum solution for me is to allow logins with both the sAMAccountName and the UPN (User Principal Name). If that's not possible, the UPN should be the right way instead of the sAMAccountName.

    Another annoyance is the homedir pattern with these options in sssd.conf:

        default_shell = /bin/bash
        fallback_homedir = /home/%d/%u

    What I would like to achieve is home directories separated according to the EPN. For example:

        /home/example.com/user
        /home/whatever.example.com/user

    But with this pattern I can't map them the way I would like. I've looked through the man pages and was unable to find any answers for these issues. Thanks,

    Read the article

  • Detecting login credentials abuse

    Greetings. I am the webmaster for a small, growing industrial association. Soon, I will have to implement a restricted, members-only section for the website. The problem is that our organization's membership includes big companies as well as amateur “clubs” (it's a relatively new industry…). It is clear that those clubs will share the login ID they use to log onto our website. The problem is to detect whether one of their members shares the login credentials with people who are not normally supposed to be accessing the website (there is no objection to such a club having all its members get on the website). I have thought about logging, along with each sign-on, the IP address as well as the OS and the browser used; if the OS/browser stays constant and there are no more than, say, 10 different IP addresses, the account is clearly used by very few different computers. But if there are 50 OS/browser combinations and 150 different IPs, the credentials have obviously been disseminated far, and there would then be cause for action, such as changing the password. Of course, it is extremely annoying when your password is unilaterally changed. So, for this problem, I thought about allowing the “clubs” to manage their own list of sub-accounts, so that if abuse is suspected, the user responsible can easily be pinned down, and this “sub-member” alone would face the annoyance of a password change. Question: What potential problems would anyone see with such an approach?
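
    For illustration, the heuristic described above (many distinct IPs or OS/browser combinations per account) is straightforward to compute from a login log. A sketch in Python; the event fields and thresholds are assumptions:

        from collections import defaultdict

        def flag_suspicious(login_events, max_ips=10, max_agents=10):
            """Return the accounts whose logins came from suspiciously many
            distinct IP addresses or user-agent strings."""
            ips = defaultdict(set)
            agents = defaultdict(set)
            for event in login_events:   # each event: {"account", "ip", "user_agent"}
                ips[event["account"]].add(event["ip"])
                agents[event["account"]].add(event["user_agent"])
            return [acct for acct in ips
                    if len(ips[acct]) > max_ips or len(agents[acct]) > max_agents]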

    Read the article

  • Monospace font which supports at least both of Korean hangul and the Georgian alphabet?

    - by hippietrail
    Being both a language enthusiast and a programmer, I often find myself doing programming or text processing involving foreign-language alphabets and scripts. One annoyance, however, is that CJK fonts (those which support Chinese, Japanese, and/or Korean) usually contain, beyond the CJK ranges, glyphs for Latin, Greek, and Cyrillic at best. Often the Asian glyphs will be beautiful but the other glyphs can be quite ugly. Just as often, in text editors you can only choose a single font, not one for CJKV and another for everything else, each used for rendering the appropriate characters. Korean is one of the languages I'm most interested in currently. I only need hangul / hangeul for monospaced editing; hanja isn't common enough to be a problem. Another of the languages I'm currently involved in is Georgian, whose alphabet is a little exotic but has pretty good support in common fonts on Windows and *nix. But I am as yet unable to find a font with both good Korean glyphs and good Georgian glyphs. My editor of choice is gVim, so an answer telling me how to set it to use two fonts together would be just as good. Currently I'm using it mostly under Windows 7, so a vim-specific solution would be needed rather than a *nix-specific solution.
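
    On the "two fonts together" option: gVim has a 'guifontwide' setting that is used for double-width characters (hangul and other CJK) alongside the normal 'guifont'. A sketch for _vimrc on Windows; the font names are examples rather than recommendations, and 'encoding' is assumed to be utf-8:

        " 'guifont' covers normal-width glyphs (Latin, Georgian, ...),
        " 'guifontwide' covers double-width glyphs (hangul, CJK).
        set encoding=utf-8
        set guifont=DejaVu\ Sans\ Mono:h11
        set guifontwide=GulimChe:h11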

    Read the article

< Previous Page | 2 3 4 5 6 7 8  | Next Page >