Search Results

Search found 38522 results on 1541 pages for 'single source'.


  • LDAP encrypt attribute that extends userpassword

    - by Foezjie
    In my current LDAP schema I have an objectClass (let's call it group) with two attributes that extend userPassword, like this:

        attributeType ( groupAttributes:12 NAME 'groupPassword1' SUP userPassword SINGLE-VALUE )
        attributeType ( groupAttributes:13 NAME 'groupPassword2' SUP userPassword SINGLE-VALUE )

    group extends organisation, so it already has a userPassword attribute. If I use that attribute when entering a new group through PHPLDAPAdmin, the password I type is hashed with SSHA (the default). But the passwords I enter for groupPassword1 and groupPassword2 don't get hashed. Is there a way to make those attributes get hashed too?
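
    The behaviour described suggests PHPLDAPAdmin only auto-hashes the userPassword attribute itself. One workaround (my suggestion, not from the question) is to hash the values client-side before writing them, producing the same RFC 2307-style {SSHA} string PHPLDAPAdmin would have produced for userPassword. A minimal Python sketch:

        import base64, hashlib, os

        def ssha(password):
            """Build an RFC 2307-style {SSHA} value: base64(SHA1(password + salt) + salt)."""
            salt = os.urandom(4)
            digest = hashlib.sha1(password.encode("utf-8") + salt).digest()
            return "{SSHA}" + base64.b64encode(digest + salt).decode("ascii")

        # store the result in groupPassword1/groupPassword2 instead of the plaintext
        print(ssha("secret"))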

    Read the article

  • MySQL slow query log logging all queries

    - by Blanka
    We have a MySQL 5.1.52 Percona Server 11.6 instance that suddenly started logging every single query to the slow query log. The long_query_time configuration is set to 1, yet suddenly we're seeing every single query (e.g. just saw one that took 0.000563s!). As a result, our log files are growing at an insane pace. We just had to truncate a 180G slow query log file. I tried setting the long_query_time variable to a really large number (1000000) to see if it stopped altogether, but same result.

        show global variables like 'general_log%';
        +------------------+--------------------------+
        | Variable_name    | Value                    |
        +------------------+--------------------------+
        | general_log      | OFF                      |
        | general_log_file | /usr2/mysql/data/db4.log |
        +------------------+--------------------------+
        2 rows in set (0.00 sec)

        show global variables like 'slow_query_log%';
        +---------------------------------------+-------------------------------+
        | Variable_name                         | Value                         |
        +---------------------------------------+-------------------------------+
        | slow_query_log                        | ON                            |
        | slow_query_log_file                   | /usr2/mysql/data/db4-slow.log |
        | slow_query_log_microseconds_timestamp | OFF                           |
        +---------------------------------------+-------------------------------+
        3 rows in set (0.00 sec)

        show global variables like 'long%';
        +-----------------+----------+
        | Variable_name   | Value    |
        +-----------------+----------+
        | long_query_time | 1.000000 |
        +-----------------+----------+
        1 row in set (0.00 sec)
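
    When long_query_time is ignored like this, other server settings can force statements into the slow log; log_queries_not_using_indexes is the classic culprit, and Percona's slow-log patch (the slow_query_log_microseconds_timestamp variable above is part of it) adds knobs of its own. A quick diagnostic sketch, assuming the pymysql package and valid local credentials; the Percona variable names are the usual suspects and worth verifying against this particular build:

        import pymysql

        conn = pymysql.connect(host="localhost", user="root", password="...", db="mysql")
        with conn.cursor() as cur:
            # any of these can override long_query_time's filtering
            for name in ("log_queries_not_using_indexes", "log_slow_filter",
                         "log_slow_rate_limit", "log_slow_verbosity"):
                cur.execute("SHOW GLOBAL VARIABLES LIKE %s", (name,))
                print(cur.fetchall())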

    Read the article

  • Event handler generation in Visual Studio 2012

    - by Jalpesh P. Vadgama
    This post is part of my Visual Studio 2012 feature series. There are lots of new features in Visual Studio 2012, and event handler generation is one of them. In earlier versions of Visual Studio there was no way to create an event handler directly from source view. Visual Studio 2012 now has event handler generation built in: if you are editing an event attribute in source view, IntelliSense will display a "create new event handler" template, and once you click it, a new event handler is created in the .cs file. It also fills in the handler name against the event attribute, so you don't need to write that yourself. Let's take the simple example of a button click event: once I write the OnClick attribute, IntelliSense pops up. Clicking <Create New Event> creates the event handler in the .cs file and puts submitButton_Click in the OnClick attribute. Hope you liked it. Stay tuned for more. Till then, happy programming.

    Read the article

  • Is SQL Server stored procedure encryption safe?

    - by George2
    I am using SQL Server 2008 Enterprise on Windows Server 2003 Enterprise. I developed some stored procedures for SQL Server, and the machine the SQL Server instance is installed on may not be fully under my control (it may be used by an untrusted third party). I want to protect my stored procedures' T-SQL source code (i.e. keep it from being viewed by another party) by using the stored procedure encryption feature SQL Server provides (WITH ENCRYPTION). I am not sure whether that encryption is 100% safe, or whether the administrator of the machine SQL Server is installed on still has ways to view the stored procedures' source code. Thanks in advance, George

    Read the article

  • Debian/Ubuntu apt or pbuilder without root privileges?

    - by Tem Pora
    I want to use apt or pbuilder to build a package in a user's home directory. The home directory has enough space to hold the package's source, its dependencies, and the binary output, but the apt and pbuilder documentation says you have to be root (sudo) to use them. It's frustrating, as the only options I now have are to build the package from source, or to use plain dpkg, and in both cases figure out every dependency manually, create the directory layout manually, and install the built artifacts manually. Now, if I can do all these things manually, why do the tool writers think that doing the same through their tool is somehow more special or dangerous? I don't want to use root privileges JUST to build and test a user-land package. If I am NOT allowed to do anything outside my home directory, why can't the apt or pbuilder commands be allowed to "build" something in my home directory without root privileges? I just want to use their functionality. It seems there is nothing like Gentoo Prefix for Debian.

    Read the article

  • Commands in Task-It - Part 1

    Download Source Code. NOTE: To run the source code provided you will need to update to the RC (release candidate) versions of Silverlight 4 and Visual Studio 2010. In recent blog posts, like my MVVM post, I used Commands to invoke actions, like saving a record. In this rather simplistic sample I will cover the basics of Commands, and in my next post I will go deeper. What is a Command? I remember the first time a UI designer used the word "command": I wasn't really sure what she was referring to. I later realized it is just a term for some UI control that can invoke an action, like a Button, HyperlinkButton, RadMenuItem, RadRadioButton, etc. Why should we use Commands? I'm sure you're familiar with the code-behind approach of handling events. For example, if you had a Button and a RadMenuItem that ...
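
    The excerpt cuts off before the author's Silverlight code, but the idea it is building toward is the classic command pattern: several UI elements all delegate to one shared action object instead of each wiring up its own event handler. A language-neutral sketch of that idea, with hypothetical names and Python standing in for the post's Silverlight/C#:

        class SaveCommand:
            def can_execute(self):
                return True                  # e.g. only when the form is dirty

            def execute(self):
                print("record saved")

        class Button:
            def __init__(self, command):
                self.command = command

            def click(self):                 # a menu item, button, etc. all share this logic
                if self.command.can_execute():
                    self.command.execute()

        Button(SaveCommand()).click()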

    Read the article

  • Common filesystem for servers behind a rackspace load balancer

    - by thanos panousis
    Our PHP application consists of a single web server that receives files from clients and performs a CPU-intensive analysis on them. Right now, analysis of a single user upload can take 3 seconds to complete at 100% CPU, which caps our capacity at 1/3 of a request per second. My team's requirement is to increase capacity without a lot of code reengineering. A possible solution would be to set up a load balancer in front of multiple servers running the same app, connecting to a common DB. The problem is that the analysis writes its output files to disk. A load balancer would increase capacity, but the files won't be shared between servers, so subsequent client requests may fail. We are hosted on Rackspace: is there a way to configure some sort of "common" storage for all servers without having to rewrite our file persistence code? The current code relies on simple fopen calls and the like. What are our options?

    Read the article

  • Copy Ubuntu distro with all settings from one computer to a different one

    - by theFisher86
    I'd like to copy my exact setup from my computer at work to my computer at home, and I'm trying to figure out how to go about doing that. So far I've figured this much out. On the source computer:

        - Run dpkg --get-selections > installed-software and back up the installed-software file
        - Back up /etc/apt/sources.list
        - Back up /usr/share/applications/ to save all my custom quicklists
        - Back up /etc/fstab to save all my network mounts
        - Back up /usr/share/themes/ to save the customization I've done to my themes
        - Back up my entire home directory

    Once I get to the destination computer, I'll first do a fresh install of 11.10. Then I'll copy over my home directory, /etc/apt/sources.list, /usr/share/applications, /etc/fstab and /usr/share/themes/, and run dpkg --set-selections < installed-software followed by dselect, which should install all of my apps for me. I'm wondering if there's a way (or a need) to back up dconf and gconf settings from the source computer; I guess that's my ultimate question. I'd also like notes on anything else that might need backing up before I undertake this project. I hope this post is legit; I figure other people would be interested in this process and I don't see any other questions here that really document it. I'd also like to take this further and have each computer routinely back up all the necessary files so that both computers are basically identical at all times. That's stage 2, though...
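
    On the dconf question, a hedged suggestion (mine, not the poster's; it assumes the dconf command-line tool is installed, from the dconf-tools package on releases of that era): dconf can serialize the whole settings tree to a text file that can be loaded on the destination machine. Driven from Python here purely for illustration:

        import subprocess

        # on the source computer: serialize every dconf setting to an INI-style dump
        with open("dconf-settings.ini", "wb") as out:
            subprocess.check_call(["dconf", "dump", "/"], stdout=out)

        # on the destination computer, the reverse:
        #   dconf load / < dconf-settings.ini
        # gconf has a similar mechanism (gconftool-2 --dump /); verify before relying on it.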

    Read the article

  • Is this a valid backup strategy for MongoDB?

    - by James Simpson
    I've got a single dedicated server with a MongoDB database of around 10GB. I need to do daily backups, but I can't have downtime with the database. Is it possible to use a replica set on a single disk (with 2 instances of mongod running on different ports), and simply take the secondary offline and back up the data files to offsite storage such as S3 (journaling is turned on)? Or would using master/slave be better than a replica set? Is this viable, and if so, what potential problems could I have? If not, how should I approach this instead?
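
    For what it's worth, a minimal sketch of the proposed backup step, assuming pymongo, a secondary listening on port 27018, made-up paths, and a server version whose admin commands match these names (older releases spell the unlock step differently):

        import subprocess
        from pymongo import MongoClient

        secondary = MongoClient("localhost", 27018)
        secondary.admin.command("fsync", lock=True)   # flush writes and block the secondary
        try:
            # archive the quiescent data files, then ship the archive to S3
            subprocess.check_call(["tar", "czf", "/backups/mongo-daily.tgz", "/data/secondary"])
        finally:
            secondary.admin.command("fsyncUnlock")    # assumption: newer servers; verify on yours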

    Read the article

  • Multi Threading - How to split the tasks

    - by Motig
    If I have a game engine with the basic 'game engine' components, what is the best way to split the tasks with a multi-threaded approach? Assume I have the standard components of rendering, physics, scripts and networking, and a quad-core CPU. I see two ways of multi-threading (a sketch of the second follows below):

        Option A ('Vertical'): allot one core to each component of the engine, e.g. one core for the rendering task, one for physics, etc. Advantages:
        - I do not need to worry about thread-safety within each component
        - I can take advantage of special optimizations provided for single-threaded access (e.g. DirectX offers a flag you can set to tell it you will only use single-threading)

        Option B ('Horizontal'): each task may be split up into 1 <= n <= numCores threads and executed simultaneously, one task after the other. Advantages:
        - Allows for work-sharing, i.e. each thread can take over work still remaining while the others are processing
        - I can take advantage of libraries that are designed for multi-threading (i.e. ... DirectX)

    In retrospect I think I would pick Option B, but I wanted to hear your thoughts on the matter.
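
    An illustrative sketch of the horizontal split, with hypothetical names and Python's standard thread pool standing in for an engine-grade job system (in CPython the GIL limits true parallelism for pure-Python work, so treat this as the pattern, not a benchmark):

        from concurrent.futures import ThreadPoolExecutor

        def integrate(body):                      # hypothetical per-object physics step
            body["y"] += body["vy"] * (1.0 / 60.0)
            return body

        bodies = [{"y": 100.0, "vy": -9.8} for _ in range(10000)]

        with ThreadPoolExecutor(max_workers=4) as pool:   # one worker per core
            bodies = list(pool.map(integrate, bodies))    # work is shared across threads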

    Read the article

  • Dead-simple USB-based Windows partition cloner?

    - by OverTheRainbow
    Clonezilla is a fine open-source tool, but it requires going through several screens. Since I need to save/restore the same Windows partition, I was wondering if someone knew of a tool (open-source or not) that is easier to use and boots off a USB key drive. Ideally, it would remember the two commands to save/restore a partition, so I just need to boot the host from the USB key, choose the command, and it'll take care of business. Are there solutions that look like this? Thank you. Edit: Here's one article (among others) that shows how to tell Clonezilla to run a script to avoid the multiple screens.

    Read the article

  • Complex string matching with fuzzywuzzy

    - by That1Guy
    I'm attempting to write a process that matches obscure strings to a single 'master string' for further processing. I have a lot of data that looks something like this:

        Basketball
        Basket Ball
        Football
        BasketBallR
        BBall
        BBall - r
        FootB

    ...and so on. These need to be mapped to a master record like so:

        Basketball = Basket Ball, BBall
        Basketball - R = BasketBallR, BBall - r

    I also have instances of data resembling this format:

        Football -r
        FootBall - r-g/H,Q,HH

    These need to be separated into different categories before being mapped. For example, FootBall - r-g/H,Q,HH should become Football - r, Football - g, Football - H, Football - Q and Football - HH, and at that point each still needs to be mapped to a master record... I've tried several combinations of fuzzywuzzy matching methods, Levenshtein distance measurements, regex, etc., and can't seem to find a reliable method to logically associate different naming styles of a single item with one master name. I'm throwing my hands up in desperation. Are there any existing Python resources that can help sort out my problem? Are there other options? Can anybody point out an obvious option I might have overlooked? Basically, any suggestion, solution, resource or alternative method is greatly appreciated.
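
    For the matching step, a minimal sketch of the two-stage approach the post gestures at, assuming the fuzzywuzzy package and an invented master list; the expansion regex is a guess at the naming convention, not something from the original data spec:

        import re
        from fuzzywuzzy import process

        masters = ["Basketball", "Basketball - R", "Football - r", "Football - g"]

        def expand(raw):
            """Split forms like 'FootBall - r-g/H,Q,HH' into one string per suffix."""
            m = re.match(r"(.+?)\s*-\s*(.+)", raw)
            if not m:
                return [raw]
            base, tail = m.groups()
            return ["%s - %s" % (base, s) for s in re.split(r"[-/,]", tail)]

        for piece in expand("FootBall - r-g/H,Q,HH"):
            print(piece, "->", process.extractOne(piece, masters))  # (best match, score)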

    Read the article

  • Centrally managing 100+ websites without bankrupting a small company

    - by palintropos
    I'm mainly interested in opinions on the trade-offs between having a single central server that all the websites connect to, as opposed to each website mirroring a subset of the master database with all the products in it. For example, will I run into severe performance issues (or even security issues, or restrictions) making queries to an offsite database? Will we hit scalability issues we can't handle early on, from the sheer bandwidth required to maintain this? If we do go with something like a script that keeps smaller databases (each containing a subset of the central master data) in sync, what sorts of issues will we likely encounter there? I would really like the opinions of people far more knowledgeable than I am regarding the pros and cons of both setups and what headaches we are likely to encounter.

    CLARIFICATION: This should not be viewed as a question about whether we should implement one database vs. multiple databases; that question has been answered numerous times. The question is about the pros and cons of a deployment that manages all the websites centrally (one server) vs. trying to keep them all in sync if each has its own DB (multiple servers).

    REAL-WORLD EXAMPLE: We are a t-shirt company, and we have individual websites for our different kinds of t-shirts, but we're looking at central order management integrated with our single shopping cart (which is ColdFusion + MySQL). Now, let's say we have a t-shirt that's on 10 of our websites and we change an image for it. Ideally we would change that in one place and the change would propagate, but how would we set this up?

    Read the article

  • List all BPM Processes for a user

    - by kasriniv
    Hello, happy to start contributing to this blog. The title is probably deceptively simple and warrants elaboration. Customized BPM workspaces/user interfaces are a fairly common requirement. One of our marquee customers, in the online stock trading business, envisioned this user interaction for their BPM application:

        1. The user logs in to the internal portal.
        2. The user sees the list of roles granted to him as a drop-down list.
        3. Once the user selects a role, a list of the processes the user is part of appears. The logged-in user can be part of any swimlane role of a process.

    This is a fairly common and reasonable user-UI interaction pattern. Steps 1 and 2 are easily achievable, so the subject matter of this blog is the requirement in step 3. Objective: given a username and a role, list all the BPM processes that the user is part of, in any swimlane of any process. Here is a quick overview of the major steps in the code:

        1. Initialize the workflow/BPM context as usual.
        2. Get a handle on InstanceQueryService (getInstanceQueryService), InstanceManagementService, ProcessMetadataService and ProcessModelService.
        3. List all processes for that BPM context (listProcessMetadataSumary) and get the roles granted to that user.
        4. For each of the processes [method getAccessibleProcesss(ProcessMetadataSummary, Set)], for each of the lanes in the process, check whether a role granted to the user matches the roleName for that swimlane. If so, add it to the output.

    Notes: the usual caveats apply, including that the BPM APIs are subject to change. JDeveloper method introspection is a better friend than the API documentation :-)... (I am going to try to upload the source code, and if that doesn't work I will follow this post up with it.) Hope this helps. Ack: Yogesh K, BPM Dev team.

    Read the article

  • Installing GPSBabel on CentOS 5 x86_64

    - by Clint Chaney
    Well, first let me say I have no clue about doing anything on my server; I ask my host to do all installs for me. I run a website where users store latitude and longitude coordinates in my database, and I would like them to be able to download these waypoints to their GPS units. I found a program called GPSBabel that allows this: http://www.gpsbabel.org/ I want to be able to control GPSBabel from PHP using exec() or something along those lines. The problem is that the Linux version of the program ships as source code, and my host doesn't want to build or install it without some sort of instructions. Does anyone have experience with installing it, or know someone who has and who can point me in the right direction? Any help would be hugely appreciated; I'm pretty much stuck until I get this working.
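
    Once the binary is installed, driving it from a web app is just a process call. A sketch of the conversion step, using Python's subprocess in place of PHP's exec() for illustration; the -i/-f (input format/file) and -o/-F (output format/file) flags are GPSBabel's documented convention, but the format names and paths here are made up:

        import subprocess

        subprocess.check_call([
            "gpsbabel",
            "-i", "gpx", "-f", "waypoints.gpx",   # input: format, then file
            "-o", "garmin", "-F", "usb:",         # output: format, then file/device
        ])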

    Read the article

  • How did we get saddled with the (hierarchical) filesystem as the basic data structure?

    - by user1936
    I'm self-taught and I don't have a CS degree. The more I learn about data structures, the more I wonder, in this day and age, how we are still saddled with the filesystem, with directories and files, as the basic data storage structure on the OS. I understand the simplicity of it, but it seems nowadays that there could be more options available natively. As far as I'm aware, the only project to improve the basic functionality of the filesystem was ReiserFS, where you could tell what line of a file was changed by whom, and when. For instance, if I could have native tagging for files, where I could tag images, diagrams, word-processing documents, an entire code repository, all as belonging to a single project, that would really be helpful to me. Since I'm stuck in the filesystem paradigm, I know I could put all of those into a single folder/directory, but what if they already exist in disparate directories and need to stay there? I know there are programs out there that can do this, but why aren't they on the filesystem? Something that would be nice to have is some kind of relational feature in the filesystem, like you get with RDBMSes. I understand that was supposed to be part of Vista/7, but it fell off the feature list too. Sure, any program can store a binary file with any data structure it wants in it, but why couldn't the OS offer more complex ways of storing data, beyond the simple hierarchy of the filesystem?

    Read the article

  • Apt Stalls When Using HTTP Sources

    - by UltraNurd
    I was getting some (to me) inexplicable behavior from apt-get/aptitude on an admittedly crusty old webserver. While it was otherwise running fine, as soon as I tried a package upgrade, after downloading a few updates it would stall completely, then my SSH session hung (and I was unable to reconnect), requiring a hard restart. First, I switched to a different package source in /etc/apt/sources.list, but still got the same behavior. At this point I was assuming the NIC was dying in some weird way... but as soon as I changed the package source to use FTP instead of HTTP, everything worked fine and I was able to upgrade. For now I'm not too concerned, since I have an easy workaround, but it implies that there's something very weird with my network setup, since it seems to be protocol- (or port-) specific. I didn't think any of my NAT setup would affect outbound traffic, but I could be crazy. Any ideas what I should look for?

    Read the article

  • Date based sum in Excel / Google Docs spreadsheets

    - by alumb
    I have a bunch of rows with a date and a dollar amount (expenses). I want to produce a list of the days of the month and the running balance of the expenses: for example, the 5th entry in the list would be 8/5/2008 and the sum of all the expenses that occurred on or before 8/5/2008. Approximately, this is =sumif(D4:D30-A5,">0",E4:E30), but of course that doesn't work (the source data is dates in D4:D30 and the expenses are in E4:E30). Notes: the source data can't be sorted, for various reasons, and this must work in Google Spreadsheets, which supports a fairly complete subset of Excel's functions.
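
    A hedged suggestion (mine, not the asker's; it assumes the same layout, with the running list of dates in column A): SUMIF accepts a comparison criterion built by string concatenation, which works in both Excel and Google Spreadsheets, so the 5th list entry could be:

        =SUMIF(D$4:D$30, "<=" & A5, E$4:E$30)

    The dollar signs pin the data ranges so the formula can be filled down the list.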

    Read the article

  • Windows 2003 Server - Logon Failure error message in Event Viewer

    - by user45192
    Hi guys, I'm seeing a lot of events logged in Event Viewer with the message below. I notice it is always the same user ID that encounters this error. The user ID is used by an application to access the database; however, the account does not exist on this server. How do I trace the service/program using this user ID that is causing these error messages?

        Reason=Unknown user name or bad password
        User Name=
        Domain=
        Logon Type=3
        Logon Process=NtLmSsp
        Authentication Package=NTLM
        Workstation Name=
        Caller User Name=-
        Caller Domain=-
        Caller Logon ID=-
        Caller Process ID=-
        Transited Services=-
        Source Network Address=-
        Source Port=-
        User=SYSTEM
        ComputerName=

    Read the article

  • Scene transitions

    - by Mars
    It's my first time working with actual scenes/states, aka DrawableGameComponents, which work separately from one another. I'm now wondering what's the best way to make transitions between them, and how to affect them from other scenes.

    Let's say I wanted to "push" one scene out to the right with another one coming in at the same time. Naturally I'd have to keep drawing both until the transition is complete, and adjust the coordinates I'm drawing at while doing it. Is there a way around specifically handling this special case in every single scene? Or if I wanted to fade one into the other, the question stays the same: how would you do that without having to handle it in every single scene?

    While writing this I'm realizing it will be the same for all kinds of transitions. Maybe a central Draw method in the manager could be a solution, where parameters and effects are applied when necessary (see the sketch below). But that wouldn't work if drawn objects have their own Draw methods and aren't drawn within the scene, or if an effect has to be applied to the whole scene. So maybe scenes have to be drawn to their own render target? That way, one call to the base class after the normal drawing could be enough to apply the effects while drawing to the main render target. But I once heard there are problems when switching from target to target, back and forth. Is that even a viable option?

    As you can see, I have some basic ideas of how it might work... but nothing specific. I'd like to learn the common way to achieve such things: a general way to apply all kinds of transitions.
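
    A framework-agnostic sketch of the "central Draw in the manager" idea, with hypothetical names and Python standing in for the XNA/C# of DrawableGameComponents: the manager owns the transition state and applies the offsets itself, so individual scenes never special-case the effect.

        class Scene:
            def draw(self, surface, offset=(0, 0)):
                raise NotImplementedError    # each scene draws at whatever offset it is handed

        class SceneManager:
            def __init__(self, width):
                self.width, self.current, self.incoming, self.t = width, None, None, 0.0

            def push(self, scene):           # begin a push-left transition to `scene`
                self.incoming, self.t = scene, 0.0

            def update(self, dt):
                if self.incoming:
                    self.t = min(1.0, self.t + dt)
                    if self.t >= 1.0:
                        self.current, self.incoming = self.incoming, None

            def draw(self, surface):
                if self.incoming:            # draw both scenes; only the manager knows the offsets
                    dx = int(self.width * self.t)
                    self.current.draw(surface, offset=(-dx, 0))
                    self.incoming.draw(surface, offset=(self.width - dx, 0))
                elif self.current:
                    self.current.draw(surface)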

    Read the article

  • Is white the best base color to start with when planning to shade sprites within Unity?

    - by SpartanDonut
    I'm looking into prototyping a game in Unity which will consist of solid square sprites/tiles, with different colors representing different types of objects. I figure I can import a single square sprite and shade it appropriately in Unity, as opposed to importing squares in many different colors. My experience adjusting hue and saturation in Photoshop says that white is not an easy color to change: things that are white tend to stay white. But my testing in Unity shows that I can change the "color" of a sprite to anything other than white and the sprite is shaded appropriately, despite what my Photoshop experience led me to expect. Since white objects do seem to take on the appropriate color shading when changed within Unity, my gut tells me white is the best base color to begin with, meaning I can import a single white square sprite and simply adjust the color to represent different objects and object states. Is a white sprite actually the best base color to begin with, and why does this work in Unity when adjusting hue and saturation in Photoshop does not?
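
    A sketch of why the two tools behave differently (my explanation, not from the question): Unity's default sprite shading is, in effect, a per-channel multiply of the texture color by the chosen tint, whereas Photoshop's hue/saturation tool rotates hue and cannot move a pure white pixel off the gray axis. Under a multiply, white is the identity, so a white base takes on any tint exactly:

        def tint(texel, color):
            """Per-channel multiply, as sprite tinting effectively does."""
            return tuple(t * c for t, c in zip(texel, color))

        print(tint((1.0, 1.0, 1.0), (0.2, 0.6, 1.0)))  # white base -> exactly the tint
        print(tint((0.5, 0.5, 0.5), (0.2, 0.6, 1.0)))  # gray base -> a darkened tint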

    Read the article

  • Packaging MATLAB (or, more generally, a large binary, proprietary piece of software)

    - by nfirvine
    I'm trying to package MATLAB for internal distribution, but this could apply to any piece of software with the same architecture. In fact, I'm packaging multiple releases of MATLAB to be installed concurrently. Key things:

        - Very large installation size (~4 GB)
        - Composed of a core and several plugins (toolboxes)

    Initially, I created a single "source" package (matlab2011b) that builds several .debs (mainly matlab2011b-core and matlab2011b-toolbox-* for each toolbox). The rules file is just the standard dh $@; there is no Makefile, only copying of files. I use a number of debian/*.install files to specify the files to copy from a copy of an installation into /usr/lib/. The problem is, every time I build the thing (say, to make a correction to the core package), it recopies every file listed in the *.install files to e.g. debian/$packagename/usr/ (the build phase), and then has to bundle that into a .deb file. It takes a long time, on the order of hours, and does a lot of extra work. So my questions are:

        - Can you make dh_install do a hardlink copy (like cp -l) to save time? (AFAICT from the man page, no.) Maybe I should just do this in a Makefile? (That's going to be a big Makefile.)
        - Can you make debuild rebuild only the .debs that need rebuilding, or specify which .debs to rebuild?
        - Is my approach completely stupid? Should I break each of the toolboxes into its own source package too? (I'll have to do some silly templating or something, because there are hundreds of them. :/)

    Read the article

  • Support our movement?

    - by Mirchi Sid
    | Imagine Freedom 2013 | mnearth Student Programs
    This is to inform you about the world's first online student competition, called Imagine Freedom, conducted by mnearth corporation India. Imagine Freedom aims to become one of the most popular student IT competitions in the world. The program will run entirely on free and open-source software and operating systems like Ubuntu Linux. The competition has a lot of categories, like web designing, software development and much more. The program coordinators will contact your school about the selection process and the first steps of registration. Are you an expert in open-source free software? Then this is the time for you! Otherwise, do you know anyone who has the skills? Then tell them about the program. The competitions will be held as part of the mnearth Student Programs. The program schedule and local competition information will be sent out once applications are received. The competitions are categorized as follows.

    | Categories of Participants: Animation Films, Multimedia Presentation, 3D Animation, Web Designing, Software Development, Innovations, Cloud Apps, Games, etc.

    | Levels of Participants: High School Level, Higher Secondary Level, College Level, University Level

    | High School Level: this level is for students; the age limit is 12-24 years. The competitions start this year, to select talented students. For more information, send to [email protected], call us on 04936312206 (India), join the community on Facebook (www.facebook.com/imaginefreedomonce), or follow us on Twitter (www.twitter.com/imaginatingkids).

    Read the article

  • I need some MySQL lookup table advice

    - by Gary Beam
    I have a MySQL database with about 200 tables, 50 of which are small two-field 'id-data' lookup tables. Several of these databases are hosted on a shared server, and I have been informed that I need to reduce the total number of tables in the shared hosting environment because of performance issues relating to too many tables. My question is: could/should the 50 two-field lookup tables be combined into a single three-field table with 'id-field_name-data' fields (see the sketch below)? Even if this can be done, I will have a lot of work to do on the PHP user application. My other choice is moving the databases to a dedicated server at a much higher hosting cost. I don't believe my 200-table databases are actually causing any performance issues on this shared hosting server, at least not from the user application's standpoint; there are never more than 10 of these tables joined in any single query, although I have seen some very slow queries generated by phpMyAdmin on these databases.
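
    For what it's worth, a sketch of what such a consolidated lookup table could look like (hypothetical names, not from the question; the composite primary key preserves per-table id uniqueness):

        CREATE TABLE lookup (
          field_name VARCHAR(64)  NOT NULL,  -- which original two-field table the row came from
          id         INT          NOT NULL,
          data       VARCHAR(255) NOT NULL,
          PRIMARY KEY (field_name, id)
        );

    Each old lookup then becomes SELECT data FROM lookup WHERE field_name = '...' AND id = ...;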

    Read the article

  • Distributed website server redundancy

    - by Keith Lion
    Assume a website's infrastructure is very complicated and fully distributed (probably like most large web companies'). Am I right in thinking that although there are all these extra web servers to handle multiple client requests, there is still a single "machine" through which users must enter? I am guessing this machine would be the one physically associated with the IP address. I ask because I need to know whether, where distributed systems exist, there is still a single point of failure: usually the control node or, in this example, the machine connected to the public internet. Surely there cannot be two machines connected to the internet, as they would have to have different IP addresses? This "machine" may not be a server per se; maybe it is a piece of Cisco equipment. I just need to know whether, in the real world, these distributed systems still have a particular section where they depend on the integrity of one electronic device.

    Read the article
