Search Results

Search found 23613 results on 945 pages for 'query parameters'.


  • SQL Management Studio is painfully slow on 32-bit Windows 7

    - by Sergei
    I've been having issues running anything in SQL Management Studio on Win 7. Basically, doing anything through the Management Studio interface completely freezes it up for a few minutes. Running a query is nearly impossible because it takes nearly 2 minutes just for the IDE to parse it and another minute to run it, even though the query itself completes instantaneously outside of the IDE. I'm not even going to go into the query designer. Anything with heavy user interaction, such as editing a row in the result set where I have to click a cell, freezes up the front-end. I tried reinstalling to no avail. I also tried running in compatibility mode without any difference whatsoever. Has anybody had a similar experience? I'm running SQL Management Studio 2008 version 10.0.2531.0 on 32-bit Windows 7, connecting to a remote SQL Server instance (2008 R2). Thanks.

    Read the article

  • Plan variable and call dependencies

    - by Gerenuk
    I'd like to write down the design of my program to understand the dependencies and calls better. I know there are class diagrams which show inheritance and attribute variables. However, I'd also like to document the input parameters to methods and, in particular, which calls each method executes internally (e.g. on its input parameters). Also, sometimes it might be useful to show how actual objects are connected (if there is a standard structure). This way I can have a better understanding of the modules and the design before starting to program. Can you suggest a method for doing this kind of software design? It should map one-to-one to the code structure so that I really notice all quirks beforehand (instead of a high-level design where things are hard to implement without further work). Maybe some special diagram or tool, or a combination? It is static dependency and call design rather than time-dependent execution monitoring. (I use Python, if you have any specialized recommendations.)

    Read the article

  • SearchServer2008Express Search Webservice

    - by Mike Koerner
    I was working on calling the Search Server 2008 Express search webservice from PowerShell. I kept getting:

      <ResponsePacket xmlns="urn:Microsoft.Search.Response"><Response domain=""><Status>ERROR_NO_RESPONSE</Status><DebugErrorMessage>The search request was unable to connect to the Search Service.</DebugErrorMessage></Response></ResponsePacket>

    I checked the user authorization, the webservice search status, even the WSDL. Turns out the URL for the Search Server 2008 search webservice was incorrect. I was calling

      $URI= "http://ss2008/_vti_bin/spsearch.asmx?WSDL"

    and it should have been

      $URI= "http://ss2008/_vti_bin/search.asmx?WSDL"

    Here is my sample PowerShell script:

      # WSS Documentation http://msdn.microsoft.com/en-us/library/bb862916.aspx
      $error.clear()
      # Bad SearchServer2008Express Search URL
      $URI= "http://ss2008/_vti_bin/spsearch.asmx?WSDL"
      # Good SearchServer2008Express Search URL
      $URI= "http://ss2008/_vti_bin/search.asmx?WSDL"
      $search = New-WebServiceProxy -uri $URI -namespace WSS -class Search -UseDefaultCredential
      $queryXml = "<QueryPacket Revision='1000'>
        <Query >
          <SupportedFormats>
            <Format revision='1'>urn:Microsoft.Search.Response.Document.Document</Format>
          </SupportedFormats>
          <Context>
            <QueryText language='en-US' type='MSSQLFT'>SELECT Title, Path, Description, Write, Rank, Size FROM Scope() WHERE CONTAINS('Microsoft')</QueryText>
            <!--<QueryText language='en-US' type='TEXT'>Microsoft</QueryText> -->
          </Context>
        </Query>
      </QueryPacket>"
      $statusResponse = $search.Status()
      write-host '$statusResponse:' $statusResponse
      $GetPortalSearchInfo = $search.GetPortalSearchInfo()
      write-host '$GetPortalSearchInfo:' $GetPortalSearchInfo
      $queryResult = $search.Query($queryXml)
      write-host '$queryResult:' $queryResult

    Read the article

  • System user authentication via web interface [closed]

    - by donodarazao
    Background: We have one pretty slow and expensive satellite Internet connection that is shared in a network with 5-50 users. To limit traffic, users shall pay a certain sum of money per hour. Routing and traffic accounting on a per-user basis is done by an openSUSE 10.3 server. Login is done via pppoe, and for each connection, username, bytes_sent, bytes_rcvd, start_time, end_time, etc. are written into a MySQL database. Now it was decided that we want to change from time-based to volume-based pricing. As the original developer who installed the system a couple of years ago isn't available, I'm trying to do the changes. Although I'm absolutely new to all this, there is some progress. However, there's one point where I'm absolutely stuck. Up to now, only administrators can access connection details and billing information via a web interface. But as volume-based prices are less transparent to users than time-based prices, it is essential that users themselves can check their connections and how much they cost via the web interface. For this, we need some kind of user authentication. Actual question: how do I develop such a user authentication? Every user has a Linux system user account. With this user name and password, the connection to the pppoe server is made by the client machines. I thought about two possible ways to authenticate users. First possibility: users type username and password in a form, and this is then somehow checked. We already have the possibility to change passwords via the web interface. Here are parts of the code.

    Part of the Perl script the homepage is linked to:

      #!/usr/bin/perl
      use CGI;
      use CGI::Carp qw(fatalsToBrowser);
      use lib '../lib';
      use own_perl_module;

      my @error;
      my $data;
      $query = new CGI;
      $username = $query->param('username') || '';
      $oldpasswd = $query->param('oldpasswd') || '';
      $passwd = $query->param('passwd') || '';
      $passwd2 = $query->param('passwd2') || '';
      own_perl_module::connect();
      if ($query->param('submit')) {
          my $benutzer = own_perl_module::select_benutzer(username => $username)
              or push @error, "user not exists";
          push @error, "your password?!?" unless $passwd;
          unless (@error) {
              own_perl_module::update_benutzer($benutzer->{id},
                  { oldpasswd => $oldpasswd, passwd => $passwd, passwd2 => $passwd2 },
                  error => \@error) and push @error, "Password changed.";
          }
      }

    Here's part of the sub update_benutzer in the own_perl_module:

      if ($dat->{passwd} ne '') {
          my $username = $dat->{username} || $select->{username};
          my $system = "./chpasswd.pl '$username' '$dat->{passwd}'"
              . (defined($dat->{oldpasswd}) ? " '$dat->{oldpasswd}'" : undef);
          my $answer = `$system`;
          if ($? != 0) {
              chomp($answer);
              push @$error, $answer || "error changing password ($?)";

    Here's chpasswd.pl:

      #!/usr/bin/perl
      use FileHandle;
      use IPC::Open3;

      local $username = shift;
      local $passwd = shift;
      local $oldpasswd = shift;
      local $chat = {
          'Old Password: $' => sub { print POUT "$oldpasswd\n"; },
          'New password: $' => sub { print POUT "$passwd\n"; },
          'Re-enter new password: $' => sub { print POUT "$passwd\n"; },
          '(.*)\n$' => sub { print "$1\n"; exit 1; }
      };
      local $/ = \1;

      my $command;
      if (defined($oldpasswd)) {
          $command = "sudo -u '$username' /usr/bin/passwd";
      } else {
          $command = "sudo /usr/bin/passwd '$username'";
      }
      $pid = open3(\*POUT, \*PIN, \*PERR, $command) or die;

      my $buffer;
      LOOP: while($_ = <PERR>) {
          $buffer .= $_;
          foreach (keys(%$chat)) {
              if ($buffer =~ /$_/i) {
                  $buffer = undef;
                  &{$chat->{$_}};
              }
          }
      }
      exit;

    Could this somehow be adjusted to verify users without changing their passwords?
The second possibility I see: all pppoe connections are logged in the mysql database. If I could somehow retrieve the username (or uid) of the user connected by pppoe, this could be used to authenticate users. Users could only check their internet connections and costs when they are online (and thus paying money), but this could be tolerated. Here's a line of the script that inserts connections into the database: my $username = $ENV{PEERNAME}; I thought it would be easy to use this variable, but $username seems to be always empty in test-scripts (print $username). Any idea how to retrieve the user connected to the pppoe server? Sorry for the long question! Any help would be very much appreciated. :)
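    One way to verify a system account's credentials without changing anything is to hand the check to PAM rather than driving /usr/bin/passwd. The sketch below is in Python with the python-pam package purely for illustration (the setup above is Perl, so treat the package choice and service name as assumptions); as with the sudo calls in chpasswd.pl, checking another user's password against the shadow database generally requires elevated privileges.

      # A minimal sketch of verifying a system user's password without changing it,
      # by delegating the check to PAM via the python-pam package (an assumption --
      # the original setup is Perl, this only illustrates the idea).
      import getpass
      import pam

      def verify_system_user(username, password):
          # authenticate() returns True only if the credentials are valid for the
          # given PAM service ("login" here); nothing on the system is modified.
          p = pam.pam()
          ok = p.authenticate(username, password, service="login")
          if not ok:
              # p.code / p.reason carry the PAM error details for logging
              print(f"authentication failed for {username}: {p.reason} ({p.code})")
          return ok

      if __name__ == "__main__":
          user = input("username: ")
          pwd = getpass.getpass("password: ")
          print("valid" if verify_system_user(user, pwd) else "invalid")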

    Read the article

  • OpenGL ES, orthographic projection and viewport

    - by DarkDeny
    I want to make a simple 2D game on iOS to familiarize myself with OpenGL ES. I started with the Ray Wenderlich tutorial (How To Create A Simple 2D iPhone Game with OpenGL ES 2.0 and GLKit). The tutorial is quite good, but I'm missing some pieces of the puzzle. Ray creates the orthographic projection using magic numbers like 480 and 320. It is not clear to me why he chose these numbers, and as far as I can see the sprite is not mapped one-to-one to the pixels of the iPad simulator screen. I tried playing with the parameters the ortho matrix is created with, but I cannot figure out the math involved. How can I calculate the numbers (left, right, bottom, top, near, far) to pass when creating the orthographic projection matrix so that a sprite is shown on screen at its original size?
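    For what it's worth, 480 and 320 are the point dimensions of the original (pre-retina) iPhone screen in landscape, which is what that tutorial targets. For a one-to-one mapping you build the orthographic projection from your view's actual size: left = 0, right = width, bottom = 0, top = height, and any near/far pair that brackets z = 0 (GLKit's GLKMatrix4MakeOrtho takes exactly these six values). A minimal sketch of the standard ortho matrix, written in Python/NumPy rather than GLKit, just to show the math:

      # Sketch of the standard OpenGL orthographic projection matrix, set up for a
      # one-to-one mapping between world units and screen points:
      # left=0, right=screen width, bottom=0, top=screen height.
      import numpy as np

      def ortho(left, right, bottom, top, near, far):
          """Same matrix glOrtho / GLKMatrix4MakeOrtho would produce."""
          m = np.array([
              [2.0 / (right - left), 0.0,                   0.0,                 -(right + left) / (right - left)],
              [0.0,                  2.0 / (top - bottom),  0.0,                 -(top + bottom) / (top - bottom)],
              [0.0,                  0.0,                  -2.0 / (far - near),  -(far + near)   / (far - near)],
              [0.0,                  0.0,                   0.0,                  1.0],
          ])
          return m.T.flatten()  # transpose before flattening: OpenGL expects column-major order

      # 480x320 is the landscape point size of the pre-retina iPhone screen the
      # tutorial targets; substitute your view's actual size for a 1:1 mapping.
      projection = ortho(0.0, 480.0, 0.0, 320.0, -1.0, 1.0)
      print(projection)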

    Read the article

  • Updated copy of the OBIEE Tuning whitepaper

    - by inowodwo
    The Product Assurance team has released an updated copy of the OBIEE Tuning Whitepaper. You can find it on the PA blog at https://blogs.oracle.com/pa/entry/test or via the Support note OBIEE 11g Infrastructure Performance Tuning Guide (Doc ID 1333049.1): https://support.us.oracle.com/oip/faces/secure/km/DocumentDisplay.jspx?id=1333049.1&recomm=Y
    This revised document contains the following useful tuning items:
    1. New, improved HTTP Server caching algorithm.
    2. Oracle iPlanet Web Server tuning parameters.
    3. New tuning parameter settings/values for OPIS/OBIS components.

    Read the article

  • Sql Server 2008 Create Foreign Key Manually

    - by tgriffiths
    I have inherited an old database which wasn't designed very well. It is a SQL Server 2008 database which is missing quite a lot of foreign key relationships. Two of the tables are involved here, and I am trying to manually create a FK relationship between dbo.app_status.status_id and dbo.app_additional_info.application_id. I am using SQL Server Management Studio and trying to create the relationship with the query below:

      USE myDatabase;
      GO
      ALTER TABLE dbo.app_additional_info
          ADD CONSTRAINT FK_AddInfo_AppStatus FOREIGN KEY (application_id)
          REFERENCES dbo.app_status (status_id)
          ON DELETE CASCADE
          ON UPDATE CASCADE;
      GO

    However, I receive this error when I run the query:

      The ALTER TABLE statement conflicted with the FOREIGN KEY constraint "FK_AddInfo_AppStatus".
      The conflict occurred in database "myDatabase", table "dbo.app_status", column 'status_id'.

    I am wondering if the query is failing because each table already contains approximately 130,000 records? Please help. Thanks.
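    For context, that error normally means the data already in dbo.app_additional_info violates the constraint: some application_id values have no matching status_id in dbo.app_status. The row count itself is not the problem. A hedged sketch (pyodbc, with a placeholder connection string and driver name) of a query that lists the offending values:

      # Sketch: list application_id values in dbo.app_additional_info that have no
      # matching status_id in dbo.app_status -- these rows are what make the
      # ALTER TABLE ... ADD CONSTRAINT fail. Connection details are placeholders.
      import pyodbc

      conn = pyodbc.connect(
          "DRIVER={ODBC Driver 17 for SQL Server};"
          "SERVER=myServer;DATABASE=myDatabase;Trusted_Connection=yes;"
      )
      cursor = conn.cursor()
      cursor.execute("""
          SELECT ai.application_id, COUNT(*) AS orphan_rows
          FROM dbo.app_additional_info AS ai
          LEFT JOIN dbo.app_status AS s
              ON s.status_id = ai.application_id
          WHERE s.status_id IS NULL
            AND ai.application_id IS NOT NULL   -- NULLs do not violate a FK
          GROUP BY ai.application_id
      """)
      for application_id, orphan_rows in cursor.fetchall():
          print(application_id, orphan_rows)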

    Read the article

  • Documenting sp_ssiscatalog

    - by jamiet
    What is the best way to document an API? Moreover, what is the best way to document a T-SQL API? Before I try to answer those questions I should explain what I mean by "a T-SQL API". I think of an API as being a collection of well-defined, known code modules that provide some notion of a service to whoever uses it; in T-SQL terms I tend to think of a collection of stored procedures and functions as a form of API. It's a loose definition, I admit, and in SQL Server circles we don't tend to think of stored procedures collectively as an API, but if you think about it that's exactly what they are. The question of how to document a T-SQL API came to my mind as I worked on sp_ssiscatalog. How could I make it easy for people to learn about the capabilities of sp_ssiscatalog without forcing them to dig through the code and find out for themselves? My opening gambit was to write documentation pages on the wiki at http://ssisreportingpack.codeplex.com. That's kinda useful but it does suffer the disadvantage that someone using sp_ssiscatalog needs to go visit a webpage to read it – I want the documentation to be available wherever the user is using sp_ssiscatalog. Moreover, maintaining the wiki is a real PITA. Intellisense works up to a point, I guess, but that only shows whatever SQL Server knows about the various parameters, which isn't all that much! I wanted a better way for my API users to learn about its capabilities, and so I hit upon the idea of simply using PRINT statements within the code itself to inform the user what options are available; hence I added such PRINT statements in the latest check-in. Now when you execute (for example):

      EXEC sp_ssiscatalog @operation_type='execs'

    you can hit F6 a few times to view the messages pane and you will see the messages produced by those PRINT statements. Notice that I'm returning information about all the parameters that can be used to affect the results that just got returned. I really do think this will be very useful to anyone using sp_ssiscatalog; I myself am always forgetting what the parameters are, and I wrote the damn thing, so I can't really expect anyone else to remember them. I have not yet made available a release that has these changes in it, but when I do I'll blog about it right here. At the time of writing the latest available release of sp_ssiscatalog is DB v1.0.1.0, but if you want the latest and greatest simply download it straight from source. Feedback is welcome as always. @Jamiet
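    The same self-documenting idea carries over to other languages: keep a small map of operations to their parameters next to the entry point and print it on every call. A rough Python sketch of the pattern (all names invented for illustration, not part of sp_ssiscatalog):

      # Sketch of the same idea in Python terms: the entry point prints the options
      # that affect the result set it just returned, so the documentation travels
      # with the API instead of living only on a wiki. All names here are invented.
      OPERATION_PARAMS = {
          "execs": {
              "@exec_events": "include event messages for each execution (0/1)",
              "@execution_id_filter": "restrict output to a single execution id",
          },
          "configure": {
              "@action": "list | set",
          },
      }

      def catalog_report(operation_type="execs", **params):
          results = []  # ...fetch and return the real result set here...
          # Emit the 'messages pane' output: every parameter that could have
          # changed what was just returned, plus a one-line description.
          print(f"Parameters that affect operation_type='{operation_type}':")
          for name, description in OPERATION_PARAMS.get(operation_type, {}).items():
              print(f"  {name}: {description}")
          return results

      catalog_report("execs")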

    Read the article

  • MySQL com_select?

    - by symcbean
    I'm looking to tune my query cache a bit. According to 7.6.3.4, Query Cache Status and Maintenance, in the manual, the Com_select value is given by this formula:

      Qcache_inserts + Qcache_not_cached + queries with errors found during the column-privileges check

    However, 5.1.5, Server Status Variables, suggests that this counter is maintained by the DBMS itself. Having said that,

      mysql> show status like 'Com_select%';

    always returns a value of 1, and I'm pretty sure I've run more than one non-cached select query on my database since it started. It looks as if other people are similarly confused. Is this status variable redundant? Which bit of the manual is wrong? TIA
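    One detail worth checking first: plain SHOW STATUS returns session-scoped counters, so a fresh connection will report a tiny Com_select regardless of server activity; SHOW GLOBAL STATUS gives the server-wide values the manual's formula refers to. A hedged sketch (PyMySQL, placeholder credentials) that reads the global counters and compares Com_select against the Qcache terms of the formula:

      # Sketch: read the global status counters and compare Com_select with the
      # manual's formula (Qcache_inserts + Qcache_not_cached + queries failing the
      # column-privileges check -- the last term can't be read directly, so the sum
      # here is only a lower bound). Connection details are placeholders.
      import pymysql

      conn = pymysql.connect(host="localhost", user="monitor", password="secret")
      with conn.cursor() as cur:
          # SHOW STATUS without GLOBAL is session-scoped, which is why a fresh
          # session reports values as small as 1.
          cur.execute("SHOW GLOBAL STATUS WHERE Variable_name IN "
                      "('Com_select', 'Qcache_inserts', 'Qcache_not_cached')")
          status = {name: int(value) for name, value in cur.fetchall()}
      conn.close()

      lower_bound = status["Qcache_inserts"] + status["Qcache_not_cached"]
      print("Com_select:", status["Com_select"], "formula lower bound:", lower_bound)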

    Read the article

  • Configure Oracle Identity Manager AD/LDAP Authentication

    - by Arda Eralp
    Requirements (on the AD side):
    - An LDAP connection user with the necessary rights in AD to do subtree searches on your users and groups containers, respectively, in the scope we configure below.
    - For LDAP login in OIM to work, you need an AD group called "oimusers", of which all users who shall be able to log in to OIM need to be members. The group needs to be named exactly "oimusers".

    Step 1: Log in to the WebLogic Administration Console.
    Step 2: Create a new provider.
      Authentication Provider Name: ADAuthenticationProvider
      Type: ActiveDirectoryAuthenticator
      Control Flag: SUFFICIENT
      User scope configuration: User Base DN is the container where your users are found; the rest of the parameters stay at their defaults.
      Group scope configuration: Group Base DN is the container where your groups are found; your "oimusers" group must be found in this container or its subtree; the rest of the parameters stay at their defaults.
    Step 3: Restart the Admin Server.
    Step 4: Check the oimusers group (see the sketch below).
    Step 5: Re-order the providers.
    Step 6: Restart the Admin Server.
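    Step 4 can be scripted if you want to confirm the group from outside the console. A hedged sketch using the Python ldap3 library; the server address, bind account and DNs are placeholders for the values configured in the provider:

      # Sketch: confirm that the AD group "oimusers" exists under the configured
      # group base DN and list its members. Server, bind account and DNs are
      # placeholders -- substitute the values used in the authentication provider.
      from ldap3 import Server, Connection, SUBTREE

      server = Server("ldap://ad.example.com:389")
      conn = Connection(server, user="EXAMPLE\\svc_oim_ldap", password="secret", auto_bind=True)

      group_base_dn = "OU=Groups,DC=example,DC=com"   # same as the provider's Group Base DN
      conn.search(group_base_dn,
                  "(&(objectClass=group)(cn=oimusers))",   # group must be named exactly "oimusers"
                  search_scope=SUBTREE,
                  attributes=["member"])

      if not conn.entries:
          print("oimusers group not found under", group_base_dn)
      else:
          entry = conn.entries[0]
          members = entry.member.values if "member" in entry.entry_attributes else []
          print(f"oimusers has {len(members)} member(s)")
          for dn in members:
              print(" ", dn)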

    Read the article

  • fastcgi-mono-server with Nginx is much slower than xsp4

    - by marxin
    We started testing our MVC4 app on the xsp4 server compiled with mono-3.0.3; the speed was good enough, so we decided to set up a production fastcgi-mono-server4 (version 2.11.0.0) with nginx (1.2.6-r1). A single request that loads some JSON took ~200 ms on xsp4, but nginx serves the same request in about 1.2 s, and I am wondering where such a slowdown could come from. I followed the nginx configuration guide at http://www.mono-project.com/FastCGI_Nginx, and fastcgi-mono-server4 listens on a socket that nginx connects to. Do you have any ideas on how to log some timestamps that will help me track this down? Thanks

    Read the article

  • Interaction of a GUI-based App and Windows Service

    - by psubsee2003
    I am working on a personal project that will be designed to help manage my media library, specifically recordings created by Windows Media Center. So I am going to have the following parts to this application:
    - A Windows Service that monitors the recording folder. Once a new recording is completed that meets specific criteria, it will call several 3rd-party CLI applications to remove the commercials and re-encode the video into a more hard-drive-friendly format.
    - A controller GUI to modify settings of the service, specifically to add new shows to watch for and to modify parameters for the CLI applications.
    - A standalone (GUI-based) desktop application that can perform many of the same functions as the Windows service, except manually on specific files instead of automatically based on specific criteria.
    (It should be mentioned that I have limited experience with an application of this complexity, and I have absolutely zero experience with Windows Services.)
    Since the 1st and 3rd bullets share similar functionality, my design plan is to pull the common functionality into a separate library shared by both applications, but these two components do not need to interact otherwise. The 2nd and 3rd bullets seem to share some common functionality too: both will have a GUI, and both will help define similar parameters (one sends them to the service and the other sends them directly to the CLI applications), so I can see some advantage to combining them into the same application. On the other hand, the standalone application (bullet 3) really does not need to interact with the service at all, except for possibly sharing a few common default parameters that can easily be put into an XML file in a common location, so it seems to make more sense to just keep everything separate. The controller GUI (2nd bullet) is where I am stuck at the moment. Do I just roll this functionality (allowing the user to interact with the service to update settings and criteria) into the standalone application? Or would it be a better design decision to keep them separate? Specifically, I'm worried about adding the complexity of communicating with the Windows Service to the standalone application when it doesn't need it. Is WCF the right approach to allow the controller GUI to interact with the Windows Service? Or is there a better alternative? At the moment, I don't envision a need for a significant amount of interaction, maybe just adding a new task once in a while and occasionally tweaking a parameter, but when something is changed, I do expect the Windows service to immediately use the new settings.

    Read the article

  • After 14.04 update video/totem won't play midi

    - by bruce
    Prior to the upgrade, I could be on Wikipedia and use the 'play' links fine: totem would open a window and the on-screen graphics would play. Now, after researching, installing VLC and its extensions, making sure the GNOME codec installer is activated, and so on, all I get when totem/video opens is:

      "The parameters passed to the application had an invalid format. Please file a bug!
      The parameters were: --transient-for=16777296 gstreamer|1.0|totem-plugin-viewer|audio/x-midi-event decoder|decoder-audio/x-midi-event"

    and I'm not sure whether the bug is being reported or not, meaning I don't know if Apport is active for this, as there's no checkbox displayed. Also, the totem/video window always has the sound muted when it opens.

    Read the article

  • How to generate SPMetal for a specific list (OOTB: like tasks or contacts) with custom columns

    - by KunaalKapoor
    SPMetal is used to make use of LINQ on a list in SharePoint 2010. By default, when you generate SPMetal on a site you will get a code-generated file for most of the lists and probably more. Here is an MSDN link with some info on SPMetal: http://msdn.microsoft.com/en-us/library/ee538255(office.14).aspx
    But what if you want to generate the code for only one list? Well, it is quite simple once you figure it out. You need to add an XML file that overrides the default settings of SPMetal and specify it in the /parameters option. I will show you how to do this. First create a folder that will contain two files (GenerateSPMetalCode.bat and SPMetal.xml). Below is the content of the files.

      GenerateSPMetalCode.bat:

      "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\BIN\SPMetal" /web:http://YourServer /code:OutPutFileName.cs /language:csharp /parameters:SPMetal.xml
      pause

      SPMetal.xml:

      <?xml version="1.0" encoding="utf-8"?>
      <Web AccessModifier="Internal" xmlns="http://schemas.microsoft.com/SharePoint/2009/spmetal">
        <List Name="ListName">
          <ContentType Name="ContentTypeName" Class="GeneratedClassName" />
        </List>
        <ExcludeOtherLists></ExcludeOtherLists>
      </Web>

    You will have to change some of the text in the files so that it is specific to your SharePoint server setup. In the bat file you will have to change http://YourServer to the URL of the web where your list is. In the SPMetal.xml file you need to change ListName to the name of your list and ContentTypeName to the name of the content type you want to extract. The GeneratedClassName can be anything, but perhaps you should rename it to something more sensible. Adding the line '<List Name="ListName"><ContentType Name="ContentTypeName" Class="GeneratedClassName" /></List>' makes sure that any custom columns added to an OOTB list like contacts or tasks are also generated; these are missed out in a regular generation. So now when you run it, the SPMetal command will read the SPMetal.xml file and override its defaults. The ExcludeOtherLists element makes it so that only the code for the lists you specify will be generated. For some reason I got an error if I had this element above the List element. You should now have a generated code file called OutPutFileName.cs. You can now put this in your SharePoint project for use with your LINQ queries against that list. I will soon write a LINQ example that uses the generated class.
    UPDATE: Add the /namespace parameter to add a namespace to the generated code:

      "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\BIN\SPMetal" /web:http://YourServer /namespace:MySPMetalNameSpace /code:OutPutFileName.cs /language:csharp /parameters:SPMetal.xml

    Read the article

  • MySQL Workbench 5.2.39 GA Released

    - by user13164789
    The MySQL Developer Tools team is announcing the next maintenance release of its flagship product, MySQL Workbench, version 5.2.39. This version contains MySQL Utilities 1.0.5, a set of command-line Python utilities for helping to perform and script various administration tasks for MySQL. A complete list of changes in this release of the Utilities can be found at http://dev.mysql.com/doc/workbench/en/wb-utils-news-1-0-5.html
    MySQL Workbench 5.2 GA:
    - Data Modeling
    - Query (replaces the old MySQL Query Browser)
    - Administration (replaces the old MySQL Administrator)
    Please get your copy from our Download site. Sources and binary packages are available for several platforms, including Windows, Mac OS X and Linux: http://dev.mysql.com/downloads/workbench/
    Workbench documentation can be found here: http://dev.mysql.com/doc/workbench/en/index.html
    Utilities documentation can be found here: http://dev.mysql.com/doc/workbench/en/mysql-utilities.html
    In addition to the new Query/SQL Development and Administration modules, version 5.2 features improved stability and performance – especially in Windows, where OpenGL support has been enhanced and the UI was optimized to offer better responsiveness. This release also includes improvements to the scripting capabilities of the SQL Editor. You can read more about it in http://wb.mysql.com/workbench/doc/
    For a detailed list of resolved issues, see the change log: http://dev.mysql.com/doc/workbench/en/wb-change-history.html
    If you need any additional info or help please get in touch with us. Post in our forums or leave comments on our blog pages.
    - The MySQL Workbench Team

    Read the article

  • MVC pattern synchronisation

    - by Hariprasad
    I am facing a problem synchronizing my model and view threads. I have a view which is a table; in it, the user can select a few rows. I update the view as soon as the user clicks on any row, since I don't want the UI to be slow. This update is done by logic that runs in the controller thread. At the same time, the controller updates the model data, which takes place in a different thread: the controller puts the query in a queue, which is then executed by the model thread (a single-threaded interface). As soon as the query executes, the controller gets a signal. Now, in order to keep the view and model synchronized, I update the view again based on the return value of the query (the data returned by the model), even though I already updated the view for that user action. But I am facing issues because it takes a lot of time for the model to return the result, and by that time the user may have performed multiple clicks. So, as a result of updating the view again from the model's data, the view sometimes goes back to the state produced by an earlier click. (Suppose the user clicks three times on different rows. I update the view as soon as each click happens. I also update the view when data comes back from the model, which is supposed to match the already-updated state of the view. Now, when the user clicks the third time, I receive the data for the first click from the model; as a result, the view goes back to the state generated by the first click.) Is there any way to handle such a synchronization issue?
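    A common way to handle this is to stamp every query the controller sends to the model thread with an increasing sequence number and have the result handler drop any reply whose stamp is no longer the latest, so only the most recent click can overwrite the view. A framework-free Python sketch of the idea (the model's async API here is a stand-in, not your actual interface):

      # Sketch of discarding stale model replies: the controller stamps each query
      # with an increasing sequence number; a result only updates the view if its
      # stamp still matches the most recent request.
      import itertools
      import threading

      class FakeModel:
          """Stand-in for the single-threaded model interface (assumed API)."""
          def query_async(self, row, callback):
              threading.Thread(target=lambda: callback({"row": row})).start()

      class Controller:
          def __init__(self, model):
              self.model = model
              self._seq = itertools.count(1)
              self._latest = 0
              self._lock = threading.Lock()

          def on_row_clicked(self, row):
              with self._lock:
                  seq = next(self._seq)
                  self._latest = seq
              self.update_view_optimistically(row)
              # The model runs in its own thread and calls us back when done.
              self.model.query_async(row, callback=lambda data: self._on_result(seq, data))

          def _on_result(self, seq, data):
              with self._lock:
                  if seq != self._latest:
                      return          # a newer click superseded this query; drop it
              self.update_view_from_model(data)

          def update_view_optimistically(self, row):
              print("view updated immediately for row", row)

          def update_view_from_model(self, data):
              print("view reconciled with model data", data)

      Controller(FakeModel()).on_row_clicked(3)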

    Read the article

  • Finding Locked Out Users

    - by Bart Silverstrim
    Active Directory, up to 2008 (our servers are a mix of 2008, 2003, ...). I'm looking for a quick way to query AD to find out which users are locked out, preferably from a batch or script file, to monitor for possible issues with user accounts being attacked by an automated attack, or just anomalies in the network. I've Googled and my Google-fu has failed; I found a query on Microsoft's own knowledge base that cites a string to use on Server 2003 with the management snap-in's saved queries (http://support.microsoft.com/kb/555131), but when I entered it, the query returned 400 users that a spot-check showed did NOT have a checkmark in the "Account is locked out" box under "Account". In fact, I don't see anything wrong with their accounts. Is there a simple utility (wisesoft bulkadusers apparently uses this method behind the scenes, since its results were also wrong) that will give a count of users and possibly their user object names? Script? Something?
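    The 400 false positives are most likely the known quirk of that KB query: AD does not clear lockoutTime when the lockout duration expires, only on the user's next successful logon, so (lockoutTime>=1) also matches accounts that have already auto-unlocked. Comparing lockoutTime against now minus the domain's lockout duration avoids that. A hedged sketch with Python's ldap3 library; server, credentials, base DN and the 30-minute duration are placeholders (if your policy locks accounts until an admin unlocks them, fall back to lockoutTime>=1):

      # Sketch: query AD for accounts that are still locked out right now.
      # lockoutTime is stored as 100-nanosecond intervals since 1601-01-01 (UTC)
      # and is NOT reset when the lockout duration expires, so we only match
      # lockouts newer than (now - lockoutDuration).
      from datetime import datetime, timedelta, timezone
      from ldap3 import Server, Connection, SUBTREE

      LOCKOUT_DURATION = timedelta(minutes=30)        # match your domain's lockout policy
      EPOCH_1601 = datetime(1601, 1, 1, tzinfo=timezone.utc)

      def to_filetime(dt):
          return int((dt - EPOCH_1601).total_seconds() * 10_000_000)

      cutoff = to_filetime(datetime.now(timezone.utc) - LOCKOUT_DURATION)

      server = Server("ldap://dc1.example.com")
      conn = Connection(server, user="EXAMPLE\\audit", password="secret", auto_bind=True)
      conn.search("DC=example,DC=com",
                  f"(&(objectCategory=person)(objectClass=user)(lockoutTime>={cutoff}))",
                  search_scope=SUBTREE,
                  attributes=["sAMAccountName"])

      print(f"{len(conn.entries)} account(s) currently locked out")
      for entry in conn.entries:
          print(" ", entry.sAMAccountName)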

    Read the article

  • Create an array from mysql with column names and values [on hold]

    - by ScaZ
    I'm trying to create an array with PHP and MySQL, but I always get errors. The code I'm using:

      function db_listar_usuarios(){
          $link = db_connect();
          $query = "select * from usuarios" or die("Problemas en el select: " . mysqli_error($link));
          $result = $link->query($query);
          while($row = mysqli_fetch_assoc($result)) {
              $row['nombre'] . array(;
              foreach ($row as $col => $val) {
                  $col => $val;
              }
          }
      }

    And what I want to create with this code is:

      array(
          'john' => array('address' => 'st 123', 'age' => '25', 'surname' => 'doe'),
          'ane'  => array('address' => 'av 456', 'age' => '32', 'surname' => 'smith'),
      );

    to then use like this:

      private $contacts = db_listar_usuarios();

    I use two files, functions.php and server.php; server.php is a downloaded example file for a REST API. Here are both of them:
      server.php - pastebin.com/5j54m1Mz
      functions.php - pastebin.com/N7jMhSBa
    Thank you in advance!

    Read the article

  • Using Queries with Coherence Write-Behind Caches

    - by jpurdy
    Applications that use write-behind caching and wish to query the logical entity set have the option of querying the NamedCache itself or querying the database. In the former case, no particular restrictions exist beyond the limitations intrinsic to the Coherence query engine itself. In the latter case, queries may see partially committed transactions (e.g. with a parent-child relationship, the version of the parent may be different than the version of the child objects) and/or significant version skew (the query may see the current version of one object and a far older version of another object). This is consistent with "read committed" semantics, but the read skew may be far greater than would ever occur in a non-cached environment. As is usually the case, the application developer may choose to accept these limitations (with the hope that they are sufficiently infrequent), or they may choose to validate the reads (perhaps via a version flag on the objects). This also applies to situations where a third party application (such as a reporting tool) is querying the database. In many cases, the database may only be in a consistent state after the Coherence cluster has been halted.

    Read the article

  • How can I estimate the entropy of a password?

    - by Wug
    Having read various resources about password strength I'm trying to create an algorithm that will provide a rough estimation of how much entropy a password has. I'm trying to create an algorithm that's as comprehensive as possible. At this point I only have pseudocode, but the algorithm covers the following:
    - password length
    - repeated characters
    - patterns (logical)
    - different character spaces (LC, UC, Numeric, Special, Extended)
    - dictionary attacks

    It does NOT cover the following, and SHOULD cover it WELL (though not perfectly):
    - ordering (passwords can be strictly ordered by output of this algorithm)
    - patterns (spatial)

    Can anyone provide some insight on what this algorithm might be weak to? Specifically, can anyone think of situations where feeding a password to the algorithm would OVERESTIMATE its strength? Underestimations are less of an issue. The algorithm:

      // the password to test
      password = ?
      length = length(password)

      // unique character counts from password (duplicates discarded)
      uqlca = number of unique lowercase alphabetic characters in password
      uquca = number of uppercase alphabetic characters
      uqd   = number of unique digits
      uqsp  = number of unique special characters (anything with a key on the keyboard)
      uqxc  = number of unique special special characters (alt codes, extended-ascii stuff)

      // algorithm parameters, total sizes of alphabet spaces
      Nlca = total possible number of lowercase letters (26)
      Nuca = total uppercase letters (26)
      Nd   = total digits (10)
      Nsp  = total special characters (32 or something)
      Nxc  = total extended ascii characters that dont fit into other categorys (idk, 50?)

      // algorithm parameters, pw strength growth rates as percentages (per character)
      flca = entropy growth factor for lowercase letters (.25 is probably a good value)
      fuca = EGF for uppercase letters (.4 is probably good)
      fd   = EGF for digits (.4 is probably good)
      fsp  = EGF for special chars (.5 is probably good)
      fxc  = EGF for extended ascii chars (.75 is probably good)

      // repetition factors. few unique letters == low factor, many unique == high
      rflca = (1 - (1 - flca) ^ uqlca)
      rfuca = (1 - (1 - fuca) ^ uquca)
      rfd   = (1 - (1 - fd  ) ^ uqd  )
      rfsp  = (1 - (1 - fsp ) ^ uqsp )
      rfxc  = (1 - (1 - fxc ) ^ uqxc )

      // digit strengths
      strength = ( rflca * Nlca + rfuca * Nuca + rfd * Nd + rfsp * Nsp + rfxc * Nxc ) ^ length

      entropybits = log_base_2(strength)

    A few inputs and their desired and actual entropy_bits outputs:

      INPUT               DESIRED         ACTUAL
      aaa                 very pathetic   8.1
      aaaaaaaaa           pathetic        24.7
      abcdefghi           weak            31.2
      H0ley$Mol3y_        strong          72.2
      s^fU¬5ü;y34G<       wtf             88.9
      [a^36]*             pathetic        97.2
      [a^20]A[a^15]*      strong          146.8
      xkcd1**             medium          79.3
      xkcd2**             wtf             160.5

      *  these 2 passwords use shortened notation, where [a^N] expands to N a's.
      ** xkcd1 = "Tr0ub4dor&3", xkcd2 = "correct horse battery staple"

    The algorithm does realize (correctly) that increasing the alphabet size (even by one digit) vastly strengthens long passwords, as shown by the difference in entropy_bits for the 6th and 7th passwords, which both consist of 36 a's, but the second's 21st a is capitalized. However, it does not account for the fact that having a password of 36 a's is not a good idea: it's easily broken with a weak password cracker (and anyone who watches you type it will see it), and the algorithm doesn't reflect that. It does, however, reflect the fact that xkcd1 is a weak password compared to xkcd2, despite having greater complexity density (is this even a thing?). How can I improve this algorithm?
    Addendum 1
    Dictionary attacks and pattern-based attacks seem to be the big thing, so I'll take a stab at addressing those. I could perform a comprehensive search through the password for words from a word list and replace words with tokens unique to the words they represent. Word-tokens would then be treated as characters and have their own weight system, and would add their own weights to the password. I'd need a few new algorithm parameters (I'll call them lw, Nw ~= 2^11, fw ~= .5, and rfw) and I'd factor the weight into the password as I would any of the other weights. This word search could be specially modified to match both lowercase and uppercase letters as well as common character substitutions, like that of E with 3. If I didn't add extra weight to such matched words, the algorithm would underestimate their strength by a bit or two per word, which is OK. Otherwise, a general rule would be: for each non-perfect character match, give the word a bonus bit. I could then perform simple pattern checks, such as searches for runs of repeated characters and derivative tests (take the difference between each character), which would identify patterns such as 'aaaaa' and '12345', and replace each detected pattern with a pattern token, unique to the pattern and length. The algorithmic parameters (specifically, entropy per pattern) could be generated on the fly based on the pattern. At this point, I'd take the length of the password. Each word token and pattern token would count as one character; each token would replace the characters it symbolically represents. I made up some sort of pattern notation, but it includes the pattern length l, the pattern order o, and the base element b. This information could be used to compute some arbitrary weight for each pattern. I'd do something better in actual code.

    Modified Example:

      Password:          1234kitty$$$$$herpderp
      Tokenized:         1 2 3 4 k i t t y $ $ $ $ $ h e r p d e r p
      Words Filtered:    1 2 3 4 @W5783 $ $ $ $ $ @W9001 @W9002
      Patterns Filtered: @P[l=4,o=1,b='1'] @W5783 @P[l=5,o=0,b='$'] @W9001 @W9002
      Breakdown:         3 small, unique words and 2 patterns
      Entropy:           about 45 bits, as per modified algorithm

      Password:          correcthorsebatterystaple
      Tokenized:         c o r r e c t h o r s e b a t t e r y s t a p l e
      Words Filtered:    @W6783 @W7923 @W1535 @W2285
      Breakdown:         4 small, unique words and no patterns
      Entropy:           43 bits, as per modified algorithm

    The exact semantics of how entropy is calculated from patterns is up for discussion. I was thinking something like:

      entropy(b) * l * (o + 1) // o will be either zero or one

    The modified algorithm would find flaws with and reduce the strength of each password in the original table, with the exception of s^fU¬5ü;y34G<, which contains no words or patterns.
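    For anyone who wants to experiment with the core (pre-addendum) scheme, below is a rough Python transcription using the parameter values quoted above. It is only a sketch: the ACTUAL column in the table came from the author's own implementation, so the numbers will not necessarily match this transcription exactly.

      # Rough Python transcription of the core algorithm above, using the quoted
      # growth factors and alphabet sizes. Only a sketch for experimenting with
      # the parameters; it is not the author's implementation.
      import math
      import string

      ALPHABETS = [
          # (character set,                 space size N, growth factor f)
          (set(string.ascii_lowercase),     26,           0.25),
          (set(string.ascii_uppercase),     26,           0.40),
          (set(string.digits),              10,           0.40),
          (set(string.punctuation + " "),   32,           0.50),
      ]
      EXTENDED = (None, 50, 0.75)   # anything not in the sets above ("extended" chars)

      def entropy_bits(password):
          if not password:
              return 0.0
          # Collect the unique characters seen in each character space.
          seen = [set() for _ in range(len(ALPHABETS) + 1)]
          for ch in password:
              for i, (chars, _, _) in enumerate(ALPHABETS):
                  if ch in chars:
                      seen[i].add(ch)
                      break
              else:
                  seen[-1].add(ch)   # extended character
          # strength = (sum of repetition_factor * N over all spaces) ^ length,
          # so entropy in bits is length * log2(that sum).
          base = 0.0
          for uniques, (_, n, f) in zip(seen, ALPHABETS + [EXTENDED]):
              repetition_factor = 1 - (1 - f) ** len(uniques)
              base += repetition_factor * n
          return len(password) * math.log2(base)

      for pw in ["aaa", "aaaaaaaaa", "abcdefghi", "H0ley$Mol3y_",
                 "Tr0ub4dor&3", "correct horse battery staple"]:
          print(f"{pw!r}: {entropy_bits(pw):.1f} bits")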

    Read the article

  • Google Currency Convertor JSON API

    - by Gopinath
    There are many live currency conversion services available on the web, and the popular ones among them are Google, Yahoo, MSN & XE. Among these four, Google is the developers' darling because it provides a simple JSON API that can be integrated into your applications:

      http://www.google.com/ig/calculator?hl=en&q=1USD=?INR

    Using the API is very simple and it takes two parameters as input. The first parameter, "hl", is the language code in which you want the output. The second parameter, "q", is the conversion query in the format <number><from currency code>=?<to currency code>. The URL given above requests conversion of 1 USD into INR. The JSON output for that query would be similar to:

      {lhs: "1 U.S. dollar",rhs: "54.4602984 Indian rupees",error: "",icc: true}

    Examples:
      100 USD in INR: http://www.google.com/ig/calculator?hl=en&q=100USD=?INR
      1 GBP in INR: http://www.google.com/ig/calculator?hl=en&q=1GBP=?INR
      1 USD in INR, output in French: http://www.google.com/ig/calculator?hl=fr&q=1USD=?INR

    This is an undocumented service, so expect changes at any time. But as long as it works, you have a programmatic way to convert currencies.
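    A minimal sketch of calling the endpoint from Python with the standard library. Note that, as the sample output above shows, the response uses unquoted keys, so it is not strict JSON; the sketch pulls out the rhs field with a regex instead of json.loads(). Being undocumented, the URL and format may change or disappear at any time.

      # Sketch: call the converter endpoint described above and extract "rhs".
      # The response is not strict JSON (unquoted keys), so a small regex is used.
      import re
      import urllib.request

      def convert(amount, from_code, to_code, lang="en"):
          url = (f"http://www.google.com/ig/calculator"
                 f"?hl={lang}&q={amount}{from_code}=?{to_code}")
          with urllib.request.urlopen(url) as response:
              body = response.read().decode("utf-8", errors="replace")
          match = re.search(r'rhs:\s*"([^"]*)"', body)
          return match.group(1) if match else body   # fall back to the raw payload

      print(convert(100, "USD", "INR"))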

    Read the article

  • Cross domain LDAP

    - by Adam
    For a system we are developing, we have two domains, an internal and an external one, with bidirectional trust between them. However, the servers are only able to connect to their own DCs. We have an application server on the internal domain which needs to use an LDAP query to gather a list of users from a group on the external domain. How do I go about writing an LDAP query that asks one DC to go and ask another DC for a list of users? I tried querying the internal DC with the same LDAP query I would use if it could hit the external DC directly, but this does not work. When I use Softerra LDAP Administrator I can view the full hierarchy of the internal domain, but despite the trust relationship between the domains I am unable to see any of the external domain. Any suggestions or help would be greatly appreciated.

    Read the article

  • Design pattern: static function call with input/output containers?

    - by Pavlo Dyban
    I work for a company in the software research department. We use algorithms from our real software and wrap them so that we can use them for prototyping. Every time an algorithm interface changes, we need to adapt our wrappers accordingly. Recently all algorithms have been refactored in such a manner that, instead of accepting many different inputs and returning outputs via referenced parameters, they now accept one input data container and one output data container (the latter is passed by reference). The algorithm interface is limited to a static function call like this:

      class MyAlgorithm {
          static bool calculate(MyAlgorithmInput input, MyAlgorithmOutput &output);
      }

    This is actually a very powerful design, though I have never seen it in a C++ programming environment before. Changes in the number of parameters and their data types are now encapsulated and they don't change the algorithm call. In the latest algorithm I developed, I used the same scheme. Now I want to know whether this is a popular design pattern and what it is called.
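    The same calling convention translates directly to other languages. A small Python sketch using dataclasses as the input and output containers (names are illustrative only): adding or removing a field touches only the containers, never the call signature.

      # Sketch of the same calling convention in Python: one input container, one
      # output container, and a single static entry point. Names are illustrative.
      from dataclasses import dataclass, field

      @dataclass
      class MyAlgorithmInput:
          samples: list = field(default_factory=list)
          threshold: float = 0.5

      @dataclass
      class MyAlgorithmOutput:
          accepted: list = field(default_factory=list)
          rejected_count: int = 0

      class MyAlgorithm:
          @staticmethod
          def calculate(inp: MyAlgorithmInput, out: MyAlgorithmOutput) -> bool:
              # Fields can be added to either container without touching this signature.
              out.accepted = [s for s in inp.samples if s >= inp.threshold]
              out.rejected_count = len(inp.samples) - len(out.accepted)
              return True   # success flag, mirroring the C++ bool return

      result = MyAlgorithmOutput()
      ok = MyAlgorithm.calculate(MyAlgorithmInput(samples=[0.2, 0.7, 0.9]), result)
      print(ok, result)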

    Read the article

  • Information I need to know as a Java Developer [on hold]

    - by Woy
    I'm a Java developer trying to get more knowledge to become a better programmer. I've listed a number of technologies to learn. Beyond what I've listed, what technologies would you suggest a junior Java developer learn as well? I realize there are a lot of things to study.

    Java:
    - how a garbage collector works
    - resource management
    - network programming: TCP/IP, HTTP
    - transactions
    - consistency: interfaces, classes, collections, hash codes, algorithms, computational complexity
    - concurrent programming: synchronizing, semaphores
    - stream management
    - immutability: thread-safety
    - byte code manipulation, reflection, Aspect-Oriented Programming as a base to understand frameworks such as Spring

    Web stack:
    - servlets, filters, socket programming

    Libraries:
    - JDK, GWT, Apache Commons, Joda-Time
    - dependency injection: Spring, Nano

    Tools:
    - IDE: very good knowledge, including the debugger and profiler
    - web analyzers: Wireshark, Firebug
    - unit testing

    SQL/Databases:
    - Basics: SELECTing columns from a table
    - Aggregates Part 1: COUNT, SUM, MAX/MIN
    - Aggregates Part 2: DISTINCT, GROUP BY, HAVING
    - Intermediate JOINs, ANSI-89 and ANSI-92 syntax
    - UNION vs UNION ALL
    - NULL handling: COALESCE & native NULL handling
    - Subqueries: IN, EXISTS, and inline views
    - Subqueries: correlated
    - WITH syntax: subquery factoring/CTE
    - Views
    - Advanced topics: functions, stored procedures, packages
    - Pivoting data: CASE & PIVOT syntax
    - Hierarchical queries
    - Cursors: implicit and explicit
    - Triggers
    - Dynamic SQL
    - Materialized views
    - Query optimization: indexes
    - Query optimization: explain plans
    - Query optimization: profiling
    - Data modelling: normal forms, 1 through 3
    - Data modelling: primary & foreign keys
    - Data modelling: table constraints
    - Data modelling: link/corollary tables
    - Full-text searching
    - XML
    - Isolation levels
    - Entity Relationship Diagrams (ERDs), logical and physical
    - Transactions: COMMIT, ROLLBACK, error handling

    Read the article

  • How should UI layer pass user input to BL layer?

    - by BornToCode
    I'm building an n-tier application with UI, BL, DAL & Entities (built from POCOs) projects. (All projects have a reference to the Entities.) My question is: how should I pass user input from the UI to the BL? Should I pass a bunch of strings to the BL method and let the BL build the object from those parameters, or should I build the objects inside the UI submit function and pass the objects themselves as parameters? EDIT: I wrote n-tier application, but what I actually meant was just layers.

    Read the article
