Search Results

Search found 6651 results on 267 pages for 'description'.

Page 44/267 | < Previous Page | 40 41 42 43 44 45 46 47 48 49 50 51  | Next Page >

  • Refactor This (Ugly Code)!

    - by Alois Kraus
    Ayende has posted some ugly code on his blog to refactor. First and foremost, it is nearly impossible to reason about other people's code without knowing the driving forces behind it. It is certainly possible to make it much cleaner when potential sources of errors cannot happen in the first place due to good design. I can see what the intention of the code is, but I do not know every brittle detail, such as whether I am allowed to reorder things here and there to simplify them. So I decided to make the code much simpler by identifying the different responsibilities of the method and encapsulating them in separate classes. The code we need to refactor seems to deal with a handler that runs after a message has been sent to a message queue. The handler completes the current transaction, if there is one, and handles any errors happening there. If errors occur during the completion of the transaction, the transaction is at least disposed. We can enter the handler already in a faulty state, in which case we deliver the completion event anyway, signal a failure event, and try to resend the message to the queue if it was not inside a transaction. All of this is decorated with many try/catch blocks, duplicated code and some state variables to route the program flow. It is hard to understand and difficult to reason about. In other words: this code is a mess and could have been written by me if I was under pressure. Here is the code we want to refactor:

        private void HandleMessageCompletion(
                                     Message message,
                                     TransactionScope tx,
                                     OpenedQueue messageQueue,
                                     Exception exception,
                                     Action<CurrentMessageInformation, Exception> messageCompleted,
                                     Action<CurrentMessageInformation> beforeTransactionCommit)
        {
            var txDisposed = false;
            if (exception == null)
            {
                try
                {
                    if (tx != null)
                    {
                        if (beforeTransactionCommit != null)
                            beforeTransactionCommit(currentMessageInformation);
                        tx.Complete();
                        tx.Dispose();
                        txDisposed = true;
                    }
                    try
                    {
                        if (messageCompleted != null)
                            messageCompleted(currentMessageInformation, exception);
                    }
                    catch (Exception e)
                    {
                        Trace.TraceError("An error occured when raising the MessageCompleted event, the error will NOT affect the message processing" + e);
                    }
                    return;
                }
                catch (Exception e)
                {
                    Trace.TraceWarning("Failed to complete transaction, moving to error mode" + e);
                    exception = e;
                }
            }
            try
            {
                if (txDisposed == false && tx != null)
                {
                    Trace.TraceWarning("Disposing transaction in error mode");
                    tx.Dispose();
                }
            }
            catch (Exception e)
            {
                Trace.TraceWarning("Failed to dispose of transaction in error mode." + e);
            }
            if (message == null)
                return;
            try
            {
                if (messageCompleted != null)
                    messageCompleted(currentMessageInformation, exception);
            }
            catch (Exception e)
            {
                Trace.TraceError("An error occured when raising the MessageCompleted event, the error will NOT affect the message processing" + e);
            }
            try
            {
                var copy = MessageProcessingFailure;
                if (copy != null)
                    copy(currentMessageInformation, exception);
            }
            catch (Exception moduleException)
            {
                Trace.TraceError("Module failed to process message failure: " + exception.Message + moduleException);
            }
            if (messageQueue.IsTransactional == false) // put the item back in the queue
            {
                messageQueue.Send(message);
            }
        }

    You can see quite a lot of processing and handling going on there. Yes, this looks like real-world code someone put together to make things work, and he does not trust his callbacks. I guess these are optional event handlers whose delegates were extracted from an event so they can be called back later when necessary. Let's see what the author of this code intended:

        private void HandleMessageCompletion(
            TransactionHandler transactionHandler,
            MessageCompletionHandler handler,
            CurrentMessageInformation messageInfo,
            ErrorCollector errors
            )
        {
            // commit current pending transaction
            transactionHandler.CallHandlerAndCommit(messageInfo, errors);

            // We have an error for a null message, do not send completion event
            if (messageInfo.CurrentMessage == null)
                return;

            // Send completion event in any case, regardless of errors
            handler.OnMessageCompleted(messageInfo, errors);

            // put message back if queue is not transactional
            transactionHandler.ResendMessageOnError(messageInfo.CurrentMessage, errors);
        }

    I did not bother to write out the intention again, since the code should be pretty self-explanatory by now. I have used comments to explain the still nontrivial procedure step by step, revealing the real intention behind all this complex program flow. The original complexity of the problem domain does not go away, but by applying the Single Responsibility Principle (SRP) and some functional style we can hide the necessary complexity behind useful abstractions which make it much easier to reason about. Since most of the method seems to deal with errors, I thought it was a good idea to encapsulate the error state of our current message in an ErrorCollector object which stores all exceptions in a list, along with a description of what the error was about, kept in the exception itself. We can log it later or not, depending on the log level. It is really just a simple list that encapsulates the current error state.
        class ErrorCollector
        {
            List<Exception> _Errors = new List<Exception>();

            public void Add(Exception ex, string description)
            {
                ex.Data["Description"] = description;
                _Errors.Add(ex);
            }

            public Exception Last
            {
                get
                {
                    return _Errors.LastOrDefault();
                }
            }

            public bool HasError
            {
                get
                {
                    return _Errors.Count > 0;
                }
            }
        }

    Since the error state is global, we have two choices: store a reference in the other helper objects (TransactionHandler and MessageCompletionHandler) or pass it to the method calls when necessary. I chose the latter because a second argument does not hurt and it makes it easier to reason about the overall state, while the helper objects remain stateless and immutable, which makes them much easier to understand and, as a bonus, thread safe as well. This does not mean that the stored member variables are stateless or thread safe, but at least our helper classes are. Most of the complexity is located in the transaction handling, which I consider a separate responsibility and delegate to the TransactionHandler. It does nothing if there is no transaction; otherwise it calls the before-commit handler, commits the transaction, and disposes the transaction if the commit did throw. In fact it has a second responsibility: to resend the message if the transaction did fail. I saw a good fit there since it deals with transaction failures.

        class TransactionHandler
        {
            TransactionScope _Tx;
            Action<CurrentMessageInformation> _BeforeCommit;
            OpenedQueue _MessageQueue;

            public TransactionHandler(TransactionScope tx, Action<CurrentMessageInformation> beforeCommit, OpenedQueue messageQueue)
            {
                _Tx = tx;
                _BeforeCommit = beforeCommit;
                _MessageQueue = messageQueue;
            }

            public void CallHandlerAndCommit(CurrentMessageInformation currentMessageInfo, ErrorCollector errors)
            {
                if (_Tx != null && !errors.HasError)
                {
                    try
                    {
                        if (_BeforeCommit != null)
                        {
                            _BeforeCommit(currentMessageInfo);
                        }

                        _Tx.Complete();
                        _Tx.Dispose();
                    }
                    catch (Exception ex)
                    {
                        errors.Add(ex, "Failed to complete transaction, moving to error mode");
                        Trace.TraceWarning("Disposing transaction in error mode");
                        try
                        {
                            _Tx.Dispose();
                        }
                        catch (Exception ex2)
                        {
                            errors.Add(ex2, "Failed to dispose of transaction in error mode.");
                        }
                    }
                }
            }

            public void ResendMessageOnError(Message message, ErrorCollector errors)
            {
                if (errors.HasError && !_MessageQueue.IsTransactional)
                {
                    _MessageQueue.Send(message);
                }
            }
        }

    If we need to change the handling in the future, we have a much easier time reasoning about our application flow than before. After we have committed our transaction and called our callback, we can call the completion handler, which is the main purpose of the HandleMessageCompletion method after all. The responsibility of the MessageCompletionHandler is to call the completion callback, and to call the failure callback when some error has occurred.

        class MessageCompletionHandler
        {
            Action<CurrentMessageInformation, Exception> _MessageCompletedHandler;
            Action<CurrentMessageInformation, Exception> _MessageProcessingFailure;

            public MessageCompletionHandler(Action<CurrentMessageInformation, Exception> messageCompletedHandler,
                                            Action<CurrentMessageInformation, Exception> messageProcessingFailure)
            {
                _MessageCompletedHandler = messageCompletedHandler;
                _MessageProcessingFailure = messageProcessingFailure;
            }

            public void OnMessageCompleted(CurrentMessageInformation currentMessageInfo, ErrorCollector errors)
            {
                try
                {
                    if (_MessageCompletedHandler != null)
                    {
                        _MessageCompletedHandler(currentMessageInfo, errors.Last);
                    }
                }
                catch (Exception ex)
                {
                    errors.Add(ex, "An error occured when raising the MessageCompleted event, the error will NOT affect the message processing");
                }

                if (errors.HasError)
                {
                    SignalFailedMessage(currentMessageInfo, errors);
                }
            }

            void SignalFailedMessage(CurrentMessageInformation currentMessageInfo, ErrorCollector errors)
            {
                try
                {
                    if (_MessageProcessingFailure != null)
                        _MessageProcessingFailure(currentMessageInfo, errors.Last);
                }
                catch (Exception moduleException)
                {
                    errors.Add(moduleException, "Module failed to process message failure");
                }
            }
        }

    If for some reason I screwed up the logic and we need to call the completion handler from our TransactionHandler, we can simply add a third argument for the MessageCompletionHandler to the CallHandlerAndCommit method and we are fine again. If the logic becomes even more complex and we need to ensure that the completed event is triggered only once, we now have one place, the completion handler, in which to capture that state. During this refactoring I simply put together the things that belong together and came up with useful abstractions.
    If you look at the original argument list of the HandleMessageCompletion method, you can see how many things I have put together:

        Message message -> CurrentMessageInformation messageInfo (encapsulates Message message)
        TransactionScope tx, Action<CurrentMessageInformation> beforeTransactionCommit, OpenedQueue messageQueue -> TransactionHandler transactionHandler (encapsulates TransactionScope tx, OpenedQueue messageQueue, Action<CurrentMessageInformation> beforeTransactionCommit)
        Exception exception -> ErrorCollector errors
        Action<CurrentMessageInformation, Exception> messageCompleted -> MessageCompletionHandler handler (encapsulates Action<CurrentMessageInformation, Exception> messageCompleted, Action<CurrentMessageInformation, Exception> messageProcessingFailure)

    The reason is simple: put the things that have relationships together and you will almost automatically find useful abstractions. I hope this makes sense to you. If you see a way to make it even simpler, you can show Ayende your improved version as well.
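    To make the wiring concrete, here is a rough sketch of what a call site could look like once the helpers exist; only currentMessageInformation and the MessageProcessingFailure event come from the original code, everything else is illustrative.

        // Hypothetical call site - a sketch only, not taken from the original post.
        private void OnMessageProcessed(Message message,
                                        TransactionScope tx,
                                        OpenedQueue messageQueue,
                                        Exception exception,
                                        Action<CurrentMessageInformation, Exception> messageCompleted,
                                        Action<CurrentMessageInformation> beforeTransactionCommit)
        {
            // collect any error state we enter the handler with
            var errors = new ErrorCollector();
            if (exception != null)
                errors.Add(exception, "Message processing failed before completion");

            // stateless helpers, each owning exactly one responsibility
            var transactionHandler = new TransactionHandler(tx, beforeTransactionCommit, messageQueue);
            var completionHandler = new MessageCompletionHandler(messageCompleted, MessageProcessingFailure);

            HandleMessageCompletion(transactionHandler, completionHandler, currentMessageInformation, errors);
        }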

    Read the article

  • SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress

    - by pinaldave
    One of the most common data integration tasks I run into is a desire to move data from a file into a database table.  Generally the user is familiar with his data, the structure of the file, and the database table, but is unfamiliar with data integration tools and therefore views this task as something that is difficult.  What these users really need is a point and click approach that minimizes the learning curve for the data integration tool.  This is what CSVexpress (www.CSVexpress.com) is all about!  It is based on expressor Studio, a data integration tool I’ve been reviewing over the last several months. With CSVexpress, moving data between data sources can be as simple as providing the database connection details, describing the structure of the incoming and outgoing data and then connecting two pre-programmed operators.   There’s no need to learn the intricacies of the data integration tool or to write code.  Let’s look at an example. Suppose I have a comma separated value data file with data similar to the following, which is a listing of terminated employees that includes their hiring and termination date, department, job description, and final salary. EMP_ID,STRT_DATE,END_DATE,JOB_ID,DEPT_ID,SALARY 102,13-JAN-93,24-JUL-98 17:00,Programmer,60,"$85,000" 101,21-SEP-89,27-OCT-93 17:00,Account Representative,110,"$65,000" 103,28-OCT-93,15-MAR-97 17:00,Account Manager,110,"$75,000" 304,17-FEB-96,19-DEC-99 17:00,Marketing,20,"$45,000" 333,24-MAR-98,31-DEC-99 17:00,Data Entry Clerk,50,"$35,000" 100,17-SEP-87,17-JUN-93 17:00,Administrative Assistant,90,"$40,000" 334,24-MAR-98,31-DEC-98 17:00,Sales Representative,80,"$40,000" 400,01-JAN-99,31-DEC-99 17:00,Sales Manager,80,"$55,000" Notice the concise format used for the date values, the fact that the termination date includes both date and time information, and that the salary is clearly identified as money by the dollar sign and digit grouping.  In moving this data to a database table I want to express the dates using a format that includes the century since it’s obvious that this listing could include employees who left the company in both the 20th and 21st centuries, and I want the salary to be stored as a decimal value without the currency symbol and grouping character.  Most data integration tools would require coding within a transformation operation to effect these changes, but not expressor Studio.  Directives for these modifications are included in the description of the incoming data. Besides starting the expressor Studio tool and opening a project, the first step is to create connection artifacts, which describe to expressor where data is stored.  For this example, two connection artifacts are required: a file connection, which encapsulates the file system location of my file; and a database connection, which encapsulates the database connection information.  With expressor Studio, I use wizards to create these artifacts. First click New Connection > File Connection in the Home tab of expressor Studio’s ribbon bar, which starts the File Connection wizard.  In the first window, I enter the path to the directory that contains the input file.  Note that the file connection artifact only specifies the file system location, not the name of the file. Then I click Next and enter a meaningful name for this connection artifact; clicking Finish closes the wizard and saves the artifact. 
To create the Database Connection artifact, I must know the location of, or instance name, of the target database and have the credentials of an account with sufficient privileges to write to the target table.  To use expressor Studio’s features to the fullest, this account should also have the authority to create a table. I click the New Connection > Database Connection in the Home tab of expressor Studio’s ribbon bar, which starts the Database Connection wizard.  expressor Studio includes high-performance drivers for many relational database management systems, so I can simply make a selection from the “Supplied database drivers” drop down control.  If my desired RDBMS isn’t listed, I can optionally use an existing ODBC DSN by selecting the “Existing DSN” radio button. In the following window, I enter the connection details.  With Microsoft SQL Server, I may choose to use Windows Authentication rather than rather than account credentials.  After clicking Next, I enter a meaningful name for this connection artifact and clicking Finish closes the wizard and saves the artifact. Now I create a schema artifact, which describes the structure of the file data.  When expressor reads a file, all data fields are typed as strings.  In some use cases this may be exactly what is needed and there is no need to edit the schema artifact.  But in this example, editing the schema artifact will be used to specify how the data should be transformed; that is, reformat the dates to include century designations, change the employee and job ID’s to integers, and convert the salary to a decimal value. Again a wizard is used to create the schema artifact.  I click New Schema > Delimited Schema in the Home tab of expressor Studio’s ribbon bar, which starts the Database Connection wizard.  In the first window, I click Get Data from File, which then displays a listing of the file connections in the project.  When I click on the file connection I previously created, a browse window opens to this file system location; I then select the file and click Open, which imports 10 lines from the file into the wizard. I now view the file’s content and confirm that the appropriate delimiter characters are selected in the “Field Delimiter” and “Record Delimiter” drop down controls; then I click Next. Since the input file includes a header row, I can easily indicate that fields in the file should be identified through the corresponding header value by clicking “Set All Names from Selected Row. “ Alternatively, I could enter a different identifier into the Field Details > Name text box.  I click Next and enter a meaningful name for this schema artifact; clicking Finish closes the wizard and saves the artifact. Now I open the schema artifact in the schema editor.  When I first view the schema’s content, I note that the types of all attributes in the Semantic Type (the right-hand panel) are strings and that the attribute names are the same as the field names in the data file.  To change an attribute’s name and type, I highlight the attribute and click Edit in the Attributes grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Attribute window; I can change the attribute name and select the desired type from the “Data type” drop down control.  In this example, I change the name of each attribute to the name of the corresponding database table column (EmployeeID, StartingDate, TerminationDate, JobDescription, DepartmentID, and FinalSalary).  
Then for the EmployeeID and DepartmentID attributes, I select Integer as the data type, for the StartingDate and TerminationDate attributes, I select Datetime as the data type, and for the FinalSalary attribute, I select the Decimal type. But I can do much more in the schema editor.  For the datetime attributes, I can set a constraint that ensures that the data adheres to some predetermined specifications; a starting date must be later than January 1, 1980 (the date on which the company began operations) and a termination date must be earlier than 11:59 PM on December 31, 1999.  I simply select the appropriate constraint and enter the value (1980-01-01 00:00 as the starting date and 1999-12-31 11:59 as the termination date). As a last step in setting up these datetime conversions, I edit the mapping, describing the format of each datetime type in the source file. I highlight the mapping line for the StartingDate attribute and click Edit Mapping in the Mappings grouping on the Schema > Edit tab of the editor’s ribbon bar.  This opens the Edit Mapping window in which I either enter, or select, a format that describes how the datetime values are represented in the file.  Note the use of Y01 as the syntax for the year.  This syntax is the indicator to expressor Studio to derive the century by setting any year later than 01 to the 20th century and any year before 01 to the 21st century.  As each datetime value is read from the file, the year values are transformed into century and year values. For the TerminationDate attribute, my format also indicates that the datetime value includes hours and minutes. And now to the Salary attribute. I open its mapping and in the Edit Mapping window select the Currency tab and the “Use currency” check box.  This indicates that the file data will include the dollar sign (or in Europe the Pound or Euro sign), which should be removed. And on the Grouping tab, I select the “Use grouping” checkbox and enter 3 into the “Group size” text box, a comma into the “Grouping character” text box, and a decimal point into the “Decimal separator” character text box. These entries allow the string to be properly converted into a decimal value. By making these entries into the schema that describes my input file, I’ve specified how I want the data transformed prior to writing to the database table and completely removed the requirement for coding within the data integration application itself. Assembling the data integration application is simple.  Onto the canvas I drag the Read File and Write Table operators, connecting the output of the Read File operator to the input of the Write Table operator. Next, I select the Read File operator and its Properties panel opens on the right-hand side of expressor Studio.  For each property, I can select an appropriate entry from the corresponding drop down control.  Clicking on the button to the right of the “File name” text box opens the file system location specified in the file connection artifact, allowing me to select the appropriate input file.  I indicate also that the first row in the file, the header row, should be skipped, and that any record that fails one of the datetime constraints should be skipped. I then select the Write Table operator and in its Properties panel specify the database connection, normal for the “Mode,” and the “Truncate” and “Create Missing Table” options.  
If my target table does not yet exist, expressor will create the table using the information encapsulated in the schema artifact assigned to the operator. The last task needed to complete the application is to create the schema artifact used by the Write Table operator.  This is extremely easy as another wizard is capable of using the schema artifact assigned to the Read Table operator to create a schema artifact for the Write Table operator.  In the Write Table Properties panel, I click the drop down control to the right of the “Schema” property and select “New Table Schema from Upstream Output…” from the drop down menu. The wizard first displays the table description and in its second screen asks me to select the database connection artifact that specifies the RDBMS in which the target table will exist.  The wizard then connects to the RDBMS and retrieves a list of database schemas from which I make a selection.  The fourth screen gives me the opportunity to fine tune the table’s description.  In this example, I set the width of the JobDescription column to a maximum of 40 characters and select money as the type of the LastSalary column.  I also provide the name for the table. This completes development of the application.  The entire application was created through the use of wizards and the required data transformations specified through simple constraints and specifications rather than through coding.  To develop this application, I only needed a basic understanding of expressor Studio, a level of expertise that can be gained by working through a few introductory tutorials.  expressor Studio is as close to a point and click data integration tool as one could want and I urge you to try this product if you have a need to move data between files or from files to database tables. Check out CSVexpress in more detail.  It offers a few basic video tutorials and a preview of expressor Studio 3.5, which will support the reading and writing of data into Salesforce.com. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
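    For readers who prefer to see the end result in T-SQL: the post never shows the DDL that expressor generates, but based on the column names and types described above the created table would presumably look something like this (the table name here is invented for illustration):

        -- Illustrative only; expressor Studio generates the real DDL from the schema artifact.
        CREATE TABLE dbo.TerminatedEmployees
        (
            EmployeeID      int         NOT NULL,
            StartingDate    datetime    NOT NULL,
            TerminationDate datetime    NOT NULL,
            JobDescription  varchar(40) NULL,
            DepartmentID    int         NOT NULL,
            FinalSalary     money       NULL
        );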

    Read the article

  • My software is hosted on a "bad" website. Can I do anything about it?

    - by Abluescarab
    The software I've created is hosted on what you could call a "bad" website. It's hard to explain, so I'll just provide an example. I've made a free password generator. This, along with most of my other FREE software, is available on this website. This is their description of my software: Platform: 7/7 x64/Windows 2K/XP/2003/Vista Size: 61.6 Mb License: Trial File Type: .7z Last Updated: June 4th, 2011, 15:38 UTC Avarage Download Speed: 6226 Kb/s Last Week Downloads: 476 Toatal Downloads: 24908 Not only is the size completely skewed, it is not trial software, it's free software. The thing is that it's not the description I'm worried about--it's the download links. The website is a scam website. They apparently link to "cracks" and "keygens", but not only is that in itself illegal, they actually link to fake download websites that give you viruses and charge your credit card. Just to list things that are wrong with this website: they claim all software is paid software then offer downloads for keygens and cracks; they fake all details about the program and any program reviews and ratings; they and the downloads site they link to are probably run by the same person, so they make money off of these lies. I'm only a teenager with no means to pursue legal action. This means that, unfortunately, I can't do anything that will actually get results. I'd like my software to only be downloaded off my personal website. I have links to four legitimate locations to download my software and that's it. Essentially, is there anything I can do about this? As I said above, I can't pursue legal action, but is there some way I can discourage traffic to that website by blacklisting it or something? Can I make a claim on MY website to only download my software from the links I provide? Or should I just pay no mind? Because, honestly, it's a bit of a ways back in Google results. Thank you ahead of time.

    Read the article

  • "ldap_add: Naming violation (64)" error when configuring OpenLDAP

    - by user3215
    I am following the Ubuntu server guide to configure OpenLDAP on an Ubuntu 10.04 server, but can not get it to work. When I try to use

        sudo ldapadd -x -D cn=admin,dc=don,dc=com -W -f frontend.ldif

    I'm getting the following error:

        Enter LDAP Password: <entered 'secret' as password>
        adding new entry "dc=don,dc=com"
        ldap_add: Naming violation (64)
            additional info: value of single-valued naming attribute 'dc' conflicts with value present in entry

    Again when I try to do the same, I'm getting the following error:

        root@avy-desktop:/home/avy# sudo ldapadd -x -D cn=admin,dc=don,dc=com -W -f frontend.ldif
        Enter LDAP Password:
        ldap_bind: Invalid credentials (49)

    Here is the backend.ldif file:

        # Load dynamic backend modules
        dn: cn=module,cn=config
        objectClass: olcModuleList
        cn: module
        olcModulepath: /usr/lib/ldap
        olcModuleload: back_hdb

        # Database settings
        dn: olcDatabase=hdb,cn=config
        objectClass: olcDatabaseConfig
        objectClass: olcHdbConfig
        olcDatabase: {1}hdb
        olcSuffix: dc=don,dc=com
        olcDbDirectory: /var/lib/ldap
        olcRootDN: cn=admin,dc=don,dc=com
        olcRootPW: secret
        olcDbConfig: set_cachesize 0 2097152 0
        olcDbConfig: set_lk_max_objects 1500
        olcDbConfig: set_lk_max_locks 1500
        olcDbConfig: set_lk_max_lockers 1500
        olcDbIndex: objectClass eq
        olcLastMod: TRUE
        olcDbCheckpoint: 512 30
        olcAccess: to attrs=userPassword by dn="cn=admin,dc=don,dc=com" write by anonymous auth by self write by * none
        olcAccess: to attrs=shadowLastChange by self write by * read
        olcAccess: to dn.base="" by * read
        olcAccess: to * by dn="cn=admin,dc=don,dc=com" write by * read

    frontend.ldif file:

        # Create top-level object in domain
        dn: dc=don,dc=com
        objectClass: top
        objectClass: dcObject
        objectclass: organization
        o: Example Organization
        dc: Example
        description: LDAP Example

        # Admin user.
        dn: cn=admin,dc=don,dc=com
        objectClass: simpleSecurityObject
        objectClass: organizationalRole
        cn: admin
        description: LDAP administrator
        userPassword: secret

        dn: ou=people,dc=don,dc=com
        objectClass: organizationalUnit
        ou: people

        dn: ou=groups,dc=don,dc=com
        objectClass: organizationalUnit
        ou: groups

        dn: uid=john,ou=people,dc=don,dc=com
        objectClass: inetOrgPerson
        objectClass: posixAccount
        objectClass: shadowAccount
        uid: john
        sn: Doe
        givenName: John
        cn: John Doe
        displayName: John Doe
        uidNumber: 1000
        gidNumber: 10000
        userPassword: password
        gecos: John Doe
        loginShell: /bin/bash
        homeDirectory: /home/john
        shadowExpire: -1
        shadowFlag: 0
        shadowWarning: 7
        shadowMin: 8
        shadowMax: 999999
        shadowLastChange: 10877
        mail: [email protected]
        postalCode: 31000
        l: Toulouse
        o: Example
        mobile: +33 (0)6 xx xx xx xx
        homePhone: +33 (0)5 xx xx xx xx
        title: System Administrator
        postalAddress:
        initials: JD

        dn: cn=example,ou=groups,dc=don,dc=com
        objectClass: posixGroup
        cn: example
        gidNumber: 10000

    Can anyone help me?
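    One observation on the error text itself: "value of single-valued naming attribute 'dc' conflicts with value present in entry" points at the dc attribute of the first entry, whose dn is dc=don,dc=com while its dc value is "Example". A top-level entry whose dc matches its dn would look roughly like this (assuming don.com really is the intended suffix; adjust o to taste):

        # Top-level object whose dc value matches the dn
        dn: dc=don,dc=com
        objectClass: top
        objectClass: dcObject
        objectclass: organization
        o: Don Organization
        dc: don
        description: LDAP Example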

    Read the article

  • Couldn't find package - But package is listed in the Packages file

    - by Chris
    (Quoted items are redacted elements) I am using a private repository and am currently trying to repackage some 3rd-party packages. I extract the package, make a few modifications (usually just the control files, to fit with company policy, though sometimes file install locations too - not in this case) and repackage (and usually rename). Normally I copy the files into a new blank debhelper project and reconstruct the package. However, with a recent one I am attempting to convert, some libraries and other pieces aren't linking properly (I did copy the postinst, postrm, and preinst files along with all DEBIAN files exactly); the original package worked, but my repackage doesn't, despite providing the same files in the same locations and the same postinst and preinst. So I was attempting to just modify the current package's control files (as the original package is not very good, will not list in our repository, and getting a better one from the 3rd party is not an option). I also renamed the package. I did the following:

        dpkg-deb -R "directory"
        (modify DEBIAN/control)
        dpkg-deb -b "directory" "package name I want"

    I did this and put it in our repository. The package shows up in the "Packages" file on the repository, and running apt-get update on the client side shows the package in:

        /var/lib/apt/lists/"server"_"location"_Packages

    However, when I do an apt-get install on the package name (as listed in the Packages file - I did a copy paste) it says it can't find the package. Same with an apt-cache search. The Packages listing is as follows (name redacted):

        Package: "package name"
        Priority: extra
        Section: unknown
        Maintainer: "maintainer"
        Architecture: any
        Version: 1.0-lucid5
        Depends: libc
        Filename: "directory"/"package_filename"
        Size: 2206292
        MD5sum: "md5sum"
        SHA1: "sha key"
        SHA256: "sha256 key"
        Description: "description"

    I am running as sudo (and tried as root as well). I don't understand why apt-get won't see the package. Can you point out any flaws in what I have done, or offer some help on getting apt-get to properly see the package, or perhaps an alternative? I am not even sure if this is a valid way to repackage something. Thanks.
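    Two notes for anyone comparing setups. First, a sketch of the unpack/edit/rebuild cycle described above, with hypothetical names standing in for the redacted ones:

        # Hypothetical names - substitute your own package and directory names.
        dpkg-deb -R original.deb mypkg-dir                          # unpack contents plus the DEBIAN/ control files
        editor mypkg-dir/DEBIAN/control                             # adjust Package:, Version:, etc.
        dpkg-deb -b mypkg-dir mypkg-custom_1.0-lucid5_amd64.deb     # rebuild under the new name

    Second, one thing that stands out in the Packages listing is "Architecture: any"; in a binary package's control file the architecture is normally a concrete value such as amd64, i386 or all, and "any" usually only appears in source packages. Whether that is the cause here is not certain, but it may be worth checking.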

    Read the article

  • SQL SERVER – Four Tutorial for SQL Server 2012 New Features

    - by pinaldave
    One of the most common questions I receive on my Facebook page is whether there is any tutorial for the new and enhanced features of SQL Server 2012. I see this demand increasing as SQL Server 2012 is adopted more and more. Here is a list of four tutorials created specifically for SQL Server 2012 by Microsoft.

    Multidimensional Modeling (Adventure Works Tutorial): This tutorial teaches you how to develop and deploy an Analysis Services project that enables the employees of Adventure Works Cycles to analyze various aspects of their business.

    Tabular Modeling (Adventure Works Tutorial): This tutorial teaches you how to create a SQL Server 2012 Analysis Services tabular model that enables sales and marketing teams to easily analyze internet sales data in the AdventureWorksDW2012 data warehouse. You will build the tabular model in SQL Server Data Tools.

    Tutorials and Demos for Power View: Create Power View reports and explore Power View features. View demos, videos, and tutorials that help you get started quickly with Power View and successfully build reports with interactive filters and visualizations such as bubble charts, tiles, and cards.

    Tutorial: Using the hierarchyid Data Type: This tutorial is intended for users who are experienced with Transact-SQL but are new to the hierarchyid data type. In this tutorial, you convert an existing table to a hierarchical structure, and you also create a new table to store and manage hierarchical data efficiently.

    Note: The description of each course is taken from the original course description. You will need to install the SQL Server 2012 AdventureWorks databases for all of these tutorials. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Training, T SQL, Technology
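    As a quick taste of the last tutorial, hierarchyid values can be created and queried roughly like this; a small illustrative sketch, not an excerpt from the tutorial itself:

        -- Minimal hierarchyid example (illustrative only)
        CREATE TABLE dbo.Org
        (
            Node  hierarchyid PRIMARY KEY,
            Title nvarchar(50) NOT NULL
        );

        DECLARE @root hierarchyid = hierarchyid::GetRoot();
        INSERT dbo.Org VALUES (@root, N'CEO');
        INSERT dbo.Org VALUES (@root.GetDescendant(NULL, NULL), N'CTO');  -- first child of the root

        SELECT Node.ToString() AS NodePath, Title FROM dbo.Org;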

    Read the article

  • wireless detected but Can't connect to network, Wifi card intel pro wireless N100 BGN

    - by Alexandre777
    Thanks in advance, I installed Linux Mint kernel 3.0.0-12 x64 ,wireless is detected and I configure it in network manager but can't coonect, there are the results of command line: $ iwconfig lo no wireless extensions. eth0 no wireless extensions. wlan0 IEEE 802.11bgn ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=off Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off $ sudo lshw -C network** *-network description: Wireless interface product: Centrino Wireless-N 1000 vendor: Intel Corporation physical id: 0 bus info: pci@0000:03:00.0 logical name: wlan0 version: 00 serial: 00:1e:64:51:c9:d6 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=iwlagn driverversion=3.0.0-12-generic firmware=39.31.5.1 build 35138 latency=0 link=no multicast=yes wireless=IEEE 802.11bgn resources: irq:48 memory:d7400000-d7401fff *-network description: Ethernet interface product: AR8131 Gigabit Ethernet vendor: Atheros Communications physical id: 0 bus info: pci@0000:06:00.0 logical name: eth0 version: c0 serial: 48:5b:39:99:8f:16 size: 100Mbit/s capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress vpd bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=atl1c driverversion=1.0.1.0-NAPI duplex=full firmware=N/A ip=192.168.1.2 latency=0 link=yes multicast=yes port=twisted pair speed=100Mbit/s resources: irq:51 memory:d3800000-d383ffff ioport:8000(size=128 $ rfkill list all 0: phy0: Wireless LAN Soft blocked: no Hard blocked: no Thank you.

    Read the article

  • How to Configure OpenLDAP on Ubuntu 10.04 Server

    - by user3215
    I am following the Ubuntu server guide to configure OpenLDAP on an Ubuntu 10.04 server, but can not get it to work. When I try to use sudo ldapadd -x -D cn=admin,dc=don,dc=com -W -f frontend.ldif I'm getting the following error: Enter LDAP Password: <entered 'secret' as password> adding new entry "dc=don,dc=com" ldap_add: Naming violation (64) additional info: value of single-valued naming attribute 'dc' conflicts with value present in entry Again when I try to do the same, I'm getting the following error: root@avy-desktop:/home/avy# sudo ldapadd -x -D cn=admin,dc=don,dc=com -W -f frontend.ldif Enter LDAP Password: ldap_bind: Invalid credentials (49) Here is the backend.ldif file: # Load dynamic backend modules dn: cn=module,cn=config objectClass: olcModuleList cn: module olcModulepath: /usr/lib/ldap olcModuleload: back_hdb # Database settings dn: olcDatabase=hdb,cn=config objectClass: olcDatabaseConfig objectClass: olcHdbConfig olcDatabase: {1}hdb olcSuffix: dc=don,dc=com olcDbDirectory: /var/lib/ldap olcRootDN: cn=admin,dc=don,dc=com olcRootPW: secret olcDbConfig: set_cachesize 0 2097152 0 olcDbConfig: set_lk_max_objects 1500 olcDbConfig: set_lk_max_locks 1500 olcDbConfig: set_lk_max_lockers 1500 olcDbIndex: objectClass eq olcLastMod: TRUE olcDbCheckpoint: 512 30 olcAccess: to attrs=userPassword by dn="cn=admin,dc=don,dc=com" write by anonymous auth by self write by * none olcAccess: to attrs=shadowLastChange by self write by * read olcAccess: to dn.base="" by * read olcAccess: to * by dn="cn=admin,dc=don,dc=com" write by * read frontend.ldif file: # Create top-level object in domain dn: dc=don,dc=com objectClass: top objectClass: dcObject objectclass: organization o: Example Organization dc: Example description: LDAP Example # Admin user. dn: cn=admin,dc=don,dc=com objectClass: simpleSecurityObject objectClass: organizationalRole cn: admin description: LDAP administrator userPassword: secret dn: ou=people,dc=don,dc=com objectClass: organizationalUnit ou: people dn: ou=groups,dc=don,dc=com objectClass: organizationalUnit ou: groups dn: uid=john,ou=people,dc=don,dc=com objectClass: inetOrgPerson objectClass: posixAccount objectClass: shadowAccount uid: john sn: Doe givenName: John cn: John Doe displayName: John Doe uidNumber: 1000 gidNumber: 10000 userPassword: password gecos: John Doe loginShell: /bin/bash homeDirectory: /home/john shadowExpire: -1 shadowFlag: 0 shadowWarning: 7 shadowMin: 8 shadowMax: 999999 shadowLastChange: 10877 mail: [email protected] postalCode: 31000 l: Toulouse o: Example mobile: +33 (0)6 xx xx xx xx homePhone: +33 (0)5 xx xx xx xx title: System Administrator postalAddress: initials: JD dn: cn=example,ou=groups,dc=don,dc=com objectClass: posixGroup cn: example gidNumber: 10000 Can anyone help me?

    Read the article

  • How to configure ldap on ubuntu 10.04 server

    - by user3215
    I am following the link to configure ldap on ubuntu 10.04 server but could not. when I try to use sudo ldapadd -x -D cn=admin,dc=don,dc=com -W -f frontend.ldif I'm getting the following error: Enter LDAP Password: <entered 'secret' as password> adding new entry "dc=don,dc=com" ldap_add: Naming violation (64) additional info: value of single-valued naming attribute 'dc' conflicts with value present in entry Again when I try to do the same, I'm getting the following error: root@avy-desktop:/home/avy# sudo ldapadd -x -D cn=admin,dc=don,dc=com -W -f frontend.ldif Enter LDAP Password: ldap_bind: Invalid credentials (49) Here is the backend.ldif file # Load dynamic backend modules dn: cn=module,cn=config objectClass: olcModuleList cn: module olcModulepath: /usr/lib/ldap olcModuleload: back_hdb # Database settings dn: olcDatabase=hdb,cn=config objectClass: olcDatabaseConfig objectClass: olcHdbConfig olcDatabase: {1}hdb olcSuffix: dc=don,dc=com olcDbDirectory: /var/lib/ldap olcRootDN: cn=admin,dc=don,dc=com olcRootPW: secret olcDbConfig: set_cachesize 0 2097152 0 olcDbConfig: set_lk_max_objects 1500 olcDbConfig: set_lk_max_locks 1500 olcDbConfig: set_lk_max_lockers 1500 olcDbIndex: objectClass eq olcLastMod: TRUE olcDbCheckpoint: 512 30 olcAccess: to attrs=userPassword by dn="cn=admin,dc=don,dc=com" write by anonymous auth by self write by * none olcAccess: to attrs=shadowLastChange by self write by * read olcAccess: to dn.base="" by * read olcAccess: to * by dn="cn=admin,dc=don,dc=com" write by * read frontend.ldif file: # Create top-level object in domain dn: dc=don,dc=com objectClass: top objectClass: dcObject objectclass: organization o: Example Organization dc: Example description: LDAP Example # Admin user. dn: cn=admin,dc=don,dc=com objectClass: simpleSecurityObject objectClass: organizationalRole cn: admin description: LDAP administrator userPassword: secret dn: ou=people,dc=don,dc=com objectClass: organizationalUnit ou: people dn: ou=groups,dc=don,dc=com objectClass: organizationalUnit ou: groups dn: uid=john,ou=people,dc=don,dc=com objectClass: inetOrgPerson objectClass: posixAccount objectClass: shadowAccount uid: john sn: Doe givenName: John cn: John Doe displayName: John Doe uidNumber: 1000 gidNumber: 10000 userPassword: password gecos: John Doe loginShell: /bin/bash homeDirectory: /home/john shadowExpire: -1 shadowFlag: 0 shadowWarning: 7 shadowMin: 8 shadowMax: 999999 shadowLastChange: 10877 mail: [email protected] postalCode: 31000 l: Toulouse o: Example mobile: +33 (0)6 xx xx xx xx homePhone: +33 (0)5 xx xx xx xx title: System Administrator postalAddress: initials: JD dn: cn=example,ou=groups,dc=don,dc=com objectClass: posixGroup cn: example gidNumber: 10000 Anybody could help me?

    Read the article

  • Cannot enable wireless on an Intel WifiLink 1000 on an Lenovo Ideapad z570

    - by Brij
    I am using Ubuntu 11.10 on a Lenovo IdeaPad Z570. My wireless internet is not working. I have ensured that the wireless switch is on. Under Windows 7 wireless works great; however, Ubuntu 11.10 is not allowing me to enable the wireless connection. I have run the following command and here is the status:

        sudo lshw -class network
        *-network DISABLED
             description: Wireless interface
             product: Centrino Wireless-N 1000
             vendor: Intel Corporation
             physical id: 0
             bus info: pci@0000:02:00.0
             logical name: wlan0
             version: 00
             serial: 74:e5:0b:1c:a4:a4
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
             configuration: broadcast=yes driver=iwlagn driverversion=3.0.0-12-generic firmware=39.31.5.1 build 35138 latency=0 link=no multicast=yes wireless=IEEE 802.11bgn
             resources: irq:42 memory:d0500000-d0501fff
        *-network
             description: Ethernet interface
             product: RTL8101E/RTL8102E PCI Express Fast Ethernet controller
             vendor: Realtek Semiconductor Co., Ltd.
             physical id: 0
             bus info: pci@0000:03:00.0
             logical name: eth0
             version: 05
             serial: f0:de:f1:64:b6:62
             size: 10Mbit/s
             capacity: 100Mbit/s
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress msix vpd bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=half firmware=rtl_nic/rtl8105e-1.fw latency=0 link=no multicast=yes port=MII speed=10Mbit/s
             resources: irq:41 ioport:2000(size=256) memory:d0404000-d0404fff memory:d0400000-d0403fff

    Here is the rfkill list all output:

        rfkill list all
        0: ideapad_wlan: Wireless LAN
            Soft blocked: no
            Hard blocked: no
        1: phy0: Wireless LAN
            Soft blocked: no
            Hard blocked: no
        2: acer-wireless: Wireless LAN
            Soft blocked: yes
            Hard blocked: no

    Note: under Windows 7, the wireless card properties show it as an Intel WiFi Link 1000 BGN. Could someone help me fix this issue?
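    One detail in the rfkill output above: the acer-wireless entry reports "Soft blocked: yes". Whether that is the actual cause is not certain, but clearing the soft block is a cheap first thing to try:

        sudo rfkill unblock all      # clear any soft blocks
        rfkill list all              # acer-wireless should now report "Soft blocked: no"
        sudo ifconfig wlan0 up       # try to bring the DISABLED interface back up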

    Read the article

  • Connecting MS SQL using freetds and unixodbc: isql - no default driver specified

    - by Dejan
    I am trying to connect to a MS SQL database using FreeTDS and unixODBC. I have read various guides on how to do it, but none works for me. When I try to connect to the database using the isql tool, I get the following error:

        $ isql -v TS username password
        [IM002][unixODBC][Driver Manager]Data source name not found, and no default driver specified
        [ISQL]ERROR: Could not SQLConnect

    Has anybody already successfully established a connection to a MS SQL database using FreeTDS and unixODBC on Ubuntu 12.04? I would really appreciate some help. Below is the procedure I used to configure FreeTDS and unixODBC. Thanks for your help in advance!

    Procedure: first, I installed the following packages

        sudo apt-get install unixodbc unixodbc-dev freetds-dev tdsodbc

    and configured FreeTDS as follows:

        --- /etc/freetds/freetds.conf ---
        [TS]
        host = SERVER
        port = 1433
        tds version = 7.0
        client charset = UTF-8

    Using the tsql tool I can successfully connect to the database by executing

        tsql -S TS -U username -P password

    As I need an ODBC connection, I configured odbcinst.ini as follows:

        --- /etc/odbcinst.ini ---
        [FreeTDS]
        Description = FreeTDS
        Driver = /usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so
        Setup = /usr/lib/x86_64-linux-gnu/odbc/libtdsS.so
        FileUsage = 1
        CPTimeout =
        CPResuse =
        client charset = utf-8

    and odbc.ini as follows:

        --- /etc/odbc.ini ---
        [TS]
        Description = "test"
        Driver = FreeTDS
        Servername = SERVER
        Server = SERVER
        Port = 1433
        Database = DBNAME
        Trace = No

    Trying to connect to the database using the isql tool with this configuration results in the following error:

        $ isql -v TS username password
        [IM002][unixODBC][Driver Manager]Data source name not found, and no default driver specified
        [ISQL]ERROR: Could not SQLConnect
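    A possibly useful check, since unixODBC only reads its configuration from fixed locations and "no default driver specified" often means the driver manager never saw the files that were edited: odbcinst can report which files are actually in use and what it parsed from them.

        odbcinst -j              # shows the odbcinst.ini / odbc.ini paths unixODBC really reads
        odbcinst -q -d           # lists known drivers; [FreeTDS] should appear
        odbcinst -q -s           # lists known data sources; [TS] should appear
        isql -v TS username password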

    Read the article

  • Software Architecture and Design vs Psychology of HCI class

    - by Joey Green
    I have two classes to choose from and I want to get an opinion from more experienced game devs on which might be better for someone who wants to be an indie game dev. The first is a Software Architecture and Design course and the second is a course titled Psychology of HCI. I have previously taken a Software Design course that was focused only on design patterns, and I have also taken an Introduction to HCI course.

    Software Architecture and Design description: Topics include software architectures, methodologies, model representations, component-based design, patterns, frameworks, CASE-based designs, and case studies.

    Psychology of HCI description: Exploration of psychological factors that interact with computer interface usability. Interface design techniques and usability evaluation methods are emphasized.

    I know I would find both interesting, but my concern is really which one might be easier to pick up on my own. I know HCI is relevant to game dev, but I am unsure whether the topics in the Software Architecture class would be more for big software projects that go beyond the scope of games. Also, I'm not able to take both because they overlap.

    Read the article

  • mount another drive to the same directory

    - by Ken Autotron
    I recently purchased a server that was advertised as 2TB (2 1TB drives) in size, when I use it it reports only one of the drives, I would like to be able to use both as if one drive. here is the specs... sudo lshw -C disk *-disk description: ATA Disk product: TOSHIBA DT01ACA1 vendor: Toshiba physical id: 0.0.0 bus info: scsi@1:0.0.0 logical name: /dev/sda version: MS2O serial: 13EJ81XPS size: 931GiB (1TB) capabilities: partitioned partitioned:dos configuration: ansiversion=5 signature=0005b3dd *-disk description: ATA Disk product: TOSHIBA DT01ACA1 vendor: Toshiba physical id: 0.0.0 bus info: scsi@4:0.0.0 logical name: /dev/sdb version: MS2O serial: 13OX3TKPS size: 931GiB (1TB) capabilities: partitioned partitioned:dos configuration: ansiversion=5 signature=00030e86 and fdisk -l Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0x00030e86 Device Boot Start End Blocks Id System /dev/sdb1 * 4096 41947135 20971520 fd Linux raid autodetect /dev/sdb2 41947136 1952468991 955260928 fd Linux raid autodetect /dev/sdb3 1952468992 1953519615 525312 82 Linux swap / Solaris Disk /dev/sda: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0x0005b3dd Device Boot Start End Blocks Id System /dev/sda1 * 4096 41947135 20971520 fd Linux raid autodetect /dev/sda2 41947136 1952468991 955260928 fd Linux raid autodetect /dev/sda3 1952468992 1953519615 525312 82 Linux swap / Solaris Disk /dev/md2: 978.2 GB, 978187124736 bytes 2 heads, 4 sectors/track, 238815216 cylinders, total 1910521728 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0x00000000 Disk /dev/md2 doesn't contain a valid partition table Disk /dev/md1: 21.5 GB, 21474770944 bytes 2 heads, 4 sectors/track, 5242864 cylinders, total 41942912 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 4096 bytes I/O size (minimum/optimal): 4096 bytes / 4096 bytes Disk identifier: 0x00000000 Disk /dev/md1 doesn't contain a valid partition table is it possible to mount both drives to say /Home/ so I would have 2TB of usable space?
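    Judging from the fdisk output above (the "Linux raid autodetect" partitions and the /dev/md1 and /dev/md2 devices), the two disks appear to be assembled into Linux software RAID arrays already, and if they are mirrored (RAID1) that would explain seeing only about 1TB of usable space. A few read-only commands can confirm what the arrays are doing; the device names below are taken from the output above:

        cat /proc/mdstat                 # shows the existing arrays and their RAID level
        sudo mdadm --detail /dev/md2     # reports level, size and member disks of the large array
        df -h                            # shows where /dev/md1 and /dev/md2 are mounted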

    Read the article

  • How to start a s3ql script automatically on boot?

    - by ks78
    I've been experimenting with s3ql on Ubuntu 10.04, using it to mount Amazon S3 buckets. However, I'd really like it to mount them automatically. Does anyone know how to do that? I've been working on a script which works when it's run from the command line, but for some reason I can't get it to run automatically on boot. Does anyone have any ideas? Here's my script:

        #! /bin/sh
        # /etc/init.d/s3ql
        #
        ### BEGIN INIT INFO
        # Provides:          s3ql
        # Required-Start:    $remote_fs $syslog
        # Required-Stop:     $remote_fs $syslog
        # Default-Start:     2 3 4 5
        # Default-Stop:      0 1 6
        # Short-Description: Start daemon at boot time
        # Description:       Enable service provided by daemon.
        ### END INIT INFO

        case "$1" in
          start)
            # Redirect stdout and stderr into the system log
            DIR=$(mktemp -d)
            mkfifo "$DIR/LOG_FIFO"
            logger -t s3ql -p local0.info < "$DIR/LOG_FIFO" &
            exec > "$DIR/LOG_FIFO"
            exec 2>&1
            rm -rf "$DIR"
            modprobe fuse
            fsck.s3ql --batch s3://mybucket
            exec mount.s3ql --allow-other s3://mybucket /mnt/s3fs
            ;;
          stop)
            umount.s3ql /mnt/s3fs
            ;;
          *)
            echo "Usage: /etc/init.d/s3ql{start|stop}"
            exit 1
            ;;
        esac

        exit 0
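    For comparison, on 10.04 an init script is only run at boot once it is executable and registered with the rc system; assuming the script really lives at /etc/init.d/s3ql, the registration steps would be:

        sudo chmod +x /etc/init.d/s3ql     # the script must be executable
        sudo update-rc.d s3ql defaults     # create the rc?.d symlinks so it runs at boot
        sudo service s3ql start            # test it through the same mechanism init uses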

    Read the article

  • Cannot make Wi-Fi hotspot

    - by Junaid
    I want to make my laptop as wi-fi hotspot. To do so, I pressed button Settings-Network-Wireless-Use as Hotspot.. Then it creates a wireless network connection 'Hostspot' and connects. But it gets disconnected just after making connection. Here is my network hardware info: sudo lshw -C network *-network description: Wireless interface product: WiFi Link 5100 vendor: Intel Corporation physical id: 0 bus info: pci@0000:04:00.0 logical name: wlan0 version: 00 serial: width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=iwlagn driverversion=3.0.0-12-generic firmware=8.83.5.1 build 33692 latency=0 link=no multicast=yes wireless=IEEE 802.11abgn resources: irq:47 memory:d9200000-d9201fff *-network description: Ethernet interface product: RTL8111/8168B PCI Express Gigabit Ethernet controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:05:00.0 logical name: eth0 version: 02 serial: size: 100Mbit/s capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=N/A ip=192.168.1.4 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s resources: irq:45 ioport:2000(size=256) memory:d5010000-d5010fff memory:d5000000-d500ffff memory:d5020000-d502ffff My machine is running on Ubuntu 11.10 Can anybody help in it? Thanks
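    One read-only check that may narrow this down: the hotspot feature depends on the wireless card and driver supporting the required interface mode, and the card's advertised modes can be listed with iw (it may need to be installed first). Whether the WiFi Link 5100/iwlagn combination on 11.10 supports it is exactly what this would show:

        # List the interface modes the card claims to support (look for entries such as AP or Ad-Hoc)
        iw list | grep -A 10 "Supported interface modes"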

    Read the article

  • Posting to Facebook Page using C# SDK from "offline" app

    - by James Crowley
    If you want to post to a Facebook page using the Facebook Graph API and the Facebook C# SDK from an "offline" app, there are a few steps you should be aware of. First, you need to get an access token that your Windows service or app can permanently use. You can get this by visiting the following URL (all on one line), replacing [ApiKey] with your application's Facebook API key:

        http://www.facebook.com/login.php?api_key=[ApiKey]&connect_display=popup&v=1.0&next=http://www.facebook.com/connect/login_success.html&cancel_url=http://www.facebook.com/connect/login_failure.html&fbconnect=true&return_session=true&req_perms=publish_stream,offline_access,manage_pages&return_session=1&sdk=joey&session_version=3

    The parameters of the URL you get redirected to will give you an access key. Note, however, that this only gives you an access key to post to your own profile page. Next, you need to get a separate access key to post to the specific page you want to access. To do this, go to

        https://graph.facebook.com/[YourUserId]/accounts?access_token=[AccessTokenFromAbove]

    You can find your user id in the URL when you click on your profile image. On this page you will then see a list of page IDs and corresponding access tokens for each Facebook page. Using the appropriate pair, you can then use code like this:

        var app = new Facebook.FacebookApp(_accessToken);
        var parameters = new Dictionary<string, object>
        {
            { "message", promotionInfo.TagLine },
            { "name", promotionInfo.Title },
            { "description", promotionInfo.Description },
            { "picture", promotionInfo.ImageUrl.ToString() },
            { "caption", promotionInfo.TargetUrl.Host },
            { "link", promotionInfo.TargetUrl.ToString() },
            { "type", "link" },
        };
        app.Post(_targetId + "/feed", parameters);

    And you're done!

    Read the article

  • Breadcrumbs RDFA

    - by Saahil Sinha
    Have implemented Breadcrumb RDFA http://www.mycarhelpline.com/index.php?option=com_forms&view=pages&layout=sellcar&Itemid=4 While checking the page , the RDFA Data shows property: title: Home https://www.google.com/webmasters/tools/richsnippets?q=http%3A%2F%2Fwww.mycarhelpline.com%2Findex.php%3Foption%3Dcom_forms%26view%3Dpages%26layout%3Dsellcar%26Itemid%3D4 However, when i compare ours with others http://www.google.com/webmasters/tools/richsnippets?q=http%3A%2F%2Froyalenfield.com%2Fmotorcycles%2Fthunderbird-500%2F The Title and Description of the current page is shown in every RDFA Data, which is not shown in ours If someone could suggest - how to get the page title and description show up in RDFA Data, below is our breadcrumb code <p><span class="breadcrumbs pathway"> <span typeof="v:Breadcrumb"> <a href="" rel="v:url" property="v:title">Home</a> &raquo; <span rel="v:child"> <span typeof="v:Breadcrumb"> <a href="index.php?option=com_forms&view=pages&layout=selloldcarindelhi&Itemid=4" rel="v:url" property="v:title">Sell Car</a> &raquo; <span rel="v:child"> <span typeof="v:Breadcrumb"> <a property="v:title" >Sell Used Car</a> </span> </span> </span> </span> </span>

    Read the article

  • Is my use case diagram correct?

    - by Dummy Derp
    NOTE: I am self studying UML so I have nobody to verify my diagrams and hence I am posting here, so please bear with me. This is the problem I got from some PDF available on Google that simply had the following problem statement: Problem Statement: A library contains books and journals. The task is to develop a computer system for borrowing books. In order to borrow a book the borrower must be a member of the library. There is a limit on the number of books that can be borrowed by each member of the library. The library may have several copies of a given book. It is possible to reserve a book. Some books are for short term loans only. Other books may be borrowed for 3 weeks. Users can extend the loans. 1. Draw a use case diagram for a library. 2. Give a use case description for two use cases: • Borrow copy of book • Extend loan Diagram: Use case description: 1. Borrow a copy of the book: If the person wishes to borrow a book from Derpville Public Library, he/she must be a member of the library in which case they will be allowed to issue a certain number of books. If the person is not a member, the book will not be issued to them for taking away, rather they will have to sit and read in the library. 2. Extending loan: Some books will be lent for 3 weeks while others will be lent for more than 3 weeks in which case the person borrowing has to come to the library and get the date extended. There is a limit on how much the user can extend the date of a particular book.

    Read the article

  • Loading a new instance of a class through XML not working quite right

    - by Thegluestickman
    I'm having trouble with XML and XNA. I want to be able to load weapon settings through XML, to make weapons easier to define and to have less code in the actual project file. So I started out making a basic XML document, something to just assign variables with, but no matter what I changed it gave me a new error every time. The code below gives me an "XML element 'Tag' not found" error; when I added one, it started to say the variables weren't found. What I also wanted to do in the XML file was load a texture for the weapon. So I created a static class to hold my texture values, and then in the Texture tag of my XML document I would set it to that instance. I think that's where the problems are occurring, because that's where the "XML element 'Tag' not found" error is pointing me to. My XML document: <XnaContent> <Asset Type="ConversationEngine.Weapon"> <weaponStrength>0</weaponStrength> <damageModifiers>0</damageModifiers> <speed>0</speed> <magicDefense>0</magicDefense> <description>0</description> <identifier>0</identifier> <weaponTexture>LoadWeaponTextures.ironSword</weaponTexture> </Asset> </XnaContent> My class to load the weapon XML: public static class LoadWeaponXML { static Weapon Weapons; public static Weapon WeaponLoad(ContentManager content, int id) { Weapons = content.Load<Weapon>(@"Weapons/" + id); return Weapons; } } public static class LoadWeaponTextures { public static Texture2D ironSword; public static void TextureLoad(ContentManager content) { ironSword = content.Load<Texture2D>("Sword"); } } I'm not entirely sure if you can load textures through XML, but any help would be greatly appreciated. One possible way to restructure this is sketched below.
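    The XML importer can only fill plain serializable fields on the target class, so an element like <weaponTexture>LoadWeaponTextures.ironSword</weaponTexture> cannot be resolved to a static Texture2D field. A common workaround is to keep only the texture's asset name in the XML and load the Texture2D after the weapon data is deserialized. The sketch below assumes a Weapon class shaped like the XML above; the field weaponTextureName and the exact member names are illustrative, not taken from the original project.

    using Microsoft.Xna.Framework.Content;
    using Microsoft.Xna.Framework.Graphics;

    namespace ConversationEngine
    {
        // Sketch of a Weapon class the XML could map to. The texture itself is
        // excluded from XML deserialization; only its asset name is stored.
        public class Weapon
        {
            public int weaponStrength;
            public int damageModifiers;
            public int speed;
            public int magicDefense;
            public string description;
            public string identifier;

            // Asset name of the texture, e.g. "Sword"; the XML element would become
            // <weaponTextureName>Sword</weaponTextureName>.
            public string weaponTextureName;

            [ContentSerializerIgnore]
            public Texture2D weaponTexture;
        }

        public static class LoadWeaponXML
        {
            // Load the weapon data from XML, then resolve the texture separately.
            public static Weapon WeaponLoad(ContentManager content, int id)
            {
                Weapon weapon = content.Load<Weapon>(@"Weapons/" + id);
                weapon.weaponTexture = content.Load<Texture2D>(weapon.weaponTextureName);
                return weapon;
            }
        }
    }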

    Read the article

  • XNA Guide text input - maximum length

    - by simonalexander2005
    so I am using Guide.BeginShowKeyboardInput to get the user to enter their username. I would like this to be limited to 20 characters, and it seems to break expected behaviour to let them input whatever they like and trim it later - so how would I go about limiting what they can input in the text box itself? I have the following code: public string GetKeyboardInput(string title, string description, string defaultText, int maxLength) { if (input.CheckCancel()) { useKeyboardResult = false; KeyboardResult = null; } if (KeyboardResult == null && !Guide.IsVisible) { KeyboardResult = Guide.BeginShowKeyboardInput(PlayerIndex.One, title, description, defaultText, null, null); useKeyboardResult = true; } else if (KeyboardResult != null && KeyboardResult.IsCompleted) { string result = Guide.EndShowKeyboardInput(KeyboardResult); KeyboardResult = null; if (result == null) { useKeyboardResult = false; return null; } if (useKeyboardResult) { KeyboardResult = null; return result; } } else //the user is still entering inputs { } return null; } I assume the code I need would go in that final, empty else{} block, but I can't see any way to do this. Does anyone know how?
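    As far as I know, Guide.BeginShowKeyboardInput does not expose a maximum-length option, so the limit has to be enforced after EndShowKeyboardInput returns. The sketch below shows one workaround that avoids silently trimming: if the result is too long, re-show the keyboard pre-filled with the text so the user can shorten it. The helper name ValidateOrReprompt and its shape are illustrative, meant to slot into the IsCompleted branch of the method above.

    using System;
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.GamerServices;

    public static class KeyboardInputLength
    {
        // Workaround sketch: validate the text returned by EndShowKeyboardInput and,
        // if it exceeds maxLength, re-prompt with the text pre-filled so the user
        // can shorten it. Returns the accepted text, or null while input continues.
        public static string ValidateOrReprompt(string result, string title,
            string description, int maxLength, ref IAsyncResult keyboardResult)
        {
            if (result == null || result.Length <= maxLength)
            {
                return result; // cancelled or within the limit
            }

            keyboardResult = Guide.BeginShowKeyboardInput(
                PlayerIndex.One,
                title,
                description + " (maximum " + maxLength + " characters)",
                result,
                null,
                null);
            return null;
        }
    }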

    Read the article

  • How do I compile a Wikipedia lens and install?

    - by user49523
    I read a tutorial about how to compile and install a Wikipedia lens, but it didn't work. The tutorial sounds easy - i just copied and pasted to the file that was suppose to edit. I have tried some times and here are 2 edits edit 1: import logging import optparse import gettext from gettext import gettext as _ gettext.textdomain('wikipedia') from singlet.lens import SingleScopeLens, IconViewCategory, ListViewCategory from wikipedia import wikipediaconfig import urllib2 import simplejson class WikipediaLens(SingleScopeLens): wiki = "http://en.wikipedia.org" def wikipedia_query(self,search): try: search = search.replace(" ", "|") url = ("%s/w/api.php?action=opensearch&limit=25&format=json&search=%s" % (self.wiki, search)) results = simplejson.loads(urllib2.urlopen(url).read()) print "Searching Wikipedia" return results[1] except (IOError, KeyError, urllib2.URLError, urllib2.HTTPError, simplejson.JSONDecodeError): print "Error : Unable to search Wikipedia" return [] class Meta: name = 'Wikipedia' description = 'Wikipedia Lens' search_hint = 'Search Wikipedia' icon = 'wikipedia.svg' search_on_blank=True # TODO: Add your categories articles_category = ListViewCategory("Articles", "dialog-information-symbolic") def search(self, search, results): for article in self.wikipedia_query(search): results.append("%s/wiki/%s" % (self.wiki, article), "http://upload.wikimedia.org/wikipedia/commons/6/63/Wikipedia-logo.png", self.articles_category, "text/html", article, "Wikipedia Article", "%s/wiki/%s" % (self.wiki, article)) pass edit 2: import urllib2 import simplejson import logging import optparse import gettext from gettext import gettext as _ gettext.textdomain('wikipediaa') from singlet.lens import SingleScopeLens, IconViewCategory, ListViewCategory from wikipediaa import wikipediaaconfig class WikipediaaLens(SingleScopeLens): wiki = "http://en.wikipedia.org" def wikipedia_query(self,search): try: search = search.replace(" ", "|") url = ("%s/w/api.php?action=opensearch&limit=25&format=json&search=%s" % (self.wiki, search)) results = simplejson.loads(urllib2.urlopen(url).read()) print "Searching Wikipedia" return results[1] except (IOError, KeyError, urllib2.URLError, urllib2.HTTPError, simplejson.JSONDecodeError): print "Error : Unable to search Wikipedia" return [] def search(self, search, results): for article in self.wikipedia_query(search): results.append("%s/wiki/%s" % (self.wiki, article), "http://upload.wikimedia.org/wikipedia/commons/6/63/Wikipedia-logo.png", self.articles_category, "text/html", article, "Wikipedia Article", "%s/wiki/%s" % (self.wiki, article)) pass class Meta: name = 'Wikipedia' description = 'Wikipedia Lens' search_hint = 'Search Wikipedia' icon = 'wikipedia.svg' search_on_blank=True # TODO: Add your categories articles_category = ListViewCategory("Articles", "dialog-information-symbolic") def search(self, search, results): # TODO: Add your search results results.append('https://wiki.ubuntu.com/Unity/Lenses/Singlet', 'ubuntu-logo', self.example_category, "text/html", 'Learn More', 'Find out how to write your Unity Lens', 'https://wiki.ubuntu.com/Unity/Lenses/Singlet') pass so .. what can i change in the edit ? (if anybody give me the entire edit file edited i will appreciate)

    Read the article

  • How to protect UI components using OPSS Resource Permissions

    - by frank.nimphius
    ADF security protects ADF bound pages, bounded task flows and ADF Business Components entities with framework-specific JAAS permission classes (RegionPermission, TaskFlowPermission and EntityPermission). If used in combination with the ADF security expression language and security checks performed in Java, this protection already provides you with fine-grained access control that can also be used to secure UI components like buttons and input text fields. For example, the EL shown below disables the user profile panel tabs for unauthenticated users: <af:panelTabbed id="pt1" position="above"> ... <af:showDetailItem text="User Profile" id="sdi2" disabled="#{!securityContext.authenticated}"> </af:showDetailItem> ... </af:panelTabbed> The next example disables a panel tab item if the authenticated user is not granted access to the bounded task flow exposed in a region on this tab: <af:panelTabbed id="pt1" position="above"> ... <af:showDetailItem text="Employees Overview" id="sdi4" disabled="#{!securityContext.taskflowViewable['/WEB-INF/EmployeeUpdateFlow.xml#EmployeeUpdateFlow']}"> </af:showDetailItem> ... </af:panelTabbed> Security expressions like the ones shown above allow developers to check the user's permission, authentication and role membership status before showing UI components. Similarly, using Java, developers can use code like the following to verify the user's authentication status: ADFContext adfContext = ADFContext.getCurrent(); SecurityContext securityCtx = adfContext.getSecurityContext(); boolean userAuthenticated = securityCtx.isAuthenticated(); Note that the Java code uses the same security context reference that is used with the expression language. But is this all there is? No! The goal of ADF Security is to enable all ADF developers to build secure web applications with JAAS (Java Authentication and Authorization Service). For this, more fine-grained protection can be defined using the ResourcePermission, a generic JAAS permission class owned by the Oracle Platform Security Services (OPSS).
Using the ResourcePermission class, developers can grant permission to functional parts of an application that are not protected by page or task flow security. For example, an application menu allows creating and canceling product shipments to customers. However, only a specific user group - or application role, which is the better way to use ADF Security - is allowed to cancel a shipment. To enforce this rule, a permission is needed that can be used declaratively on the UI to hide a menu entry and programmatically in Java to check the user permission before the action is performed. Note that multiple lines of defense are what you should implement in your application development. Don't just rely on UI protection through hidden or disabled command options. To create a menu protection permission for an ADF Security-enabled application, you choose Application | Secure | Resource Grants from the Oracle JDeveloper menu. The opened editor shows a visual representation of the jazn-data.xml file that is used at design time to define security policies and user identities for testing. An option in the Resource Grants section is to create a new Resource Type. A list of pre-defined types exists for you to create policy definitions for. Many of these pre-defined types use the ResourcePermission class. To create a custom Resource Type, for example to protect application menu functions, you click the green plus icon next to the Resource Type select list. The Create Resource Type editor that opens allows you to add a name for the resource type, a display name that is shown when granting resource permissions, and a description. The ResourcePermission class name is already set. In the menu protection sample, you add the following information: Name: MenuProtection Display Name: Menu Protection Description: Permission to grant menu item permissions OK the dialog to close the resource permission creation. To create a resource policy that can be used to check user permissions at runtime, click the green plus icon in the Resources section of the Resource Grants area. In the Create Resource dialog, provide a name for the menu option you want to protect. To protect the cancel shipment menu option, create a resource with the following settings: Resource Type: Menu Protection Name: Cancel Shipment Display Name: Cancel Shipment Description: Grant allows user to cancel customer goods shipment A new resource, Cancel Shipment, is added to the Resources panel. Initially the resource is not granted to any user, enterprise role or application role. To grant the resource, click the green plus icon in the Granted To section, select the Add Application Role option and choose one or more application roles in the opened dialog. Finally, you click the process action to define the policy. Note that a permission can have multiple actions that you can grant individually to users and roles. The cancel shipment permission, for example, could have another action "view" defined to determine which users should see that this option exists and which users don't. To use the cancel shipment permission, select the disabled property on a command item, like af:commandMenuItem, and click the arrow icon on the right. From the context menu, choose the Expression Builder entry. Expand the ADF Bindings | securityContext node and click the userGrantedResource option. Hint: You can expand the Description panel below the EL selection panel to see an example of what the grant should look like.
The EL that is created needs to be manually edited to read #{!securityContext.userGrantedResource['resourceName=Cancel Shipment;resourceType=MenuProtection;action=process']} OK the dialog so the permission-checking EL is added as the value of the disabled property. Running the application and expanding the Shipment menu shows the Cancel Shipments menu item disabled for all users that don't have the custom menu protection resource permission granted. Note: Following the steps listed above, you create a JAAS permission and declaratively configure it for function security in an ADF application. Do you need to understand JAAS for this? No! This is one of the benefits that you gain from using the ADF development framework. To implement multiple lines of defense for your application, the action performed when clicking the enabled "Cancel Shipments" option should also check whether the authenticated user is allowed to process it. For this, code as shown below can be used in a managed bean: public void onCancelShipment(ActionEvent actionEvent) { SecurityContext securityCtx = ADFContext.getCurrent().getSecurityContext(); //create instance of ResourcePermission(String type, String name, String action) ResourcePermission resourcePermission = new ResourcePermission("MenuProtection","Cancel Shipment", "process"); boolean userHasPermission = securityCtx.hasPermission(resourcePermission); if (userHasPermission){ //execute privileged logic here } } Note: To learn more about ADF Security, visit http://download.oracle.com/docs/cd/E17904_01/web.1111/b31974/adding_security.htm#BGBGJEAH Note: A monthly summary of OTN Harvest blog postings can be downloaded from ADF Code Corner. The monthly summary is a PDF document that contains supporting screen shots for some of the postings: http://www.oracle.com/technetwork/developer-tools/adf/learnmore/index-101235.html

    Read the article

  • Missing Wireless access point

    - by nvcnvn
    I have an Ubuntu 12.04 install with the proprietary Broadcom STA wireless driver, a BCM43224 802.11a/b/g/n wireless card and a TP-LINK TD-W8101G 54Mbps Wireless ADSL2+ Modem Router. Yesterday before I went to sleep I could still connect to my wireless access point; this morning when I started my laptop I didn't see it in the list - there are some neighbors listed but not mine. The WLAN light is green, and with my old Nokia E72 I can still see my access point with 100% signal. I have tried restarting my laptop and turning the wireless off and on with the hardware switch, but that didn't help. I have read the WirelessTroubleShootingGuide but couldn't fix it. Here is some of my card info: *-network description: Wireless interface product: BCM43224 802.11a/b/g/n vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:04:00.0 logical name: eth1 version: 01 serial: c0:cb:38:06:5d:53 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=wl0 driverversion=5.100.82.38 latency=0 multicast=yes wireless=IEEE 802.11abgn resources: irq:17 memory:f0500000-f0503fff *-network description: Ethernet interface product: RTL8111/8168B PCI Express Gigabit Ethernet controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:09:00.0 logical name: eth0 version: 03 serial: f0:4d:a2:42:ab:e4 size: 100Mbit/s capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8169 driverversion=2.3LK-NAPI duplex=full firmware=rtl_nic/rtl8168d-1.fw ip=192.168.1.3 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s resources: irq:42 ioport:5000(size=256) memory:f0404000-f0404fff memory:f0400000-f0403fff memory:f0420000-f043ffff

    Read the article

  • two possible wifi devices competing, one is hard blocked - unable to connect wireless

    - by patrickmw
    I blacklisted acer_wmi because it was showing up in the rfkill list; after that, ideapad_wlan was listed instead. $ rfkill list wifi 1: ideapad_wlan: Wireless LAN Soft blocked: no Hard blocked: no 3: brcmwl-0: Wireless LAN Soft blocked: no Hard blocked: yes $ lshw -C network *-network description: Ethernet interface product: AR8131 Gigabit Ethernet vendor: Atheros Communications physical id: 0 bus info: pci@0000:03:00.0 logical name: eth0 version: c0 serial: f0:de:f1:12:21:e9 size: 1Gbit/s capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress vpd bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=atl1c driverversion=1.0.1.0-NAPI duplex=full firmware=N/A ip=192.168.1.139 latency=0 link=yes multicast=yes port=twisted pair speed=1Gbit/s resources: irq:42 memory:f0400000-f043ffff ioport:2000(size=128) *-network description: Wireless interface product: BCM4313 802.11b/g/n Wireless LAN Controller vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:04:00.0 logical name: eth1 version: 01 serial: ac:81:12:38:ba:89 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=wl0 driverversion=5.100.82.38 latency=0 multicast=yes wireless=IEEE 802.11 resources: irq:17 memory:f0500000-f0503fff Contents of /var/lib/NetworkManager/NetworkManager.state: [main] NetworkingEnabled=true WirelessEnabled=true WWANEnabled=true I'm not sure how to disable the wifi devices independently. I'm also not sure which device is the correct one; I think it's the brcmwl one. Any suggestions?

    Read the article

  • SlimDX Texture2D from DataRectangle array

    - by Rebekah Bryant
    I'm totally new to DirectX. I'm using SlimDX to create a texture consisting of 13046 DataRectangles. Here's my code. It's breaking on the Texture2D constructor with "E_INVALIDARG: An invalid parameter was passed to the returning function (-2147024809)." inParms is just a struct containing handle to a Panel. public Renderer(Parameters inParms, ref DataRectangle[] inShapes) { Texture2DDescription description = new Texture2DDescription() { Width = 500, Height = 500, MipLevels = 1, ArraySize = inShapes.Length, Format = Format.R32G32B32_Float, SampleDescription = new SampleDescription(1, 0), Usage = ResourceUsage.Default, BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource, CpuAccessFlags = CpuAccessFlags.None, OptionFlags = ResourceOptionFlags.None }; SwapChainDescription chainDescription = new SwapChainDescription() { BufferCount = 1, IsWindowed = true, Usage = Usage.RenderTargetOutput, ModeDescription = new ModeDescription(0, 0, new Rational(60, 1), Format.R8G8B8A8_UNorm), SampleDescription = new SampleDescription(1, 0), Flags = SwapChainFlags.None, OutputHandle = inParms.Handle, SwapEffect = SwapEffect.Discard }; Device.CreateWithSwapChain(DriverType.Hardware, DeviceCreationFlags.None, chainDescription, out mDevice, out mSwapChain); Texture2D texture = new Texture2D(Device, description, inShapes); } EDIT: Running with the Debug flag set, I got: D3D11 ERROR: ID3D11Device::CreateTexture2D: The format (0x6, R32G32B32_FLOAT) cannot be bound as a RenderTarget, or cast to a format that could be bound as a RenderTarget. This is because the current graphics implementation does not even support this Format. Therefore this format does not support D3D11_BIND_RENDER_TARGET. Use CheckFormatSupport to check Format support. [ STATE_CREATION ERROR #92: CREATETEXTURE2D_UNSUPPORTEDFORMAT] D3D11 ERROR: ID3D11Device::CreateTexture2D: Returning E_INVALIDARG, meaning invalid parameters were passed. [ STATE_CREATION ERROR #104: CREATETEXTURE2D_INVALIDARG_RETURN]
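    Going by the debug output, the device simply does not support binding R32G32B32_Float as a render target, so the description itself is the invalid argument. Below is a hedged sketch of one way around this: query the device for render-target support and fall back to the four-component R32G32B32A32_Float format. It assumes SlimDX exposes ID3D11Device::CheckFormatSupport as Device.CheckFormatSupport returning a FormatSupport flags value; FormatHelper is an illustrative name, not part of SlimDX.

    using SlimDX.Direct3D11;
    using SlimDX.DXGI;
    using Device = SlimDX.Direct3D11.Device;

    public static class FormatHelper
    {
        // Pick a float format the device can actually bind as a render target.
        public static Format ChooseRenderTargetFormat(Device device)
        {
            Format preferred = Format.R32G32B32_Float;
            FormatSupport support = device.CheckFormatSupport(preferred);

            if ((support & FormatSupport.RenderTarget) != 0)
            {
                return preferred;
            }

            // Fall back to the four-component 32-bit float format, which render
            // target capable hardware is much more likely to support.
            return Format.R32G32B32A32_Float;
        }
    }

    The chosen format would then go into Texture2DDescription.Format; alternatively, if the texture only ever needs to be sampled, dropping BindFlags.RenderTarget from the description sidesteps the render-target requirement entirely.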

    Read the article

< Previous Page | 40 41 42 43 44 45 46 47 48 49 50 51  | Next Page >