Search Results

Search found 48797 results on 1952 pages for 'read write'.


  • SQL SERVER – Import CSV into Database – Transferring File Content into a Database Table using CSVexpress

    - by pinaldave
    One of the most common data integration tasks I run into is a desire to move data from a file into a database table. Generally the user is familiar with his data, the structure of the file, and the database table, but is unfamiliar with data integration tools and therefore views this task as something that is difficult. What these users really need is a point-and-click approach that minimizes the learning curve for the data integration tool. This is what CSVexpress (www.CSVexpress.com) is all about! It is based on expressor Studio, a data integration tool I've been reviewing over the last several months.

    With CSVexpress, moving data between data sources can be as simple as providing the database connection details, describing the structure of the incoming and outgoing data, and then connecting two pre-programmed operators. There's no need to learn the intricacies of the data integration tool or to write code. Let's look at an example.

    Suppose I have a comma separated value data file with data similar to the following, which is a listing of terminated employees that includes their hiring and termination date, department, job description, and final salary:

        EMP_ID,STRT_DATE,END_DATE,JOB_ID,DEPT_ID,SALARY
        102,13-JAN-93,24-JUL-98 17:00,Programmer,60,"$85,000"
        101,21-SEP-89,27-OCT-93 17:00,Account Representative,110,"$65,000"
        103,28-OCT-93,15-MAR-97 17:00,Account Manager,110,"$75,000"
        304,17-FEB-96,19-DEC-99 17:00,Marketing,20,"$45,000"
        333,24-MAR-98,31-DEC-99 17:00,Data Entry Clerk,50,"$35,000"
        100,17-SEP-87,17-JUN-93 17:00,Administrative Assistant,90,"$40,000"
        334,24-MAR-98,31-DEC-98 17:00,Sales Representative,80,"$40,000"
        400,01-JAN-99,31-DEC-99 17:00,Sales Manager,80,"$55,000"

    Notice the concise format used for the date values, the fact that the termination date includes both date and time information, and that the salary is clearly identified as money by the dollar sign and digit grouping. In moving this data to a database table I want to express the dates using a format that includes the century, since this listing could include employees who left the company in both the 20th and 21st centuries, and I want the salary to be stored as a decimal value without the currency symbol and grouping character. Most data integration tools would require coding within a transformation operation to effect these changes, but not expressor Studio. Directives for these modifications are included in the description of the incoming data.

    Besides starting the expressor Studio tool and opening a project, the first step is to create connection artifacts, which describe to expressor where data is stored. For this example, two connection artifacts are required: a file connection, which encapsulates the file system location of my file, and a database connection, which encapsulates the database connection information. With expressor Studio, I use wizards to create these artifacts.

    First I click New Connection > File Connection in the Home tab of expressor Studio's ribbon bar, which starts the File Connection wizard. In the first window, I enter the path to the directory that contains the input file. Note that the file connection artifact only specifies the file system location, not the name of the file. Then I click Next and enter a meaningful name for this connection artifact; clicking Finish closes the wizard and saves the artifact.

    To create the database connection artifact, I must know the location, or instance name, of the target database and have the credentials of an account with sufficient privileges to write to the target table. To use expressor Studio's features to the fullest, this account should also have the authority to create a table. I click New Connection > Database Connection in the Home tab of expressor Studio's ribbon bar, which starts the Database Connection wizard. expressor Studio includes high-performance drivers for many relational database management systems, so I can simply make a selection from the "Supplied database drivers" drop-down control. If my desired RDBMS isn't listed, I can optionally use an existing ODBC DSN by selecting the "Existing DSN" radio button. In the following window, I enter the connection details. With Microsoft SQL Server, I may choose to use Windows Authentication rather than account credentials. After clicking Next, I enter a meaningful name for this connection artifact, and clicking Finish closes the wizard and saves the artifact.

    Now I create a schema artifact, which describes the structure of the file data. When expressor reads a file, all data fields are typed as strings. In some use cases this may be exactly what is needed, and there is no need to edit the schema artifact. But in this example, editing the schema artifact will be used to specify how the data should be transformed: reformat the dates to include century designations, change the employee and department IDs to integers, and convert the salary to a decimal value.

    Again a wizard is used to create the schema artifact. I click New Schema > Delimited Schema in the Home tab of expressor Studio's ribbon bar, which starts the Delimited Schema wizard. In the first window, I click Get Data from File, which displays a listing of the file connections in the project. When I click on the file connection I previously created, a browse window opens at this file system location; I then select the file and click Open, which imports 10 lines from the file into the wizard. I now view the file's content and confirm that the appropriate delimiter characters are selected in the "Field Delimiter" and "Record Delimiter" drop-down controls; then I click Next.

    Since the input file includes a header row, I can easily indicate that fields in the file should be identified through the corresponding header value by clicking "Set All Names from Selected Row." Alternatively, I could enter a different identifier into the Field Details > Name text box. I click Next and enter a meaningful name for this schema artifact; clicking Finish closes the wizard and saves the artifact.

    Now I open the schema artifact in the schema editor. When I first view the schema's content, I note that the types of all attributes in the Semantic Type (the right-hand panel) are strings and that the attribute names are the same as the field names in the data file. To change an attribute's name and type, I highlight the attribute and click Edit in the Attributes grouping on the Schema > Edit tab of the editor's ribbon bar. This opens the Edit Attribute window, where I can change the attribute name and select the desired type from the "Data type" drop-down control. In this example, I change the name of each attribute to the name of the corresponding database table column (EmployeeID, StartingDate, TerminationDate, JobDescription, DepartmentID, and FinalSalary). Then for the EmployeeID and DepartmentID attributes I select Integer as the data type, for the StartingDate and TerminationDate attributes I select Datetime, and for the FinalSalary attribute I select Decimal.

    But I can do much more in the schema editor. For the datetime attributes, I can set a constraint that ensures that the data adheres to some predetermined specifications: a starting date must be later than January 1, 1980 (the date on which the company began operations) and a termination date must be earlier than 11:59 PM on December 31, 1999. I simply select the appropriate constraint and enter the value (1980-01-01 00:00 for the starting date and 1999-12-31 11:59 for the termination date).

    As a last step in setting up these datetime conversions, I edit the mapping, describing the format of each datetime type in the source file. I highlight the mapping line for the StartingDate attribute and click Edit Mapping in the Mappings grouping on the Schema > Edit tab of the editor's ribbon bar. This opens the Edit Mapping window, in which I either enter, or select, a format that describes how the datetime values are represented in the file. Note the use of Y01 as the syntax for the year. This syntax is the indicator to expressor Studio to derive the century by setting any year later than 01 to the 20th century and any year before 01 to the 21st century. As each datetime value is read from the file, the year values are transformed into century and year values. For the TerminationDate attribute, my format also indicates that the datetime value includes hours and minutes.

    And now to the FinalSalary attribute. I open its mapping, and in the Edit Mapping window I select the Currency tab and the "Use currency" check box. This indicates that the file data will include the dollar sign (or in Europe the pound or euro sign), which should be removed. On the Grouping tab, I select the "Use grouping" check box and enter 3 into the "Group size" text box, a comma into the "Grouping character" text box, and a decimal point into the "Decimal separator" text box. These entries allow the string to be properly converted into a decimal value. By making these entries in the schema that describes my input file, I've specified how I want the data transformed prior to writing to the database table and completely removed the requirement for coding within the data integration application itself.

    Assembling the data integration application is simple. Onto the canvas I drag the Read File and Write Table operators, connecting the output of the Read File operator to the input of the Write Table operator. Next, I select the Read File operator, and its Properties panel opens on the right-hand side of expressor Studio. For each property, I can select an appropriate entry from the corresponding drop-down control. Clicking the button to the right of the "File name" text box opens the file system location specified in the file connection artifact, allowing me to select the appropriate input file. I also indicate that the first row in the file, the header row, should be skipped, and that any record that fails one of the datetime constraints should be skipped.

    I then select the Write Table operator and in its Properties panel specify the database connection, Normal for the "Mode," and the "Truncate" and "Create Missing Table" options. If my target table does not yet exist, expressor will create the table using the information encapsulated in the schema artifact assigned to the operator.

    The last task needed to complete the application is to create the schema artifact used by the Write Table operator. This is extremely easy, as another wizard is capable of using the schema artifact assigned to the Read File operator to create a schema artifact for the Write Table operator. In the Write Table Properties panel, I click the drop-down control to the right of the "Schema" property and select "New Table Schema from Upstream Output…" from the drop-down menu. The wizard first displays the table description, and in its second screen asks me to select the database connection artifact that specifies the RDBMS in which the target table will exist. The wizard then connects to the RDBMS and retrieves a list of database schemas, from which I make a selection. The fourth screen gives me the opportunity to fine-tune the table's description. In this example, I set the width of the JobDescription column to a maximum of 40 characters and select money as the type of the FinalSalary column. I also provide the name for the table.

    This completes development of the application. The entire application was created through the use of wizards, and the required data transformations were specified through simple constraints and specifications rather than through coding. To develop this application, I only needed a basic understanding of expressor Studio, a level of expertise that can be gained by working through a few introductory tutorials. expressor Studio is as close to a point-and-click data integration tool as one could want, and I urge you to try this product if you have a need to move data between files or from files to database tables. Check out CSVexpress in more detail. It offers a few basic video tutorials and a preview of expressor Studio 3.5, which will support the reading and writing of data into Salesforce.com.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
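    As an aside, for readers who like to see such directives spelled out as logic, here is a rough Python sketch of what the two interesting schema directives amount to: the pivot-on-01 century rule and the currency/grouping cleanup. It is illustrative only, not expressor code.

        # Rough sketch of the schema directives described above (illustrative,
        # not expressor's API): Y01 pivots two-digit years on 01, and the
        # currency mapping strips the symbol and the grouping commas.
        from decimal import Decimal

        MONTHS = {m: i + 1 for i, m in enumerate(
            "JAN FEB MAR APR MAY JUN JUL AUG SEP OCT NOV DEC".split())}

        def expand_date(value):
            date_part = value.split(" ")[0]            # drop the optional "17:00" time
            d, mon, yy = date_part.split("-")
            year = (1900 if int(yy) > 1 else 2000) + int(yy)
            return "%04d-%02d-%02d" % (year, MONTHS[mon], int(d))

        def clean_salary(value):
            return Decimal(value.replace("$", "").replace(",", ""))

        print(expand_date("24-JUL-98 17:00"))   # 1998-07-24
        print(clean_salary("$85,000"))          # 85000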


  • disallow anonymous bind in openldap

    - by shashank prasad
    Folks, I have followed the instructions here http://tuxnetworks.blogspot.com/2010/06/howto-ldap-server-on-1004-lucid-lynx.html to set up my OpenLDAP, and it's working just fine, except that an anonymous user can bind to my server and see the whole user/group structure. LDAP is running over SSL.

    I have read online that I can add "disallow bind_anon" and "require authc" in the slapd.conf file and anonymous binding will be disabled, but there is no slapd.conf file to begin with. As I understand it, OpenLDAP has moved to a cn=config setup, so it won't read that file even if I create one. I have looked online without any luck.

    I believe I need to change something in here, but I am not sure what:

        olcAccess: to attrs=userPassword by dn="cn=admin,dc=tuxnetworks,dc=com" write by anonymous auth by self write by * none
        olcAccess: to attrs=shadowLastChange by self write by * read
        olcAccess: to dn.base="" by * read
        olcAccess: to * by dn="cn=admin,dc=tuxnetworks,dc=com" write by * read

    Any help is appreciated. Thank you! -shashank
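    A possible direction, sketched under the assumption of a standard Ubuntu cn=config layout: the slapd.conf directives "disallow bind_anon" and "require authc" correspond to the olcDisallows and olcRequires attributes on the cn=config entry, which can be set with ldapmodify rather than by creating a slapd.conf. For example (untested against this particular setup):

        # anon.ldif -- apply with: sudo ldapmodify -Y EXTERNAL -H ldapi:/// -f anon.ldif
        dn: cn=config
        changetype: modify
        add: olcDisallows
        olcDisallows: bind_anon
        -
        add: olcRequires
        olcRequires: authc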


  • "ldap_add: Naming violation (64)" error when configuring OpenLDAP

    - by user3215
    I am following the Ubuntu server guide to configure OpenLDAP on an Ubuntu 10.04 server, but cannot get it to work. When I try to use

        sudo ldapadd -x -D cn=admin,dc=don,dc=com -W -f frontend.ldif

    I'm getting the following error:

        Enter LDAP Password: <entered 'secret' as password>
        adding new entry "dc=don,dc=com"
        ldap_add: Naming violation (64)
                additional info: value of single-valued naming attribute 'dc' conflicts with value present in entry

    Again when I try to do the same, I'm getting the following error:

        root@avy-desktop:/home/avy# sudo ldapadd -x -D cn=admin,dc=don,dc=com -W -f frontend.ldif
        Enter LDAP Password:
        ldap_bind: Invalid credentials (49)

    Here is the backend.ldif file:

        # Load dynamic backend modules
        dn: cn=module,cn=config
        objectClass: olcModuleList
        cn: module
        olcModulepath: /usr/lib/ldap
        olcModuleload: back_hdb

        # Database settings
        dn: olcDatabase=hdb,cn=config
        objectClass: olcDatabaseConfig
        objectClass: olcHdbConfig
        olcDatabase: {1}hdb
        olcSuffix: dc=don,dc=com
        olcDbDirectory: /var/lib/ldap
        olcRootDN: cn=admin,dc=don,dc=com
        olcRootPW: secret
        olcDbConfig: set_cachesize 0 2097152 0
        olcDbConfig: set_lk_max_objects 1500
        olcDbConfig: set_lk_max_locks 1500
        olcDbConfig: set_lk_max_lockers 1500
        olcDbIndex: objectClass eq
        olcLastMod: TRUE
        olcDbCheckpoint: 512 30
        olcAccess: to attrs=userPassword by dn="cn=admin,dc=don,dc=com" write by anonymous auth by self write by * none
        olcAccess: to attrs=shadowLastChange by self write by * read
        olcAccess: to dn.base="" by * read
        olcAccess: to * by dn="cn=admin,dc=don,dc=com" write by * read

    And the frontend.ldif file:

        # Create top-level object in domain
        dn: dc=don,dc=com
        objectClass: top
        objectClass: dcObject
        objectclass: organization
        o: Example Organization
        dc: Example
        description: LDAP Example

        # Admin user.
        dn: cn=admin,dc=don,dc=com
        objectClass: simpleSecurityObject
        objectClass: organizationalRole
        cn: admin
        description: LDAP administrator
        userPassword: secret

        dn: ou=people,dc=don,dc=com
        objectClass: organizationalUnit
        ou: people

        dn: ou=groups,dc=don,dc=com
        objectClass: organizationalUnit
        ou: groups

        dn: uid=john,ou=people,dc=don,dc=com
        objectClass: inetOrgPerson
        objectClass: posixAccount
        objectClass: shadowAccount
        uid: john
        sn: Doe
        givenName: John
        cn: John Doe
        displayName: John Doe
        uidNumber: 1000
        gidNumber: 10000
        userPassword: password
        gecos: John Doe
        loginShell: /bin/bash
        homeDirectory: /home/john
        shadowExpire: -1
        shadowFlag: 0
        shadowWarning: 7
        shadowMin: 8
        shadowMax: 999999
        shadowLastChange: 10877
        mail: [email protected]
        postalCode: 31000
        l: Toulouse
        o: Example
        mobile: +33 (0)6 xx xx xx xx
        homePhone: +33 (0)5 xx xx xx xx
        title: System Administrator
        postalAddress:
        initials: JD

        dn: cn=example,ou=groups,dc=don,dc=com
        objectClass: posixGroup
        cn: example
        gidNumber: 10000

    Can anyone help me?
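    For what it's worth, the error text itself points at the top-level entry: in LDAP, the naming attribute used in the RDN (here dc=don) must match the dc attribute stored inside the entry, while the frontend.ldif above sets dc: Example. A consistent root entry would look like this sketch (values illustrative):

        dn: dc=don,dc=com
        objectClass: top
        objectClass: dcObject
        objectclass: organization
        o: Example Organization
        dc: don
        description: LDAP Example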


  • Programming Pearls (2nd Edition) vs More Programming Pearls: Confessions of a Coder [closed]

    - by Geek
    I have been reading very good reviews of two books by Jon Bentley: Programming Pearls (2nd Edition) and More Programming Pearls: Confessions of a Coder. I know that these books have been out there for a long time, and I feel bad that I haven't read either one, but it is always better late than never. I understand that the second one was written after the first. So are these two books complementary to each other? Does the second one assume that the reader has read the first? For someone who hasn't read either, which one would you propose reading first?


  • Prevent auto mounting Android sdcard under Linux Mint

    - by BullShark
    I recently obtained an older Android phone so that I could test Android apps on it. I've needed it because I have a Nexus 7 but no older Android versions or hardware to test on. I'm having a problem with it under Linux Mint with Cinnamon. When I plug the phone in, or remove the sdcard from the phone and plug it back while the phone is plugged in, Linux automatically mounts the sdcard. This is a problem because once it is mounted under Linux, it dismounts from the phone (running Android 2.3.5), and I can no longer test Android apps I write that require the sdcard to be present and writable.

    I went to Menu > System Tools > System Settings > System Details > Removable Media and changed the settings to always "Ask what to do" for "Select how media should be handled". However, the sdcard still gets mounted, and then I am asked how I want to open these files (media players, photo importers, file browser, etc.). If I click the checkbox for "Never prompt or start programs on media insertion", the sdcard is still mounted; I am simply not asked how to open the files.

    Eject is just a noob word for Ubuntu users that means umount (unmount), like "Administrator" is another Ubuntu noob word for the root user. And if I unmount the sdcard, the phone doesn't recognize it again until I take the sdcard out and plug it back in. The phone sees it for a brief moment until Linux Mint takes it over.

    There are at least two possible solutions: 1) prevent Linux from automounting sdcards somehow; 2) tell Android not to allow the computer it is plugged into to take over the sdcard. How?

    Edit: I found out how to prevent the sdcard from being automatically mounted. Now it gets recognized by Linux:

        bullshark@beastlinux ~ $ dmesg | tail -n 25
        [597212.218323] sd 21:0:0:0: [sde] Attached SCSI removable disk
        [597212.218639] sr 21:0:0:1: Attached scsi CD-ROM sr2
        [597212.218910] sr 21:0:0:1: Attached scsi generic sg7 type 5
        [597217.139373] sd 21:0:0:0: [sde] 3862528 512-byte logical blocks: (1.97 GB/1.84 GiB)
        [597217.140726] sd 21:0:0:0: [sde] No Caching mode page present
        [597217.140735] sd 21:0:0:0: [sde] Assuming drive cache: write through
        [597217.143595] sd 21:0:0:0: [sde] No Caching mode page present
        [597217.143602] sd 21:0:0:0: [sde] Assuming drive cache: write through
        [597217.152240] sde: sde1
        [597389.751008] 4:2:1: cannot get freq at ep 0x84
        [597390.238742] 4:2:1: cannot get freq at ep 0x84
        [597624.903132] sde: detected capacity change from 1977614336 to 0
        [597637.677763] sd 21:0:0:0: [sde] 3862528 512-byte logical blocks: (1.97 GB/1.84 GiB)
        [597637.679616] sd 21:0:0:0: [sde] No Caching mode page present
        [597637.679626] sd 21:0:0:0: [sde] Assuming drive cache: write through
        [597637.682508] sd 21:0:0:0: [sde] No Caching mode page present
        [597637.682515] sd 21:0:0:0: [sde] Assuming drive cache: write through
        [597637.692758] sde: sde1
        [597661.857979] sde: detected capacity change from 1977614336 to 0
        [597688.775455] sd 21:0:0:0: [sde] 3862528 512-byte logical blocks: (1.97 GB/1.84 GiB)
        [597688.776814] sd 21:0:0:0: [sde] No Caching mode page present
        [597688.776823] sd 21:0:0:0: [sde] Assuming drive cache: write through
        [597688.780055] sd 21:0:0:0: [sde] No Caching mode page present
        [597688.780062] sd 21:0:0:0: [sde] Assuming drive cache: write through
        [597688.788639] sde: sde1
        bullshark@beastlinux ~ $

    However, the phone still unmounts the sdcard upon being detected by Linux. Linux detects but does not mount it, and a few seconds later the phone drops the card anyway.

    Edit #2 (Solution): I solved this one by changing the USB connection type on the phone (it was set to USB mass storage).
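    On the first solution path, one commonly suggested approach (a sketch only, assuming Mint's automounting is handled by udisks; the vendor/product IDs are placeholders to be filled in from lsusb) is a udev rule that tells the automounter to leave the device alone:

        # /etc/udev/rules.d/99-no-automount.rules (illustrative; get the IDs from lsusb)
        SUBSYSTEMS=="usb", ATTRS{idVendor}=="xxxx", ATTRS{idProduct}=="yyyy", ENV{UDISKS_AUTO}="0", ENV{UDISKS_IGNORE}="1"

    Then reload with sudo udevadm control --reload-rules and replug the phone. UDISKS_IGNORE is honored by udisks2; older udisks1 setups use ENV{UDISKS_PRESENTATION_NOPOLICY}="1" instead.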


  • Membership in ASP.Net applications - part 3

    - by nikolaosk
    This is the third post in a series of posts regarding ASP.Net's built-in membership functionality, providers, and controls. You can read the first post here. You can read the second post here. In this post I would like to investigate how to use the Membership class methods to achieve the same functionality we have with the login web server controls. The login web server controls live inside the .aspx pages and access the underlying abstract membership classes to perform the desired functionality...(read more)
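    To give a flavour of what that looks like in code (a minimal sketch, not taken from the series; the control names are placeholders), here is a user validated directly against the membership provider instead of via a Login control:

        // Hypothetical login-button handler in a code-behind that imports
        // System.Web.Security. Membership.ValidateUser and
        // FormsAuthentication.RedirectFromLoginPage are the real ASP.NET APIs;
        // UserName, Password and FailureText are placeholder control names.
        protected void LoginButton_Click(object sender, EventArgs e)
        {
            if (Membership.ValidateUser(UserName.Text, Password.Text))
            {
                // Issue the forms-authentication cookie and return to the original URL.
                FormsAuthentication.RedirectFromLoginPage(UserName.Text, false);
            }
            else
            {
                FailureText.Text = "Invalid user name or password.";
            }
        }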


  • Problems accessing shared folder in Windows Server 2008

    - by Triynko
    In Windows Server 2008, I have a shared folder. For my username:

        NTFS permissions: read/modify
        Share permissions: read/modify

    Result when trying to access the share: I can traverse the directory and read files, but I cannot write files. When I try to examine my effective permissions, it says "Windows can't calculate the effective permissions for [My Username]".

    The folder is owned by the Administrators group (the default), and NTFS read/write permissions are granted to my username, which is a member of the Administrators group. I notice that making any changes to the folder locally requires me to acknowledge a UAC prompt. Why does that prompt appear?

    I also tried creating a new group, giving it full NTFS permissions and full control in the share permissions, and added my username to the group. The result is even worse: I cannot even traverse the shared folder directories or read anything at all.


  • External hard drive becomes unwriteable after awhile

    - by Brandon
    I have an external USB hard drive mounted in Ubuntu 9.10 with full read/write access over SMB. If it stays on for a while, though, I lose the ability to write to it. I end up having to ssh in and unmount, mount, and reboot in order to have write access again. Any idea what might be causing this and how to fix it?


  • New event log nowhere to be found after creating in PowerShell

    - by Mega Matt
    Through PowerShell, I am attempting to create a new event log and write a test entry to it, but it is not showing up in the Event Viewer. This is the command I'm using to create the new event log:

        new-eventlog -logname TestLog -source TestLog

    And to write a new event to it:

        write-eventlog TestLog -source TestLog -eventid 12345 -message "Test message"

    After running the first command, there is no "TestLog" log in the Event Viewer anywhere, and I would expect it to show up in the Applications and Services Logs section. After running the second command, same result. However, I am seeing a registry key for the log at HKLM\SYSTEM\services\eventlog\TestLog; I'm just not seeing anything in the Event Viewer.

    So, two questions: When should I be seeing the event log - after it gets created, or after I write the first event to it? And, more importantly, why am I not seeing it at all? I'm using Windows Server 2008 R2 and am logged in and running PowerShell as an administrator. Thanks.
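    As a quick sanity check from the same console (standard cmdlets, offered as a sketch), one can ask PowerShell itself whether the log was registered and whether the entry was written:

        # Lists TestLog if the log registration took effect
        Get-EventLog -List | Where-Object { $_.Log -eq "TestLog" }
        # Shows the test entry, if it was written
        Get-EventLog -LogName TestLog -Newest 5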



  • Quantifying the effects of partition mis-alignment

    - by Matt
    I'm experiencing some significant performance issues on an NFS server. I've been reading up a bit on partition alignment, and I think I have my partitions misaligned. I can't find anything that tells me how to actually quantify the effects of misaligned partitions. Some of the general information I found suggests the performance penalty can be quite high (upwards of 60%), and others say it's negligible. What I want to do is determine whether partition alignment is a factor in this server's performance problems or not, and if so, to what degree. So I'll put my info out here, and hopefully the community can confirm if my partitions are indeed misaligned, and if so, help me put a number to what the performance cost is.

    The server is a Dell R510 with dual E5620 CPUs and 8 GB RAM. There are eight 15k 2.5" 600 GB drives (Seagate ST3600057SS) configured in hardware RAID-6 with a single hot spare. The RAID controller is a Dell PERC H700 w/512MB cache (Linux sees this as an LSI MegaSAS 9260). The OS is CentOS 5.6; the home directory partition is ext3, with options "rw,data=journal,usrquota".

    I have the HW RAID configured to present two virtual disks to the OS: /dev/sda for the OS (boot, root and swap partitions), and /dev/sdb for a big NFS share:

        [root@lnxutil1 ~]# parted -s /dev/sda unit s print
        Model: DELL PERC H700 (scsi)
        Disk /dev/sda: 134217599s
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos

        Number  Start    End         Size        Type     File system  Flags
         1      63s      465884s     465822s     primary  ext2         boot
         2      465885s  134207009s  133741125s  primary               lvm

        [root@lnxutil1 ~]# parted -s /dev/sdb unit s print
        Model: DELL PERC H700 (scsi)
        Disk /dev/sdb: 5720768639s
        Sector size (logical/physical): 512B/512B
        Partition Table: gpt

        Number  Start  End          Size         File system  Name  Flags
         1      34s    5720768606s  5720768573s                     lvm

    Edit 1: Using the cfq IO scheduler (default for CentOS 5.6):

        # cat /sys/block/sd{a,b}/queue/scheduler
        noop anticipatory deadline [cfq]
        noop anticipatory deadline [cfq]

    Chunk size is the same as stripe size, right? If so, then 64kB:

        # /opt/MegaCli -LDInfo -Lall -aALL -NoLog
        Adapter #0

        Number of Virtual Disks: 2
        Virtual Disk: 0 (target id: 0)
        Name:os
        RAID Level: Primary-6, Secondary-0, RAID Level Qualifier-3
        Size:65535MB
        State: Optimal
        Stripe Size: 64kB
        Number Of Drives:7
        Span Depth:1
        Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
        Current Cache Policy: WriteThrough, ReadAdaptive, Direct, No Write Cache if Bad BBU
        Access Policy: Read/Write
        Disk Cache Policy: Disk's Default
        Number of Spans: 1
        Span: 0 - Number of PDs: 7
        ... physical disk info removed for brevity ...

        Virtual Disk: 1 (target id: 1)
        Name:share
        RAID Level: Primary-6, Secondary-0, RAID Level Qualifier-3
        Size:2793344MB
        State: Optimal
        Stripe Size: 64kB
        Number Of Drives:7
        Span Depth:1
        Default Cache Policy: WriteBack, ReadAdaptive, Direct, No Write Cache if Bad BBU
        Current Cache Policy: WriteThrough, ReadAdaptive, Direct, No Write Cache if Bad BBU
        Access Policy: Read/Write
        Disk Cache Policy: Disk's Default
        Number of Spans: 1
        Span: 0 - Number of PDs: 7

    If it's not obvious, virtual disk 0 corresponds to /dev/sda, for the OS; virtual disk 1 is /dev/sdb (the exported home directory tree).
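    Putting a first number on the alignment question is simple arithmetic (a sketch, assuming the 64 kB stripe reported by MegaCli is the boundary that matters): a partition is stripe-aligned when its start offset in bytes is a multiple of the stripe size.

        # Check the partition start sectors shown above against a 64 kB stripe.
        # A nonzero remainder means the partition does not start on a stripe boundary.
        STRIPE = 64 * 1024
        for name, start_sector in [("sda1", 63), ("sda2", 465885), ("sdb1", 34)]:
            offset_bytes = start_sector * 512
            print(name, offset_bytes % STRIPE)
        # prints: sda1 32256, sda2 47616, sdb1 17408 -- none start on a stripe boundary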


  • What is ADO ?

    - by Aamir Hasan
    What is ADO?

        ADO is a Microsoft technology
        ADO stands for ActiveX Data Objects
        ADO is a Microsoft Active-X component
        ADO is automatically installed with Microsoft IIS
        ADO is a programming interface to access data in a database

    Accessing a Database from an ASP Page. The common way to access a database from inside an ASP page is to:

        Create an ADO connection to a database
        Open the database connection
        Create an ADO recordset
        Open the recordset
        Extract the data you need from the recordset
        Close the recordset
        Close the connection

    Example:

        <%
        set conn=Server.CreateObject("ADODB.Connection")
        conn.Provider="Microsoft.Jet.OLEDB.4.0"
        conn.Open(Server.Mappath("/db/northwind.mdb"))
        set rs = Server.CreateObject("ADODB.recordset")
        rs.Open "Select * from Customers", conn
        do until rs.EOF
            for each x in rs.Fields
               Response.Write(x.name)
               Response.Write(" = ")
               Response.Write(x.value & "<br />")
            next
            Response.Write("<br />")
            rs.MoveNext
        loop
        rs.close
        conn.close
        %>



  • 5 Reasons why I hate WPF

    - by Richard Mitchell
    I decided to use writing a new tool as a way to learn WPF and MVVM, and I thought I'd write down a few of my problems as a form of cathartic release. I decided to read a book before attempting WPF for the first time, as I've heard others complain about the steep learning curve. I chose the rather excellent "WPF 4 Unleashed" by Adam Nathan to read through and "Pro WPF in C# 2010" by Matthew MacDonald as a reference whilst I programmed.

    1 - Poor editing support for XAML

    The first thing I think any...(read more)


  • please explain my fio results - is O_SYNC|O_DIRECT misbehaving on linux?

    - by Zoltan
    I'm going mad trying to figure out what the problem could be with one of our storage boxes. With a simple fio script I'm testing random writes using bs=1M and direct=1. The SSD is a Samsung 840 Pro attached to an LSI HBA (3 Gbit/s ports). This is the result I'm getting under FreeBSD 9.1:

        WRITE: io=13169MB, aggrb=224743KB/s, minb=224743KB/s, maxb=224743KB/s, mint=60002msec, maxt=60002msec

    This is regardless of sync being set to 0 or 1. On Linux, this is the result with sync=0:

        WRITE: io=14828MB, aggrb=253060KB/s, minb=253060KB/s, maxb=253060KB/s, mint=60001msec, maxt=60001msec

    and with sync=1:

        WRITE: io=6360.0MB, aggrb=108542KB/s, minb=108542KB/s, maxb=108542KB/s, mint=60001msec, maxt=60001msec

    My understanding is that since I'm operating on the raw block device, O_SYNC should not make any difference: there's no filesystem, no barrier, nothing between the writes and the drive itself, especially with O_DIRECT|O_SYNC set. Any ideas? For reference, here's the fio script I'm testing with:

        [global]
        bs=1M
        ioengine=sync
        iodepth=4
        size=16g
        direct=1
        runtime=60
        filename=/dev/sdh
        sync=1

        [rand-write]
        rw=randwrite
        stonewall
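    One way to take fio itself out of the picture (a sketch; GNU dd's oflag=direct and oflag=direct,sync map to the same O_DIRECT and O_DIRECT|O_SYNC open flags, and note that writing to /dev/sdh destroys its contents) is to compare the two flag combinations directly against the raw device:

        # O_DIRECT only
        dd if=/dev/zero of=/dev/sdh bs=1M count=4096 oflag=direct
        # O_DIRECT | O_SYNC
        dd if=/dev/zero of=/dev/sdh bs=1M count=4096 oflag=direct,sync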


  • How to remove iso 9660 from USB?

    - by a_m0d
    I have somehow managed to write an ISO 9660 image onto my USB drive, which makes all my computers think that the device is actually a CD. I have tried various methods of removing this partition, but nothing seems to work. I have tried fdisk, which says:

        $ fdisk -l /dev/sdb
        Cannot open /dev/sdb

    parted crashes when I try to use it on this device. I have even tried:

        $ dd if=/dev/zero of=/dev/sdb

    but it just hangs with no output (either on screen or on disk). However, when I plug the USB in, it does mount, and I can view (but not edit) the files on it.

    Edit: now the result is:

        $ dd if=/dev/zero of=/dev/sdb
        dd: opening `/dev/sdb': Read-only file system

    I have also tried re-formatting it on Windows, but it gets to the end of the format process and then says "Couldn't format the drive". How can I remove this partition and get my whole USB drive back to normal again?

    Edit 1: Trying a simple mkfs doesn't work:

        $ sudo mkfs -t vfat /dev/sdb
        mkfs.vfat 3.0.0 (28 Sep 2008)
        mkfs.vfat: Will not try to make filesystem on full-disk device '/dev/sdb' (use -I if wanted)

    I can't do mkfs on /dev/sdb1 because there is no such partition, as shown:

        $ ls /dev | grep sdb
        sdb

    Edit 2: This is the information posted by dmesg when I plug the device in:

        $ dmesg
        .
        . (snip)
        .
        usb 2-1: New USB device found, idVendor=058f, idProduct=6387
        usb 2-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
        usb 2-1: Product: Mass Storage
        usb 2-1: Manufacturer: Generic
        usb 2-1: SerialNumber: G0905000000000010885
        usb-storage: device found at 4
        usb-storage: waiting for device to settle before scanning
        usb-storage: device scan complete
        scsi 6:0:0:0: Direct-Access FLASH Drive AU_USB20 8.07 PQ: 0 ANSI: 2
        sd 6:0:0:0: [sdb] 4069376 512-byte hardware sectors (2084 MB)
        sd 6:0:0:0: [sdb] Write Protect is off
        sd 6:0:0:0: [sdb] Mode Sense: 03 00 00 00
        sd 6:0:0:0: [sdb] Assuming drive cache: write through
        sd 6:0:0:0: [sdb] 4069376 512-byte hardware sectors (2084 MB)
        sd 6:0:0:0: [sdb] Write Protect is off
        sd 6:0:0:0: [sdb] Mode Sense: 03 00 00 00
        sd 6:0:0:0: [sdb] Assuming drive cache: write through
        sdb: unknown partition table
        sd 6:0:0:0: [sdb] Attached SCSI removable disk
        sd 6:0:0:0: Attached scsi generic sg2 type 0
        ISO 9660 Extensions: Microsoft Joliet Level 3
        ISO 9660 Extensions: RRIP_1991A
        SELinux: initialized (dev sdb, type iso9660), uses genfs_contexts
        CE: hpet increasing min_delta_ns to 15000 nsec

    This shows that the device is formatted as ISO 9660 and that it is /dev/sdb.

    Edit 3: This is the message I find at the bottom of dmesg after running cfdisk and writing a new partition table to the disk:

        SELinux: initialized (dev sdb, type iso9660), uses genfs_contexts
        sd 17:0:0:0: [sdb] Device not ready: Sense Key : Not Ready [current]
        sd 17:0:0:0: [sdb] Device not ready: < ASC=0xff ASCQ=0xffASC=0xff < ASCQ=0xff
        end_request: I/O error, dev sdb, sector 0
        Buffer I/O error on device sdb, logical block 0
        lost page write due to I/O error on sdb


  • September IIS Community Newsletter

    - by The Official Microsoft IIS Site
    For the latest news and happenings in the IIS community over the past month, be sure to check out the September edition of the IIS Community Newsletter: http://www.iisnewsletter.com/archive/september2012.html Make sure you don't miss an edition and get it delivered directly to your inbox. You can subscribe at the link below. http://www.iisnewsletter.com/Subscribe.aspx ...(read more)


  • BizTalk Pipeline Component Error: "Object reference not set to an instance of an object"

    - by Stuart Brierley
    Yesterday I posted about my BizTalk Archiving Pipeline Component, which can be found on Codeplex if anyone is interested in taking a look. During testing of this component I began to encounter an error whereby the component would throw an "Object reference not set to an instance of an object" error when processing as a part of a custom pipeline. This was occurring when the component was reading a ReadOnlySeekableStream so that the data could be archived to file, but the actual code throwing the error was somewhere in the depths of the Microsoft.BizTalk.Streaming stack.

    It turns out that there is a known issue where this exception can be thrown because the garbage collector has disposed of the stream before execution of the custom pipeline has completed. To get around this you need to add the streams in your code to the pipeline context's resource tracker. So a block of my code goes from:

        originalStrm = bodyPart.GetOriginalDataStream();
        if (!originalStrm.CanSeek)
        {
            ReadOnlySeekableStream seekableStream = new ReadOnlySeekableStream(originalStrm);
            inmsg.BodyPart.Data = seekableStream;
            originalStrm = inmsg.BodyPart.Data;
        }

        fileArchive = new FileStream(FullPath, FileMode.Create, FileAccess.Write);
        binWriter = new BinaryWriter(fileArchive);
        byte[] buffer = new byte[bufferSize];
        int sizeRead = 0;
        while ((sizeRead = originalStrm.Read(buffer, 0, bufferSize)) != 0)
        {
            binWriter.Write(buffer, 0, sizeRead);
        }

    to:

        originalStrm = bodyPart.GetOriginalDataStream();
        if (!originalStrm.CanSeek)
        {
            ReadOnlySeekableStream seekableStream = new ReadOnlySeekableStream(originalStrm);
            inmsg.BodyPart.Data = seekableStream;
            originalStrm = inmsg.BodyPart.Data;
        }
        pc.ResourceTracker.AddResource(originalStrm);

        fileArchive = new FileStream(FullPath, FileMode.Create, FileAccess.Write);
        binWriter = new BinaryWriter(fileArchive);
        byte[] buffer = new byte[bufferSize];
        int sizeRead = 0;
        while ((sizeRead = originalStrm.Read(buffer, 0, bufferSize)) != 0)
        {
            binWriter.Write(buffer, 0, sizeRead);
        }

    So far this seems to have solved the issue; the error is no more, and my archive component is continuing its way through testing.


  • Are books on programming hard to understand?

    - by DarkEnergy
    I've been reading books that are extremely daunting. Accelerated C++ is by far one of those books, and one that I haven't finished; I plan to, but that's another story. When reading a programming book, do you find yourself rereading a lot of the paragraphs? Sometimes it takes me an hour to read 20 pages out of a book. Sometimes a book becomes so daunting that it takes me all day to finish a single chapter. I think having these as e-books makes them even harder to read, since I'm so used to looking down at tangible paper to read. I just want to know: does reading these books get extremely hard for you too, and do you find yourself rereading even the simplest paragraphs two or three times just to get their meaning, because the previous paragraph left your brain hurting? Here is an article I read on something similar: http://www.it-career-coach.net/2007/03/04/are-computer-programming-books-hard-to-study/

    Edit: Sometimes I find myself reading a whole page... then I look up and say "wth did I just read". I could finish a chapter in 30 minutes to an hour and feel this way too...

