Search Results

Search found 3659 results on 147 pages for 'sorted hash'.

Page 109/147 | < Previous Page | 105 106 107 108 109 110 111 112 113 114 115 116  | Next Page >

  • How do you deal with the conflict between ActiveSupport::JSON and the JSON gem?

    - by Luke Francl
    I am stumped with this problem. ActiveSupport::JSON defines to_json on various core objects and so does the JSON gem. However, the implementation is not the same -- the ActiveSupport version takes arguments and the JSON gem version doesn't. I installed a gem that required the JSON gem and my app broke. The issue is that I'm using to_json in a controller that returns a list of objects, but I want to control which attributes are returned. When code anywhere in my system does require 'json', I get this error message: TypeError: wrong argument type Hash (expected Data) I tried a couple of things that I read online to fix it, but nothing worked. I ended up re-writing the gem to use ActiveSupport::JSON.decode instead of JSON.parse. This works, but it's not sustainable...I can't be forking gems every time I want to use a gem that requires the JSON gem. Update: The best solution to this problem is to upgrade to Rails 2.3 or higher, which fixed it.
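
    Once on Rails 2.3+ (per the update above), the attribute filtering itself can stay in the controller, since ActiveSupport's to_json accepts :only/:except. A minimal sketch, assuming a hypothetical Post model and attribute names:

      # app/controllers/posts_controller.rb -- model and attribute names are assumptions
      class PostsController < ApplicationController
        def index
          @posts = Post.all
          # ActiveSupport's to_json takes :only/:except/:include to control
          # which attributes end up in the response
          render :json => @posts.to_json(:only => [:id, :title, :created_at])
        end
      end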

    Read the article

  • Can I encrypt value in C# and use that with SQL Server 2005 symmetric encryption?

    - by Robert Byrne
    To be more specific, if I create a symmetric key with a specific KEY_SOURCE and ALGORITHM (as described here), is there any way that I can set up the same key and algorithm in C# so that I can encrypt data in code, but have that data decrypted by the symmetric key in SQL Server? From the research I've done so far, it seems that the IDENTITY_VALUE for the key is also baked into the ciphertext, making things even more complex. I'm thinking about just trying all the various ways I can think of, i.e. hashing the KEY_SOURCE using different hash algorithms for a key and trying different ways of encrypting the plain text until I get something that works. Or is that just futile? Has anyone else done this, any pointers? UPDATE: Just to clarify, I want to use NHibernate on the client side, but there's a bunch of stored procedures on the database side that still perform decryption.

    Read the article

  • Why does my exchange message filtering rule not work?

    - by Jon Cage
    I have two rules set up to sort incoming bug reports. The first is specific to a single device: Apply this rule after the message arrives sent to SMS Distribution and with <source_device_number>: in the body move it to the BugReports\<source_device_number> folder ...and the second is a catch-all for everything else: Apply this rule after the message arrives sent to SMS Distribution move it to the BugReports folder For some reason, though, the first rule never seems to fire even though it's higher in the list, so an email like the following doesn't get caught by it: From: <SourceDeviceUID> To: SMS Distributor Subject: Message from <SourceDeviceUID> Message: <source_device_number>: Device encountered a problem. Details below... ...where <source_device_number> is an integer. The second rule works fine. But for some high-priority devices, I want them automatically sorted. Why might that first rule fail?

    Read the article

  • What CPAN module can summarize Perl error logs?

    - by mithaldu
    I'm maintaining some website code that will soon dump all its errors and warnings into a log file. In order to make this a bit more proactive, I plan to parse this log file daily, summarize the warnings and errors (i.e. count the occurrence of each specific one and group by either warning/error) and then email this to the devs on the project. Admittedly this would likely be rather trivial with a hash and some further fiddling, but I wondered whether there is a suitable module on CPAN that I could use to do this task. It would either be one that summarizes specifically Perl error/warning logs or one that summarizes arbitrary text files. Any suggestions?
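
    If no ready-made module turns up, the hand-rolled version really is only a few lines. A rough sketch, assuming one warning or error per log line:

      #!/usr/bin/perl
      use strict;
      use warnings;

      my %count;
      while ( my $line = <> ) {
          chomp $line;
          # strip volatile details such as line numbers so identical messages group together
          $line =~ s/\bline \d+\.?$/line N./;
          $count{$line}++;
      }

      # print the most frequent messages first
      for my $msg ( sort { $count{$b} <=> $count{$a} } keys %count ) {
          printf "%6d  %s\n", $count{$msg}, $msg;
      }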

    Read the article

  • Apache in OS X not displaying localhost nor vhosts correctly

    - by Marcus
    I've encountered a really odd problem in my development environment, and I really can't make any sense of it. It started when a locally developed PHP site refused to update any content I edited in a file – no text changes, nothing. So if the document was: <h2>Hello!</h2> and I edited it to <h2>What's wrong?!</h2> it still output <h2>Hello!</h2>. I thought it was some kind of caching problem, but no "hard reloads" in the browser nor sudo apachectl -k restart sorted it out. Only a restart of my Mac did finally fix it. Now, a few days later even stranger issues are appearing. I have a LAMP stack installed via Homebrew; in httpd-vhosts.conf I've set ~/Dev/ as my localhost, and I set up a <VirtualHost *:80> for each project ("ServerName projectname.dev" for example). However, whatever files or folders I put in ~/Dev/ have stopped showing up on localhost, and new VirtualHost directives don't work. Three projects + "docs" in the folder: But "localhost" only displays the two older projects...? So, as I've said – I've tried restarting Apache (without errors), clearing browser caches (tried in three browsers, Chrome, Safari and Firefox) and even rebooting the Mac. Nothing. Any ideas? Running OS X 10.8.5 and Apache 2.2.24.
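
    For reference, a sketch of the kind of vhost block described above (paths are assumptions). On Apache 2.2, name-based vhosts also need a single NameVirtualHost *:80 directive, and each *.dev name must resolve, e.g. via an /etc/hosts entry:

      NameVirtualHost *:80

      <VirtualHost *:80>
          ServerName projectname.dev
          DocumentRoot "/Users/marcus/Dev/projectname"
          <Directory "/Users/marcus/Dev/projectname">
              AllowOverride All
              Order allow,deny
              Allow from all
          </Directory>
      </VirtualHost>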

    Read the article

  • How to Store Cookies in Ruby?

    - by viatropos
    I am programmatically accessing authenticated content in my CDN on Google App Engine, and it's returning a cookie that I'm supposed to store: {"set-cookie"=>"ACSID=cookie-hash; expires=Mon, 12-Apr-2010 01:56:06 GMT; path=/"} What do I do with that? This is my first time dealing with cookies. I can put it in the header of the next request, but what's the recommended way to store it? I'm testing this with irb in the console and when I exit and try again, the cookie is gone. How do I save it for a few days/weeks? I'm using pure Ruby without Rails or anything. Thanks so much.
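
    A rough sketch of one way to persist the cookie between irb sessions in plain Ruby (the file name and parsing below are assumptions; a cookie-jar library such as the one bundled with mechanize would handle expiry and paths more robustly):

      require 'yaml'
      require 'time'

      COOKIE_FILE = 'cookies.yml'   # hypothetical location

      # header looks like "ACSID=...; expires=Mon, 12-Apr-2010 01:56:06 GMT; path=/"
      def save_cookie(set_cookie_header)
        value, *attrs = set_cookie_header.split('; ')
        expires = attrs.find { |a| a =~ /\Aexpires=/i }
        data = {
          'cookie'  => value,
          'expires' => expires ? Time.parse(expires.sub(/\Aexpires=/i, '')) : nil
        }
        File.open(COOKIE_FILE, 'w') { |f| f.write(data.to_yaml) }
      end

      # returns the value to send back in the "Cookie" request header, or nil
      def load_cookie
        return nil unless File.exist?(COOKIE_FILE)
        data = YAML.load_file(COOKIE_FILE)
        return nil if data['expires'] && data['expires'] < Time.now
        data['cookie']
      end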

    Read the article

  • What is the sense of not permitting passwords longer than xx chars?

    - by reox
    It's more of a usability question, or maybe a database or even a security one (consider injection attacks), but what is the sense of limiting the user's password to no more than xx chars? It does not make any sense to me, because longer passwords are generally considered better and harder to crack, and some users use password safes, so the password length should not matter. I understand that passwords with more than 20 chars are hard to remember, but if you use diceware or a password safe you don't have any problem with that. I really can't understand why there are sites that say "your password needs to be between 5 and 8 chars"... Also, the password should be saved as a hash, so the length of the field in the database is fixed anyway – where is the problem? I think that most of the sites where the password has to be of limited length are not even using any hashing method...
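
    The storage argument is easy to demonstrate: a hashed password occupies the same number of bytes no matter how long the input is, so the database column never forces an upper limit. A tiny sketch (SHA-256 for brevity; a slow password hash such as bcrypt is the better choice in practice, but the size point is the same):

      import hashlib

      for pw in ("short", "a" * 500):
          digest = hashlib.sha256(pw.encode()).hexdigest()
          print(len(digest))   # 64 hex characters both times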

    Read the article

  • How to open email by x-gm-msgid in Gmail with Javascript

    - by Rui J
    I'm writing an extension which surfaces links to gmail messages. As the UI loads right in Gmail, I should be able to click on one of these links and have Gmail load it (without refreshing). I have "x-gm-msgid" available and theoretically, I should just be able to navigate to "https://mail.google.com/mail/u/0/#inbox/[x-gm-msgid]". I've tried using location.hash = "#inbox/[x-gm-msgid]" I've tried using history.pushState(null, null, "/mail/u/0/#inbox/[x-gm-msgid]") Neither of which works. Gmail just thwarts any attempt to change the URL (unless it is done via user interaction) Any thoughts on how to get around this restriction?

    Read the article

  • testing .mobile mime format with capybara / rspec

    - by Chris Beck
    For detecting and responding to mobile user agents, I'm using Mime::Type.register_alias "text/html", :mobile, and I'm wondering what the best approach is to test this with Capybara. This article suggests setting up an iPhone driver with Capybara.register_driver :iphone do |app|: http://blog.plataformatec.com.br/2011/03/configuring-user-agents-with-capybara-selenium-webdriver/ But I'd like a more flexible approach where the MIME type is set via the URL extension (localhost/index.mobile) and where I can do visit user_path(format: :mobile). Rails understands the extension and sets the format in the params hash, but how do I get the URL helper methods to add that to all URLs as a file extension?

    Read the article

  • Why does Color.IsNamedColor not work when I create a color using Color.FromArgb()?

    - by Jon B
    In my app I allow the user to build a color, and then show him the name or value of the color later on. If the user picks red (full red, not red-ish), I want to show him "red". If he picks some strange color, then the hex value would be just fine. Here's sample code that demonstrates the problem: static string GetName(int r, int g, int b) { Color c = Color.FromArgb(r, g, b); // Note that specifying a = 255 doesn't make a difference if (c.IsNamedColor) { return c.Name; } else { // return hex value } } Even with very obvious colors like red IsNamedColor never returns true. Looking at the ARGB values for my color and Color.Red, I see no difference. However, calling Color.Red.GetHashCode() returns a different hash code than Color.FromArgb(255, 0, 0).GetHashCode(). How can I create a color using user specified RGB values and have the Name property come out right?
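
    One workaround sketch (not necessarily the intended API): Color.FromArgb never sets the "named" flag even when the values match a known color, so the name has to be looked up by comparing ARGB values against the KnownColor table:

      using System;
      using System.Drawing;

      static class ColorNamer
      {
          // Returns "Red" for (255, 0, 0), otherwise a hex string such as "#3A7F00".
          public static string GetName(int r, int g, int b)
          {
              Color c = Color.FromArgb(255, r, g, b);
              foreach (KnownColor kc in Enum.GetValues(typeof(KnownColor)))
              {
                  Color known = Color.FromKnownColor(kc);
                  // skip system colors (Control, Window, ...), whose values depend on the theme
                  if (!known.IsSystemColor && known.ToArgb() == c.ToArgb())
                      return known.Name;
              }
              return string.Format("#{0:X2}{1:X2}{2:X2}", r, g, b);
          }
      }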

    Read the article

  • mdadm: Win7-install created a boot partition on one of my RAID6 drives. How to rebuild?

    - by EXIT_FAILURE
    My problem happened when I attempted to install Windows 7 on it's own SSD. The Linux OS I used which has knowledge of the software RAID system is on a SSD that I disconnected prior to the install. This was so that windows (or I) wouldn't inadvertently mess it up. However, and in retrospect, foolishly, I left the RAID disks connected, thinking that windows wouldn't be so ridiculous as to mess with a HDD that it sees as just unallocated space. Boy was I wrong! After copying over the installation files to the SSD (as expected and desired), it also created an ntfs partition on one of the RAID disks. Both unexpected and totally undesired! . I changed out the SSDs again, and booted up in linux. mdadm didn't seem to have any problem assembling the array as before, but if I tried to mount the array, I got the error message: mount: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so dmesg: EXT4-fs (md0): ext4_check_descriptors: Block bitmap for group 0 not in group (block 1318081259)! EXT4-fs (md0): group descriptors corrupted! I then used qparted to delete the newly created ntfs partition on /dev/sdd so that it matched the other three /dev/sd{b,c,e}, and requested a resync of my array with echo repair > /sys/block/md0/md/sync_action This took around 4 hours, and upon completion, dmesg reports: md: md0: requested-resync done. A bit brief after a 4-hour task, though I'm unsure as to where other log files exist (I also seem to have messed up my sendmail configuration). In any case: No change reported according to mdadm, everything checks out. mdadm -D /dev/md0 still reports: Version : 1.2 Creation Time : Wed May 23 22:18:45 2012 Raid Level : raid6 Array Size : 3907026848 (3726.03 GiB 4000.80 GB) Used Dev Size : 1953513424 (1863.02 GiB 2000.40 GB) Raid Devices : 4 Total Devices : 4 Persistence : Superblock is persistent Update Time : Mon May 26 12:41:58 2014 State : clean Active Devices : 4 Working Devices : 4 Failed Devices : 0 Spare Devices : 0 Layout : left-symmetric Chunk Size : 4K Name : okamilinkun:0 UUID : 0c97ebf3:098864d8:126f44e3:e4337102 Events : 423 Number Major Minor RaidDevice State 0 8 16 0 active sync /dev/sdb 1 8 32 1 active sync /dev/sdc 2 8 48 2 active sync /dev/sdd 3 8 64 3 active sync /dev/sde Trying to mount it still reports: mount: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so and dmesg: EXT4-fs (md0): ext4_check_descriptors: Block bitmap for group 0 not in group (block 1318081259)! EXT4-fs (md0): group descriptors corrupted! I'm a bit unsure where to proceed from here, and trying stuff "to see if it works" is a bit too risky for me. This is what I suggest I should attempt to do: Tell mdadm that /dev/sdd (the one that windows wrote into) isn't reliable anymore, pretend it is newly re-introduced to the array, and reconstruct its content based on the other three drives. I also could be totally wrong in my assumptions, that the creation of the ntfs partition on /dev/sdd and subsequent deletion has changed something that cannot be fixed this way. My question: Help, what should I do? If I should do what I suggested , how do I do that? 
From reading documentation, etc, I would think maybe: mdadm --manage /dev/md0 --set-faulty /dev/sdd mdadm --manage /dev/md0 --remove /dev/sdd mdadm --manage /dev/md0 --re-add /dev/sdd However, the documentation examples suggest /dev/sdd1, which seems strange to me, as there is no partition there as far as linux is concerned, just unallocated space. Maybe these commands won't work without. Maybe it makes sense to mirror the partition table of one of the other raid devices that weren't touched, before --re-add. Something like: sfdisk -d /dev/sdb | sfdisk /dev/sdd Bonus question: Why would the Windows 7 installation do something so st...potentially dangerous? Update I went ahead and marked /dev/sdd as faulty, and removed it (not physically) from the array: # mdadm --manage /dev/md0 --set-faulty /dev/sdd # mdadm --manage /dev/md0 --remove /dev/sdd However, attempting to --re-add was disallowed: # mdadm --manage /dev/md0 --re-add /dev/sdd mdadm: --re-add for /dev/sdd to /dev/md0 is not possible --add, was fine. # mdadm --manage /dev/md0 --add /dev/sdd mdadm -D /dev/md0 now reports the state as clean, degraded, recovering, and /dev/sdd as spare rebuilding. /proc/mdstat shows the recovery progress: md0 : active raid6 sdd[4] sdc[1] sde[3] sdb[0] 3907026848 blocks super 1.2 level 6, 4k chunk, algorithm 2 [4/3] [UU_U] [>....................] recovery = 2.1% (42887780/1953513424) finish=348.7min speed=91297K/sec nmon also shows expected output: ¦sdb 0% 87.3 0.0| > |¦ ¦sdc 71% 109.1 0.0|RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR > |¦ ¦sdd 40% 0.0 87.3|WWWWWWWWWWWWWWWWWWWW > |¦ ¦sde 0% 87.3 0.0|> || It looks good so far. Crossing my fingers for another five+ hours :) Update 2 The recovery of /dev/sdd finished, with dmesg output: [44972.599552] md: md0: recovery done. [44972.682811] RAID conf printout: [44972.682815] --- level:6 rd:4 wd:4 [44972.682817] disk 0, o:1, dev:sdb [44972.682819] disk 1, o:1, dev:sdc [44972.682820] disk 2, o:1, dev:sdd [44972.682821] disk 3, o:1, dev:sde Attempting mount /dev/md0 reports: mount: wrong fs type, bad option, bad superblock on /dev/md0, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so And on dmesg: [44984.159908] EXT4-fs (md0): ext4_check_descriptors: Block bitmap for group 0 not in group (block 1318081259)! [44984.159912] EXT4-fs (md0): group descriptors corrupted! I'm not sure what do do now. Suggestions? 
Output of dumpe2fs /dev/md0: dumpe2fs 1.42.8 (20-Jun-2013) Filesystem volume name: Atlas Last mounted on: /mnt/atlas Filesystem UUID: e7bfb6a4-c907-4aa0-9b55-9528817bfd70 Filesystem magic number: 0xEF53 Filesystem revision #: 1 (dynamic) Filesystem features: has_journal ext_attr resize_inode dir_index filetype extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize Filesystem flags: signed_directory_hash Default mount options: user_xattr acl Filesystem state: clean Errors behavior: Continue Filesystem OS type: Linux Inode count: 244195328 Block count: 976756712 Reserved block count: 48837835 Free blocks: 92000180 Free inodes: 243414877 First block: 0 Block size: 4096 Fragment size: 4096 Reserved GDT blocks: 791 Blocks per group: 32768 Fragments per group: 32768 Inodes per group: 8192 Inode blocks per group: 512 RAID stripe width: 2 Flex block group size: 16 Filesystem created: Thu May 24 07:22:41 2012 Last mount time: Sun May 25 23:44:38 2014 Last write time: Sun May 25 23:46:42 2014 Mount count: 341 Maximum mount count: -1 Last checked: Thu May 24 07:22:41 2012 Check interval: 0 (<none>) Lifetime writes: 4357 GB Reserved blocks uid: 0 (user root) Reserved blocks gid: 0 (group root) First inode: 11 Inode size: 256 Required extra isize: 28 Desired extra isize: 28 Journal inode: 8 Default directory hash: half_md4 Directory Hash Seed: e177a374-0b90-4eaa-b78f-d734aae13051 Journal backup: inode blocks dumpe2fs: Corrupt extent header while reading journal super block

    Read the article

  • Convert byte array to understandable String

    - by Ender
    I have a program that handles byte arrays in Java, and now I would like to write this into an XML file. However, I am unsure as to how I can convert the following byte array into a sensible String to write to a file. Assuming they were Unicode characters, I attempted the following code: String temp = new String(encodedBytes, "UTF-8"); only to have the debugger show that the encodedBytes contain "\ufffd\ufffd ^\ufffd\ufffd-m\ufffd\ufffd\ufffd \ufffd\ufffdIA\ufffd\ufffd". The String should contain a hash in alphanumerical format. How would I turn the above String into a sensible String for output?
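
    A raw hash is binary data, not UTF-8 text (hence the \ufffd replacement characters), so the usual fix is to encode the bytes rather than decode them. A small sketch using hex; Base64 works just as well:

      // Sketch: encode the hash bytes as hex text that is safe to put in XML.
      public final class HashText {
          public static String toHex(byte[] bytes) {
              StringBuilder sb = new StringBuilder(bytes.length * 2);
              for (byte b : bytes) {
                  sb.append(String.format("%02x", b));
              }
              return sb.toString();
          }

          public static void main(String[] args) {
              byte[] encodedBytes = {(byte) 0xde, (byte) 0xad, (byte) 0xbe, (byte) 0xef};
              System.out.println(toHex(encodedBytes)); // prints "deadbeef"
          }
      }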

    Read the article

  • File Open/Save Dialog always 'Not Responding'

    - by Amanda
    I am aware that this question has been asked before; however, that solution didn't work for me. Whenever I go to open/save a file in any program, the dialog does not come up, and the application goes to 'Not Responding'. This goes on for a few minutes, and then stops, but still does not open the dialog. There have been a few occasions where the dialog has suddenly worked for a while, but then the problem comes back. I have tried many solutions given around the internet: I have cleaned it with CCleaner, disk defragged it, sorted the index. Nothing works. Is there anybody who has any idea what the problem is? This is Windows Vista. I'm not quite sure what kind of information you guys would need about my laptop, but I'll give it to you if you need it. :) Solutions I have tried: I have tried deleting the 'Shellicon' folder in the registry, which wasn't even in there. I have looked for mapped network drives lingering around, and I haven't come across any. I have tried rebuilding the Windows Search indexes...no difference.

    Read the article

  • How to verify a digital signature with openssl

    - by Aaron Carlino
    I'm using a third-party credit card processing service (Paybox) that, after a successful transaction, redirects back to the website with a signature in the URL as a security measure to prevent people from manipulating data. It's supposed to prove that the request originated from this service. So my success URL looks something like this: /success.php?signature=[HUGE HASH] I have no idea where to start with verifying this signature. This service does provide a public key, and I assume I need to create a private key, but I don't know much beyond that. I'm pretty good with Linux, and I know I'll have to run some openssl commands. I'm writing the verification script in PHP, which also has native openssl functions. If anyone could please push me in the right direction with some pseudocode, or even functional code, I'd be very grateful. Thanks.
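
    Verification only needs the provider's public key – no private key has to be created on the site's side. A rough sketch, assuming the provider signs the rest of the query string with RSA/SHA-1 and sends the signature base64- and URL-encoded (the exact signed string and encoding must come from Paybox's documentation):

      <?php
      // hypothetical payload: everything in the query string except the signature itself
      $data      = 'amount=1000&ref=abc123';
      $signature = base64_decode(urldecode($_GET['signature']));

      $publicKey = openssl_pkey_get_public(file_get_contents('paybox_public_key.pem'));

      // openssl_verify returns 1 = valid, 0 = invalid, -1 = error
      $ok = openssl_verify($data, $signature, $publicKey, OPENSSL_ALGO_SHA1);

      echo $ok === 1 ? 'signature valid' : 'signature INVALID';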

    Read the article

  • I've just set up FreeBSD 8.0 and can't login with ssh

    - by Matt
    /etc/hosts.allow is set to allow any protocol from anywhere. I can "ssh localhost" and it works. I simply get "connection refused" from PuTTY on another machine. Any ideas? Will try to get a copy of the sshd_server.conf file as soon as I can find a flash disk to copy it to, but I thought someone might know what you need to set initially to permit login. EDIT: I think I can see why it's not working now. If I telnet to the IP address of the server I'm seeing MGE UPS SYSTEMS SNMP Web/Agent configuration menu. Enter Password: Doh. OK, so the IP address is assigned by DHCP, but it seems there is already a device statically assigned to that address. I'll put in a reservation and try again. OK, sorted now. It was an IP address conflict. Windows DHCP isn't smart enough to check if there is something listening on the address before first assigning it.

    Read the article

  • How to preserve data integrity while minimizing the transmission size

    - by user1500578
    We have sensors in the wild that send their data to a server every day via TCP/IP, either through 3G or through satellite for the physical layer. The sensors can automatically switch from one to the other depending on their location and the quality of the signal with the local 3G operator. Given that the 3G and satellite communications are very expensive, we want to minimize the amount of data to send. But also, we want to protect ourselves from lost data. What would be the best strategy to ensure with reasonable certainty that the integrity of our data is preserved, while minimizing the amount of redundancy, i.e. the amount of data transmitted? I've read about the zfec codec, but I'm not sure if we need to transmit all the chunks, or if we need to send a hash code along with each chunk.
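
    One low-overhead option is an application-level integrity check per message (a different concern from forward error correction such as zfec, which adds redundancy to survive loss). A minimal sketch with made-up framing: a length prefix plus a truncated SHA-256 digest, costing a handful of bytes per message:

      import hashlib
      import struct

      DIGEST_LEN = 8  # bytes of digest kept per message

      def frame(payload: bytes) -> bytes:
          digest = hashlib.sha256(payload).digest()[:DIGEST_LEN]
          return struct.pack("!I", len(payload)) + payload + digest

      def unframe(message: bytes) -> bytes:
          (length,) = struct.unpack("!I", message[:4])
          payload = message[4:4 + length]
          digest = message[4 + length:4 + length + DIGEST_LEN]
          if hashlib.sha256(payload).digest()[:DIGEST_LEN] != digest:
              raise ValueError("corrupted payload")
          return payload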

    Read the article

  • Comparison of collection datatypes in C#

    - by Joel in Gö
    Does anyone know of a good overview of the different C# collection types? I am looking for something showing which basic operations such as Add, Remove, RemoveLast etc. are supported, and giving the relative performance. It would be particularly interesting for the various generic classes - and even better if it showed, e.g., whether there is a difference in performance between a List<T> where T is a class and one where T is a struct. A start would be a nice cheat sheet for the abstract data structures, comparing linked lists, hash tables etc. Thanks!

    Read the article

  • Why use hashing to create pathnames for large collections of files?

    - by Stephen
    Hi, I noticed a number of cases where an application or database stored collections of files/blobs using a hash to determine the path and filename. I believe the intended outcome is a situation where the path never gets too deep, and the folders never get too full - too many files (or folders) in a folder making for slower access. EDIT: Examples are often digital libraries or repositories, though the simplest example I can think of (that can be installed in about 30s) is the Zotero document/citation database. Why do this? EDIT: thanks, Mat, for the answer - does this technique of using a hash to create a file path have a name? Is it a pattern? I'd like to read more, but have failed to find anything in the ACM Digital Library.
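
    The usual layout is to hash the object's key and use the first few hex digits as nested directory names, which keeps every directory small and evenly filled regardless of how many objects there are; Git's object store (.git/objects/ab/...) uses the same idea. A small sketch with assumed names:

      import hashlib
      import os

      def blob_path(root: str, key: str) -> str:
          digest = hashlib.sha256(key.encode("utf-8")).hexdigest()
          # two levels of 256 directories each, then the full digest as the filename
          return os.path.join(root, digest[:2], digest[2:4], digest)

      print(blob_path("/var/store", "document-42.pdf"))
      # e.g. /var/store/1f/9a/1f9a...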

    Read the article

  • When does ref($variable) return 'IO'?

    - by Zaid
    Here's the relevant excerpt from the documentation of the ref function: The value returned depends on the type of thing the reference is a reference to. Builtin types include: SCALAR ARRAY HASH CODE REF GLOB LVALUE FORMAT IO VSTRING Regexp Based on this, I imagined that calling ref on a filehandle would return 'IO'. Surprisingly, it doesn't: use strict; use warnings; open my $fileHandle, '<', 'aValidFile'; close $fileHandle; print ref $fileHandle; # prints 'GLOB', not 'IO' perlref tries to explain why: It isn't possible to create a true reference to an IO handle (filehandle or dirhandle) using the backslash operator. The most you can get is a reference to a typeglob, which is actually a complete symbol table entry [...] However, you can still use type globs and globrefs as though they were IO handles. In what circumstances would ref return 'IO' then?
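
    A short sketch of where 'IO' does show up (class names vary between Perl versions, so the blessed case below is only indicative): ref reports 'IO' for a reference to the IO slot of a glob, which open never hands back directly but which can be extracted with *{$fh}{IO}:

      use strict;
      use warnings;
      use Scalar::Util qw(reftype);

      open my $fh, '<', 'aValidFile' or die "open: $!";

      my $io = *{$fh}{IO};            # pull the IO member out of the typeglob

      print ref $fh,      "\n";       # GLOB  (reference to the whole typeglob)
      print reftype($io), "\n";       # IO    (the underlying builtin type)
      print ref $io,      "\n";       # 'IO' if unblessed, otherwise the handle's class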

    Read the article

  • Data Structure for a particular problem??

    - by AGeek
    Hi, Which data structure can perform insertion, deletion and searching operations in O(1) time in the worst case? We may assume the set of elements are integers drawn from a finite set 1,2,...,n, and initialization can take O(n) time. I can only think of implementing a hash table. Implementing it with trees will not give O(1) time complexity for any of the operations. Or is it possible? Kindly share your views on this, or any other data structure apart from these. Thanks.
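
    With keys restricted to 1..n, the textbook answer is a direct-address table (an array or bit vector indexed by the key): O(n) to initialize once, then genuinely O(1) worst case for insert, delete and membership, whereas a hash table only gives O(1) in expectation. A tiny sketch:

      class DirectAddressSet:
          def __init__(self, n: int):
              self.present = [False] * (n + 1)   # O(n) one-time initialization

          def insert(self, x: int) -> None:
              self.present[x] = True             # O(1) worst case

          def delete(self, x: int) -> None:
              self.present[x] = False            # O(1) worst case

          def contains(self, x: int) -> bool:
              return self.present[x]             # O(1) worst case

      s = DirectAddressSet(10)
      s.insert(4)
      print(s.contains(4), s.contains(7))        # True False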

    Read the article

  • List of fundamental data structures - what am I missing?

    - by jboxer
    I've been studying my fundamental data structures a bunch recently, trying to make sure I've got them down cold. By "fundamental", I mean the real basic ones. Fancy ones like Red-Black Trees and Bloom Filters are clearly worth knowing, but they're usually either enhancements of fundamental ones (Red-Black Trees are binary search trees with special properties to keep them balanced) or they're only useful in very specific situations (Bloom Filters). So far, I'm "fluent" in the following data structures: Arrays Linked Lists Stacks/Queues Binary Search Trees Heaps/Priority Queues Hash Tables However, I feel like I'm missing something. Are there any fundamental ones that I'm forgetting about? EDIT: Added these after posting the question Strings (suggested by catchmeifyoutry) Sets (suggested by Peter) Graphs (suggested by Nick D and aJ) B-Trees (Suggested by tloach) I'm a little on-the-fence about whether these are too fancy or not, but I think they're different enough from the fundamental structures (and important enough) to be worth studying as fundamental.

    Read the article

  • c# multi threaded file processing

    - by user177883
    There is a folder that contains 1000 small text files. I aim to parse and process all of them while more files are being populated into the folder. My intention is to multithread this operation, as the single-threaded prototype took 6 minutes to process 1000 files. I'd like to have reader and writer thread(s) as follows: while the reader thread(s) are reading the files, I'd like to have writer thread(s) to process them. Once the reader has started reading a file, I'd like to mark it as being processed, such as by renaming it; once it's read, rename it to completed. How should I approach such a multithreaded application? Is it better to use a distributed hash table or a queue? Which data structure should I use that would avoid locks? Would you have a better approach to this scheme that you'd like to share?
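
    One common shape for this is a bounded producer/consumer queue: a single reader enumerates and renames files, a pool of workers consumes them, and BlockingCollection handles all the synchronization so no explicit locks are needed. A sketch under assumed paths and extensions (.NET 4.5 for Task.Run):

      using System;
      using System.Collections.Concurrent;
      using System.IO;
      using System.Threading.Tasks;

      class FileProcessor
      {
          static void Main()
          {
              var queue = new BlockingCollection<string>(boundedCapacity: 100);
              const string folder = @"C:\incoming";               // assumed path

              var reader = Task.Run(() =>
              {
                  foreach (var file in Directory.EnumerateFiles(folder, "*.txt"))
                  {
                      var claimed = file + ".processing";          // mark the file as taken
                      File.Move(file, claimed);
                      queue.Add(claimed);
                  }
                  queue.CompleteAdding();
              });

              var workers = new Task[Environment.ProcessorCount];
              for (int i = 0; i < workers.Length; i++)
              {
                  workers[i] = Task.Run(() =>
                  {
                      foreach (var file in queue.GetConsumingEnumerable())
                      {
                          var text = File.ReadAllText(file);       // parse/process here
                          File.Move(file, Path.ChangeExtension(file, ".completed"));
                      }
                  });
              }

              Task.WaitAll(workers);
              reader.Wait();
          }
      }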

    Read the article

  • How to sort a JTable by providing a column index externally?

    - by user345940
    I would like to implement sorting on JTable by providing column index externally in program. Here is my sample code in which i have initialize JTable, Add one Column and 30 rows to JTable. After rows has been added i am sorting JTable by providing column index 0 but i could not get sorted data. how can i get my first column in sorted order? what's wrong with my code. **Why sortCTableonColumnIndex() method could not sort data for specify column index? ` public class Test { private JTable oCTable; private DefaultTableModel oDefaultTableModel; private JScrollPane oPane; private JTableHeader oTableHeader; private TableRowSorter sorter; public void adddata() { for (int i = 0; i < 30; i++) { Object[] row = new Object[1]; String sValueA = "A"; String sValueB = "A"; row[0] = ""; if (i % 2 == 0) { if (i < 15) { sValueA = sValueA + sValueA; row[1] = sValueA; } else { if (i == 16) { sValueB = "D"; row[1] = sValueA; } else { sValueB = sValueB + sValueB; row[1] = sValueA; } } } else { if (i < 15) { sValueB = sValueB + sValueB; row[1] = sValueB; } else { if (i == 17) { sValueB = "C"; row[1] = sValueB; } else { sValueB = sValueB + sValueB; row[1] = sValueB; } } } } } public void createTable() { oCTable = new JTable(); oDefaultTableModel = new DefaultTableModel(); oCTable.setModel(oDefaultTableModel); oTableHeader = oCTable.getTableHeader(); oCTable.setAutoResizeMode(oCTable.AUTO_RESIZE_OFF); oCTable.setFillsViewportHeight(true); JTable oTable = new LineNumberTable(oCTable); oPane = new JScrollPane(oCTable); oPane.setRowHeaderView(oTable); JPanel oJPanel = new JPanel(); oJPanel.setLayout(new BorderLayout()); oJPanel.add(oPane, BorderLayout.CENTER); JDialog oDialog = new JDialog(); oDialog.add(oJPanel); oDialog.setPreferredSize(new Dimension(500, 300)); oDialog.pack(); oDialog.setVisible(true); } public void insert() { oDefaultTableModel.addColumn("Name"); int iColumnPlace = ((DefaultTableModel) oCTable.getModel()).findColumn("Name"); CellRendererForRowHeader oCellRendererForRowHeader = new CellRendererForRowHeader(); TableColumn Column = oCTable.getColumn(oTableHeader.getColumnModel().getColumn(iColumnPlace).getHeaderValue()); Column.setPreferredWidth(300); Column.setMaxWidth(300); Column.setMinWidth(250); Column.setCellRenderer(oCellRendererForRowHeader); for (int i = 0; i < 30; i++) { Object[] row = new Object[1]; String sValueA = "A"; if (i % 2 == 0) { if (i < 15) { sValueA = sValueA + "a"; oDefaultTableModel.insertRow(oCTable.getRowCount(), new Object[]{""}); oDefaultTableModel.setValueAt(sValueA, i, 0); } else { if (i == 16) { sValueA = sValueA + "b"; oDefaultTableModel.insertRow(oCTable.getRowCount(), new Object[]{""}); oDefaultTableModel.setValueAt(sValueA, i, 0); } else { sValueA = sValueA + "c"; oDefaultTableModel.insertRow(oCTable.getRowCount(), new Object[]{""}); oDefaultTableModel.setValueAt(sValueA, i, 0); } } } else { if (i < 15) { sValueA = sValueA + "d"; oDefaultTableModel.insertRow(oCTable.getRowCount(), new Object[]{""}); oDefaultTableModel.setValueAt(sValueA, i, 0); } else { if (i == 17) { sValueA = sValueA + "e"; oDefaultTableModel.insertRow(oCTable.getRowCount(), new Object[]{""}); oDefaultTableModel.setValueAt(sValueA, i, 0); } else { sValueA = sValueA + "f"; oDefaultTableModel.insertRow(oCTable.getRowCount(), new Object[]{""}); oDefaultTableModel.setValueAt(sValueA, i, 0); } } } } } public void showTable() { createTable(); insert(); sortCTableonColumnIndex(0, true); } public void sortCTableonColumnIndex(int iColumnIndex, boolean bIsAsc) { sorter = new 
TableRowSorter(oDefaultTableModel); List<RowSorter.SortKey> sortKeys = new ArrayList<RowSorter.SortKey>(); if (bIsAsc) { sortKeys.add(new RowSorter.SortKey(iColumnIndex, SortOrder.ASCENDING)); } else { sortKeys.add(new RowSorter.SortKey(iColumnIndex, SortOrder.DESCENDING)); } sorter.setSortKeys(sortKeys); oDefaultTableModel.fireTableStructureChanged(); oCTable.updateUI(); } public static void main(String[] argu) { Test oTest = new Test(); oTest.showTable(); } class CellRendererForRowHeader extends DefaultTableCellRenderer { public Component getTableCellRendererComponent(JTable table, Object value, boolean isSelected, boolean hasFocus, int row, int column) { JLabel label = null; try { label = (JLabel) super.getTableCellRendererComponent(table, value, isSelected, hasFocus, row, column); if (column == 0) { label.setBackground(new JLabel().getBackground()); label.setForeground(Color.BLACK); } } catch (RuntimeException ex) { } return label; } } class LineNumberTable extends JTable { private JTable mainTable; public LineNumberTable(JTable table) { super(); mainTable = table; setAutoCreateColumnsFromModel(false); setModel(mainTable.getModel()); setAutoscrolls(false); addColumn(new TableColumn()); getColumnModel().getColumn(0).setCellRenderer(mainTable.getTableHeader().getDefaultRenderer()); getColumnModel().getColumn(0).setPreferredWidth(40); setPreferredScrollableViewportSize(getPreferredSize()); } @Override public boolean isCellEditable(int row, int column) { return false; } @Override public Object getValueAt(int row, int column) { return Integer.valueOf(row + 1); } @Override public int getRowHeight(int row) { return mainTable.getRowHeight(); } } } `
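
    For comparison, a minimal self-contained sketch (not the code above): the step that is easy to miss is that a TableRowSorter only affects the view after it has been attached to the table with setRowSorter(); constructing it and calling setSortKeys() alone never reorders anything:

      import java.util.ArrayList;
      import java.util.List;
      import javax.swing.JTable;
      import javax.swing.RowSorter;
      import javax.swing.SortOrder;
      import javax.swing.table.DefaultTableModel;
      import javax.swing.table.TableRowSorter;

      public class SortByColumnDemo {
          public static void main(String[] args) {
              DefaultTableModel model = new DefaultTableModel(new Object[]{"Name"}, 0);
              for (String s : new String[]{"Ad", "Aa", "Ac", "Ab"}) {
                  model.addRow(new Object[]{s});
              }

              JTable table = new JTable(model);
              TableRowSorter<DefaultTableModel> sorter =
                      new TableRowSorter<DefaultTableModel>(model);
              table.setRowSorter(sorter);                               // attach the sorter

              List<RowSorter.SortKey> keys = new ArrayList<RowSorter.SortKey>();
              keys.add(new RowSorter.SortKey(0, SortOrder.ASCENDING));  // external column index
              sorter.setSortKeys(keys);
              sorter.sort();

              for (int viewRow = 0; viewRow < table.getRowCount(); viewRow++) {
                  System.out.println(table.getValueAt(viewRow, 0));     // Aa Ab Ac Ad
              }
          }
      }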

    Read the article

  • pipelined function

    - by user289429
    Can someone provide an example of how to use a parallel table function in Oracle PL/SQL? We need to run massive queries for 15 years and combine the results. SELECT * FROM Table(TableFunction(cursor(SELECT * FROM year_table))) ...is what we want, effectively. The innermost select will give all the years, and the table function will take each year, run a massive query and return a collection. The problem we have is that all years are fed to a single invocation of the table function; we would prefer the table function to be called in parallel for each year. We tried all sorts of partitioning by hash and range and it didn't help. Also, can we drop the keyword PIPELINED from the function declaration, since we are not performing any transformation and just need the aggregate of the result set?
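
    A heavily hedged sketch of the PARALLEL_ENABLE route (the collection/object type names and the per-year work below are assumptions): the PARTITION clause is what lets Oracle spread the input cursor's rows across parallel slaves, and PIPELINED can indeed be dropped, at the cost of materializing the whole collection before anything is returned:

      -- result_tab_t / result_row_t are assumed, pre-created SQL types
      CREATE OR REPLACE FUNCTION year_report(p_years IN SYS_REFCURSOR)
        RETURN result_tab_t PIPELINED
        PARALLEL_ENABLE (PARTITION p_years BY ANY)
      AS
        l_year NUMBER;
      BEGIN
        LOOP
          FETCH p_years INTO l_year;
          EXIT WHEN p_years%NOTFOUND;
          -- run the heavy per-year query here and emit its rows
          PIPE ROW (result_row_t(l_year));
        END LOOP;
        RETURN;
      END;
      /

      -- the feeding cursor must itself be allowed to run in parallel
      SELECT *
        FROM TABLE(year_report(CURSOR(
               SELECT /*+ PARALLEL(y, 4) */ year FROM year_table y)));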

    Read the article

  • Criticize my code, please

    - by Micky
    Hey, I was applying for a position, and they asked me to complete a coding problem for them. I did so and submitted it, but I later found out I was rejected from the position. Anyways, I have an eclectic programming background so I'm not sure if my code is grossly wrong or if I just didn't have the best solution out there. I would like to post my code and get some feedback about it. Before I do, here's a description of a problem: You are given a sorted array of integers, say, {1, 2, 4, 4, 5, 8, 9, 9, 9, 9, 9, 9, 10, 10, 10, 11, 13 }. Now you are supposed to write a program (in C or C++, but I chose C) that prompts the user for an element to search for. The program will then search for the element. If it is found, then it should return the first index the entry was found at and the number of instances of that element. If the element is not found, then it should return "not found" or something similar. Here's a simple run of it (with the array I just put up): Enter a number to search for: 4 4 was found at index 2. There are 2 instances for 4 in the array. Enter a number to search for: -4. -4 is not in the array. They made a comment that my code should scale well with large arrays (so I wrote up a binary search). Anyways, my code basically runs as follows: Prompts user for input. Then it checks if it is within bounds (bigger than a[0] in the array and smaller than the largest element of the array). If so, then I perform a binary search. If the element is found, then I wrote two while loops. One while loop will count to the left of the element found, and the second while loop will count to the right of the element found. The loops terminate when the adjacent elements do not match with the desired value. EX: 4, 4, 4, 4, 4 The bold 4 is the value the binary search landed on. One loop will check to the left of it, and another loop will check to the right of it. Their sum will be the total number of instances of the the number four. Anyways, I don't know if there are any advanced techniques that I am missing or if I just don't have the CS background and made a big error. Any constructive critiques would be appreciated! #include <stdio.h> #include <stdlib.h> #include <string.h> #include <stddef.h> /* function prototype */ int get_num_of_ints( const int* arr, size_t r, int N, size_t* first, size_t* count ); int main() { int N; /* input variable */ int arr[]={1,1,2,3,3,4,4,4,4,5,5,7,7,7,7,8,8,8,9,11,12,12}; /* array of sorted integers */ size_t r = sizeof(arr)/sizeof(arr[0]); /* right bound */ size_t first; /* first match index */ size_t count; /* total number of matches */ /* prompts the user to enter input */ printf( "\nPlease input the integer you would like to find.\n" ); scanf( "%d", &N ); int a = get_num_of_ints( arr, r, N, &first, &count ); /* If the function returns -1 then the value is not found. Else it is returned */ if( a == -1) printf( "%d has not been found.\n", N ); else if(a >= 0){ printf( "The first matching index is %d.\n", first ); printf( "The total number of instances is %d.\n", count ); } return 0; } /* function definition */ int get_num_of_ints( const int* arr, size_t r, int N, size_t* first, size_t* count ) { int lo=0; /* lower bound for search */ int m=0; /* middle value obtained */ int hi=r-1; /* upper bound for search */ int w=r-1; /* used as a fixed upper bound to calculate the number of right instances of a particular value. 
*/ /* binary search to find if a value exists */ /* first check if the element is out of bounds */ if( N < arr[0] || arr[hi] < N ){ m = -1; } else{ /* binary search to find a value, if it exists, within given parameters */ while(lo <= hi){ m = (hi + lo)/2; if(arr[m] < N) lo = m+1; else if(arr[m] > N) hi = m-1; else if(arr[m]==N){ m=m; break; } } if (lo > hi) /* if it doesn't we assign it -1 */ m = -1; } /* If the value is found, then we compute the left and right instances of it */ if( m >= 0 ){ int j = m-1; /* starting with the first term to the left */ int L = 0; /* total number of left instances */ /* while loop computes total number of left instances */ while( j >= 0 && arr[j] == arr[m] ){ L++; j--; } /* There are six possible outcomes of this. Depending on the outcome, we must assign the first index variable accordingly */ if( j > 0 && L > 0 ) *first=j+1; else if( j==0 && L==0) *first=m; else if( j > 0 && L==0 ) *first=m; else if(j < 0 && L==0 ) *first=m; else if( j < 0 && L > 0 ) *first=0; else if( j=0 && L > 0 ) *first=j+1; int h = m + 1; /* starting with the first term to the right */ int R = 0; /* total number of right instances */ /* while loop computes total number of right instances */ /* we fixed w earlier so that it's value does not change */ while( arr[h]==arr[m] && h <= w ){ R++; h++; } *count = (R + L + 1); /* total number of instances stored as value of count */ return *first; /* first instance index stored here */ } /* if value does not exist, then we return a negative value */ else if( m==-1) return -1; }

    Read the article
