Search Results

Search found 2263 results on 91 pages for 'qantas 94 heavy'.


  • apache-memory-hacker-linux

    - by bibhudatta
    When we start the Linux system it uses only about 435 MB of memory, and this is a 4 GB server. When we start the httpd service it takes about 1000 MB, then it automatically keeps taking memory until it is all used and the server crashes. Even when we stop Apache, only about 200 MB is released. What could the problem be? Can anyone tell me what these hackers are doing? I can see them sending hits to my Apache, but I think they are doing it from this system. Below is the log. Please help me out with this.
    [root@host ~]# tail -20 /var/log/httpd/dostizone.com-combined.log
    180.76.5.143 - - [14/Nov/2011:02:30:16 +0530] "GET /blogs/10248/209403/nfl-panties-since-the-quality-of HTTP/1.1" 403 2298 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)"
    180.76.5.88 - - [14/Nov/2011:02:30:31 +0530] "GET /blogs/815/158725/new-jersey-attorney-search HTTP/1.1" 403 2290 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)"
    220.181.108.186 - - [14/Nov/2011:02:30:32 +0530] "GET / HTTP/1.1" 403 5043 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)"
    crawl-66-249-67-137.googlebot.com - - [14/Nov/2011:02:30:20 +0530] "GET /blogs/805/11279/supra-suprano-high-shoes HTTP/1.1" 200 30642 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
    crawl-66-249-68-51.googlebot.com - - [14/Nov/2011:02:30:37 +0530] "GET /blogs/10514/215084/oakland-raiders-sweatpants-tags HTTP/1.1" 403 2297 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
    220.181.94.237 - - [14/Nov/2011:02:30:12 +0530] "GET /profile/8509 HTTP/1.1" 200 236894 "-" "Sogou web spider/4.0(+http://www.sogou.com/docs/help/webmasters.htm#07)"
    220.181.94.237 - - [14/Nov/2011:02:30:43 +0530] "GET /mode-switch?return_url=%2Fblogs%2F8529%2F160217%2Fclimate-jordan-6 HTTP/1.1" 302 1 "-" "Sogou web spider/4.0(+http://www.sogou.com/docs/help/webmasters.htm#07)"
    crawl-66-249-68-51.googlebot.com - - [14/Nov/2011:02:30:44 +0530] "GET /blogs/390/61573/blackhawk-jerseys-from-the-you HTTP/1.1" 403 2293 "-" "SAMSUNG-SGH-E250/1.0 Profile/MIDP-2.0 Configuration/CLDC-1.1 UP.Browser/6.2.3.3.c.1.101 (GUI) MMP/2.0 (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)"
    124.115.0.159 - - [14/Nov/2011:02:30:24 +0530] "GET /blogs/693/46081/application/modules/Hecore/externals/scripts/core.js HTTP/1.1" 200 26869 "http://dostizone.com/blogs/693/46081/thomas-sabo-charms-hot-chilli" "Sosospider+(+http://help.soso.com/webspider.htm)"
    124.115.0.159 - - [14/Nov/2011:02:30:24 +0530] "GET /blogs/693/46081/application/modules/Activity/externals/scripts/core.js HTTP/1.1" 200 26873 "http://dostizone.com/blogs/693/46081/thomas-sabo-charms-hot-chilli" "Sosospider+(+http://help.soso.com/webspider.htm)"
    124.115.0.159 - - [14/Nov/2011:02:30:24 +0530] "GET /blogs/693/46081/application/modules/Hecore/externals/scripts/imagezoom/core.js HTTP/1.1" 200 26899 "http://dostizone.com/blogs/693/46081/thomas-sabo-charms-hot-chilli" "Sosospider+(+http://help.soso.com/webspider.htm)"
    180.76.5.153 - - [14/Nov/2011:02:30:50 +0530] "GET /blogs/10252/212268/cleveland-browns-authentic-jerse HTTP/1.1" 403 2298 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)"
    crawl-66-249-68-51.googlebot.com - - [14/Nov/2011:02:30:51 +0530] "GET /blogs/741/46260/chocolate-ugg-women-boots-1873 HTTP/1.1" 403 2293 "-" "SAMSUNG-SGH-E250/1.0 Profile/MIDP-2.0 Configuration/CLDC-1.1 UP.Browser/6.2.3.3.c.1.101 (GUI) MMP/2.0 (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)"
    124.115.1.7 - - [14/Nov/2011:02:30:40 +0530] "GET /blogs/682/97454/swarovski-jewellry-sale-articles HTTP/1.1" 200 25770 "-" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)"
    crawl-66-249-68-51.googlebot.com - - [14/Nov/2011:02:30:56 +0530] "GET /blogs/779/60941/players-a-to-z-michael-cuddyer HTTP/1.1" 403 2293 "-" "SAMSUNG-SGH-E250/1.0 Profile/MIDP-2.0 Configuration/CLDC-1.1 UP.Browser/6.2.3.3.c.1.101 (GUI) MMP/2.0 (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)"
    crawl-66-249-68-51.googlebot.com - - [14/Nov/2011:02:31:01 +0530] "GET /blogs/469/58551/chicago-bears-news-there-exist HTTP/1.1" 403 2293 "-" "SAMSUNG-SGH-E250/1.0 Profile/MIDP-2.0 Configuration/CLDC-1.1 UP.Browser/6.2.3.3.c.1.101 (GUI) MMP/2.0 (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)"
    220.181.94.237 - - [14/Nov/2011:02:30:54 +0530] "GET /blogs/8529/160217/climate-jordan-6 HTTP/1.1" 200 30750 "-" "Sogou web spider/4.0(+http://www.sogou.com/docs/help/webmasters.htm#07)"
    180.76.5.59 - - [14/Nov/2011:02:31:05 +0530] "GET /blogs/815/158197/cheap-calgary-flames-jerseys HTTP/1.1" 403 2292 "-" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)"
    crawl-66-249-68-51.googlebot.com - - [14/Nov/2011:02:31:06 +0530] "GET /mode-switch?return_url=%2Fblogs%2F387%2F45679%2Fhandbag-louis-vuitton-judy-mm-m4 HTTP/1.1" 403 2258 "-" "SAMSUNG-SGH-E250/1.0 Profile/MIDP-2.0 Configuration/CLDC-1.1 UP.Browser/6.2.3.3.c.1.101 (GUI) MMP/2.0 (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)"
    crawl-66-249-67-137.googlebot.com - - [14/Nov/2011:02:31:10 +0530] "GET /public/temporary/c83b731ecc556d7fd1a7732d9ac16ed6.png HTTP/1.1" 404 2305 "-" "Googlebot-Image/1

    Read the article

  • Unicast traffic between hosts on a switch leaving the switch by its uplink. Why?

    - by Rich Lafferty
    I have a weird thing happening on our network at my office which I can't quite get my head around. In particular I can't tell if it's a problem with a switch, or a problem with configuration. We have a Cisco SG300-52 switch (sw01) in the top of a rack in our server room, connected to another SG300-28 that acts as our core switch (core01). Both run layer 2 only, our firewalls do routing between VLANs. They have a dozen or so VLANs between them. Gi1 on sw01 is a trunk port connected to gi1 on core01. (Disclosure: There are other switches in our environment but I'm pretty sure I've isolated the problem down to these two. Happy to provide more info if necessary.) The behaviour I'm seeing is limited to one VLAN, vlan 12 -- or, at least, it's not happening on the other ones I checked (It's hard to guarantee the absence of packets), and it is: sw01 is forwarding, to core01, traffic which is between two hosts which are both plugged into sw01. (I noticed this because the IDS in our firewall gave a false positive on traffic which should not reach the firewall.) We noticed this mostly between our two dhcp/dns servers, net01 (10.12.0.10) and net02 (10.12.0.11). net01 is physical hardware and net02 is on a VMware ESX server. net01 is connected to gi44 on sw01 and net02's ESX server to gi11. [net01]----gi44-[sw01]-gi1----gi1-[core01] [net02]----gi11/ Let's see some interfaces! Remember, vlan 12 is the problem vlan. Of the others I explicitly verified that vlan 27 was not affected. Here's the two hosts' ports: esx01 contains net02. sw01#sh run int gi11 interface gigabitethernet11 description esx01 lldp med disable switchport trunk allowed vlan add 5-7,11-13,100 switchport trunk native vlan 27 ! sw01#sh run int gi44 interface gigabitethernet44 description net01-1 lldp med disable switchport mode access switchport access vlan 12 ! Here's the trunk on sw01. sw01#sh run int gi1 interface gigabitethernet1 description "trunk to core01" lldp med disable switchport trunk allowed vlan add 4-7,11-13,27,100 ! And the other end of the trunk on core01. interface gigabitethernet1 description sw01 macro description switch switchport trunk allowed vlan add 2-7,11-16,27,100 ! I have a monitor port on core01, thus: core01#sh run int gi12 interface gigabitethernet12 description "monitor port" port monitor GigabitEthernet 1 ! And the monitor port on core01 sees unicast traffic going between net01 and net02, both of which are on sw01! I've verified this with a monitor port on sw01 that sees the net01-net02 unicast traffic leaving via gi1 too. sw01 knows that both of those hosts are on ports that are not its trunk port: :) ratchet$ arp -a | grep net net02.2ndsiteinc.com (10.12.0.11) at 00:0C:29:1A:66:15 [ether] on eth0 net01.2ndsiteinc.com (10.12.0.10) at 00:11:43:D8:9F:94 [ether] on eth0 sw01#sh mac addr addr 00:0C:29:1A:66:15 Aging time is 300 sec Vlan Mac Address Port Type -------- --------------------- ---------- ---------- 12 00:0c:29:1a:66:15 gi11 dynamic sw01#sh mac addr addr 00:11:43:D8:9F:94 Aging time is 300 sec Vlan Mac Address Port Type -------- --------------------- ---------- ---------- 12 00:11:43:d8:9f:94 gi44 dynamic I also brought up an unused port on sw01 on vlan 12, but the unicast traffic was (as best as I could tell) not coming out that port. So it doesn't look like sw01 is pushing it out all its ports, just the right ports and also gi1! I've verified that sw01 is not filling up its address-table: sw01#sh mac addr count This may take some time. 
Capacity : 8192 Free : 7983 Used : 208 The full configs for both core01 and sw01 are available: core01, sw01. Finally, versions: sw01#sh ver SW version 1.1.2.0 ( date 12-Nov-2011 time 23:34:26 ) Boot version 1.0.0.4 ( date 08-Apr-2010 time 16:37:57 ) HW version V01 core01#sh ver SW version 1.1.2.0 ( date 12-Nov-2011 time 23:34:26 ) Boot version 1.1.0.6 ( date 11-May-2011 time 18:31:00 ) HW version V01 So my understanding is this: sw01 should take unicast traffic for net01 and send it only out net02's port, and vice versa; none of it should go out sw01's uplink. But core01, receiving traffic on gi1 for a host it knows is on gi1, is right in sending it out all of its ports. (That is: sw01 is misbehaving, but core01 is doing what it should given the circumstances.) My question is: Why is sw01 sending that unicast traffic out its uplink, gi1? (And pre-emptively: yes, I know SG300s leave much to be desired, and yes, we should have spanning-tree enabled, but that's where I'm at right now.)

    Read the article

  • Making a Statement: How to retrieve the T-SQL statement that caused an event

    - by extended_events
    If you’ve done any troubleshooting of T-SQL, you know that sooner or later, probably sooner, you’re going to want to take a look at the actual statements you’re dealing with. In extended events we offer an action (See the BOL topic that covers Extended Events Objects for a description of actions) named sql_text that seems like it is just the ticket. Well…not always – sounds like a good reason for a blog post.

    When is a statement not THE statement?
    The sql_text action returns the same information that is returned from DBCC INPUTBUFFER, which may or may not be what you want. For example, if you execute a stored procedure, the sql_text action will return something along the lines of “EXEC sp_notwhatiwanted” assuming that is the statement you sent from the client. Often times folks would like something more specific, like the actual statements that are being run from within the stored procedure or batch.

    Enter the stack
    Extended events offers another action, this one with the descriptive name of tsql_stack, that includes the sql_handle and offset information about the statements being run when an event occurs. With the sql_handle and offset values you can retrieve the specific statement you seek using the DMV dm_exec_sql_statement. The BOL topic for dm_exec_sql_statement provides an example for how to extract this information, so I’ll cover the gymnastics required to get the sql_handle and offset values out of the tsql_stack data collected by the action. I’m the first to admit that this isn’t pretty, but this is what we have in SQL Server 2008 and 2008 R2. We will be making it easier to get statement level information in the next major release of SQL Server.

    The sample code
    For this example I have a stored procedure that includes multiple statements and I have a need to differentiate between those two statements in my tracing. I’m going to track two events: module_end tracks the completion of the stored procedure execution and sp_statement_completed tracks the execution of each statement within a stored procedure. I’m adding the tsql_stack events (since that’s the topic of this post) and the sql_text action for comparison sake. (If you have questions about creating event sessions, check out Pedro’s post Introduction to Extended Events.)

    USE AdventureWorks2008
    GO

    -- Test SP
    CREATE PROCEDURE sp_multiple_statements
    AS
    SELECT 'This is the first statement'
    SELECT 'this is the second statement'
    GO

    -- Create a session to look at the sp
    CREATE EVENT SESSION track_sprocs ON SERVER
    ADD EVENT sqlserver.module_end (ACTION (sqlserver.tsql_stack, sqlserver.sql_text)),
    ADD EVENT sqlserver.sp_statement_completed (ACTION (sqlserver.tsql_stack, sqlserver.sql_text))
    ADD TARGET package0.ring_buffer
    WITH (MAX_DISPATCH_LATENCY = 1 SECONDS)
    GO

    -- Start the session
    ALTER EVENT SESSION track_sprocs ON SERVER
    STATE = START
    GO

    -- Run the test procedure
    EXEC sp_multiple_statements
    GO

    -- Stop collection of events but maintain ring buffer
    ALTER EVENT SESSION track_sprocs ON SERVER
    DROP EVENT sqlserver.module_end,
    DROP EVENT sqlserver.sp_statement_completed
    GO

    Aside: Altering the session to drop the events is a neat little trick that allows me to stop collection of events while keeping in-memory targets such as the ring buffer available for use. If you stop the session the in-memory target data is lost. Now that we’ve collected some events related to running the stored procedure, we need to do some processing of the data.
    I’m going to do this in multiple steps using temporary tables so you can see what’s going on; kind of like having to “show your work” on a math test. The first step is to just cast the target data into XML so I can work with it. After that you can pull out the interesting columns; for our purposes I’m going to limit the output to just the event name, object name, stack and sql text. You can see that I’ve done a second CAST, this time of the tsql_stack column, so that I can further process this data.

    -- Store the XML data to a temp table
    SELECT CAST(t.target_data AS XML) xml_data
    INTO #xml_event_data
    FROM sys.dm_xe_sessions s
        INNER JOIN sys.dm_xe_session_targets t
            ON s.address = t.event_session_address
    WHERE s.name = 'track_sprocs'

    SELECT * FROM #xml_event_data

    -- Parse the column data out of the XML block
    SELECT
        event_xml.value('(./@name)', 'varchar(100)') as [event_name],
        event_xml.value('(./data[@name="object_name"]/value)[1]', 'varchar(255)') as [object_name],
        CAST(event_xml.value('(./action[@name="tsql_stack"]/value)[1]','varchar(MAX)') as XML) as [stack_xml],
        event_xml.value('(./action[@name="sql_text"]/value)[1]', 'varchar(max)') as [sql_text]
    INTO #event_data
    FROM #xml_event_data
        CROSS APPLY xml_data.nodes('//event') n (event_xml)

    SELECT * FROM #event_data

    event_name  object_name  stack_xml  sql_text
    sp_statement_completed  NULL  <frame level="1" handle="0x03000500D0057C1403B79600669D00000100000000000000" line="4" offsetStart="94" offsetEnd="172" /><frame level="2" handle="0x01000500CF3F0331B05EC084000000000000000000000000" line="1" offsetStart="0" offsetEnd="-1" />  EXEC sp_multiple_statements
    sp_statement_completed  NULL  <frame level="1" handle="0x03000500D0057C1403B79600669D00000100000000000000" line="6" offsetStart="174" offsetEnd="-1" /><frame level="2" handle="0x01000500CF3F0331B05EC084000000000000000000000000" line="1" offsetStart="0" offsetEnd="-1" />  EXEC sp_multiple_statements
    module_end  sp_multiple_statements  <frame level="1" handle="0x03000500D0057C1403B79600669D00000100000000000000" line="0" offsetStart="0" offsetEnd="0" /><frame level="2" handle="0x01000500CF3F0331B05EC084000000000000000000000000" line="1" offsetStart="0" offsetEnd="-1" />  EXEC sp_multiple_statements

    After parsing the columns it’s easier to see what is recorded. You can see that I got back two sp_statement_completed events, which makes sense given the test procedure I’m running, and I got back a single module_end for the entire statement. As described, the sql_text isn’t telling me what I really want to know for the first two events so a little extra effort is required.
    -- Parse the tsql stack information into columns
    SELECT
        event_name,
        object_name,
        frame_xml.value('(./@level)', 'int') as [frame_level],
        frame_xml.value('(./@handle)', 'varchar(MAX)') as [sql_handle],
        frame_xml.value('(./@offsetStart)', 'int') as [offset_start],
        frame_xml.value('(./@offsetEnd)', 'int') as [offset_end]
    INTO #stack_data
    FROM #event_data
        CROSS APPLY stack_xml.nodes('//frame') n (frame_xml)

    SELECT * from #stack_data

    event_name  object_name  frame_level  sql_handle  offset_start  offset_end
    sp_statement_completed  NULL  1  0x03000500D0057C1403B79600669D00000100000000000000  94  172
    sp_statement_completed  NULL  2  0x01000500CF3F0331B05EC084000000000000000000000000  0  -1
    sp_statement_completed  NULL  1  0x03000500D0057C1403B79600669D00000100000000000000  174  -1
    sp_statement_completed  NULL  2  0x01000500CF3F0331B05EC084000000000000000000000000  0  -1
    module_end  sp_multiple_statements  1  0x03000500D0057C1403B79600669D00000100000000000000  0  0
    module_end  sp_multiple_statements  2  0x01000500CF3F0331B05EC084000000000000000000000000  0  -1

    Parsing out the stack information doubles the fun and I get two rows for each event. If you examine the stack from the previous table, you can see that each stack has two frames and my query is parsing each event into frames, so this is expected. There is nothing magic about the two frames, that’s just how many I get for this example, it could be fewer or more depending on your statements. The key point here is that I now have a sql_handle and the offset values for those handles, so I can use dm_exec_sql_statement to get the actual statement. Just a reminder, this DMV can only return what is in the cache – if you have old data it’s possible your statements have been ejected from the cache. “Old” is a relative term when talking about caches and can be impacted by server load and how often your statement is actually used. As with most things in life, your mileage may vary.

    SELECT
        qs.*,
        SUBSTRING(st.text, (qs.offset_start/2)+1,
            ((CASE qs.offset_end
             WHEN -1 THEN DATALENGTH(st.text)
             ELSE qs.offset_end
             END - qs.offset_start)/2) + 1) AS statement_text
    FROM #stack_data AS qs
    CROSS APPLY sys.dm_exec_sql_text(CONVERT(varbinary(max),sql_handle,1)) AS st

    event_name  object_name  frame_level  sql_handle  offset_start  offset_end  statement_text
    sp_statement_completed  NULL  1  0x03000500D0057C1403B79600669D00000100000000000000  94  172  SELECT 'This is the first statement'
    sp_statement_completed  NULL  1  0x03000500D0057C1403B79600669D00000100000000000000  174  -1  SELECT 'this is the second statement'
    module_end  sp_multiple_statements  1  0x03000500D0057C1403B79600669D00000100000000000000  0  0  C

    Now that looks more like what we were after, the statement_text field is showing the actual statement being run when the sp_statement_completed event occurs. You’ll notice that it’s back down to one row per event, what happened to frame 2? The short answer is, “I don’t know.” In SQL Server 2008 nothing is returned from dm_exec_sql_statement for the second frame and I believe this to be a bug; this behavior has changed in the next major release and I see the actual statement run from the client in frame 2. (In other words I see the same statement that is returned by the sql_text action or DBCC INPUTBUFFER) There is also something odd going on with frame 1 returned from the module_end event; you can see that the offset values are both 0 and only the first letter of the statement is returned.
    It seems like the offset_end should actually be -1 in this case and I’m not sure why it’s not returning this correctly. This behavior is being investigated and will hopefully be corrected in the next major version. You can work around this final oddity by ignoring the offsets and just returning the entire cached statement.

    SELECT
        event_name,
        sql_handle,
        ts.text
    FROM #stack_data
        CROSS APPLY sys.dm_exec_sql_text(CONVERT(varbinary(max),sql_handle,1)) as ts

    event_name  sql_handle  text
    sp_statement_completed  0x0300070025999F11776BAF006F9D00000100000000000000  CREATE PROCEDURE sp_multiple_statements AS SELECT 'This is the first statement' SELECT 'this is the second statement'
    sp_statement_completed  0x0300070025999F11776BAF006F9D00000100000000000000  CREATE PROCEDURE sp_multiple_statements AS SELECT 'This is the first statement' SELECT 'this is the second statement'
    module_end  0x0300070025999F11776BAF006F9D00000100000000000000  CREATE PROCEDURE sp_multiple_statements AS SELECT 'This is the first statement' SELECT 'this is the second statement'

    Obviously this gives more than you want for the sp_statement_completed events, but it’s the right information for module_end. I leave it to you to determine when this information is needed and use the workaround when appropriate. Aside: You might think it’s odd that I’m showing apparent bugs with my samples, but you’re going to see this behavior if you use this method, so you need to know about it. I’m all about transparency. Happy Eventing- Mike

    Read the article

  • Extend Your Applications Your Way: Oracle OpenWorld Live Poll Results

    - by Applications User Experience
    Lydia Naylor, Oracle Applications User Experience Manager At OpenWorld 2012, I attended one of our team’s very exciting sessions: “Extend Your Applications, Your Way”. It was clear that customers were engaged by the topics presented. Not only did we see many heads enthusiastically nodding in agreement during the presentation, and witness a large crowd surround our speakers Killian Evers, Kristin Desmond and Greg Nerpouni afterwards, but we can prove it…with data! Figure 1. Killian Evers, Kristin Desmond, and Greg Nerpouni of Oracle at the OOW 2012 session. At the beginning of our OOW 2012 journey, Greg Nerpouni, Fusion HCM Principal Product Manager, told me he really wanted to get feedback from the audience on our extensibility direction. Initially, we were thinking of doing a group activity at the OOW UX labs events that we hold every year, but Greg was adamant- he wanted “real-time” feedback. So, after a little tinkering, we came up with a way to use an online survey tool, a simple QR code (Quick Response code: a matrix barcode that can include information like URLs and can be read by mobile device cameras), and the audience’s mobile devices to do just that. Figure 2. Actual QR Code for survey Prior to the session, we developed a short survey in Vovici (an online survey tool), with questions to gather feedback on certain points in the presentation, as well as demographic data from our participants. We used Vovici’s feature to generate a mobile HTML version of the survey. At the session, attendees accessed the survey by simply scanning a QR code or typing in a TinyURL (a shorthand web address that is easily accessible through mobile devices). Killian, Kristin and Greg paused at certain points during the session and asked participants to answer a few survey questions about what they just presented. Figure 3. Session survey deployed on a mobile phone The nice thing about Vovici’s survey tool is that you can see the data real-time as participants are responding to questions - so we knew during the session that not only was our direction on track but we were hitting the mark and fulfilling Greg’s request. We planned on showing the live polling results to the audience at the end of the presentation but it ran just a little over time, and we were gently nudged out of the room by the session attendants. We’ve included a quick summary below and this link to the full results for your enjoyment. Figure 4. Most important extensions to Fusion Applications So what did participants think of our direction for extensibility? A total of 94% agreed that it was an improvement upon their current process. The vast majority, 80%, concurred that the extensibility model accounts for the major roles involved: end user, business systems analyst and programmer. Attendees suggested a few supporting roles such as systems administrator, data architect and integrator. Customers and partners in the audience verified that Oracle‘s Fusion Composers allow them to make changes in the most common areas they need to: user interface, business processes, reporting and analytics. Integrations were also suggested. All top 10 things customers can do on a page rated highly in importance, with all but two getting an average rating above 4.4 on a 5 point scale. The kinds of layout changes our composers allow customers to make align well with customers’ needs. The most common were adding columns to a table (94%) and resizing regions and drag and drop content (both selected by 88% of participants). 
We want to thank the attendees of the session for allowing us another great opportunity to gather valuable feedback from our customers! If you didn’t have a chance to attend the session, we will provide a link to the OOW presentation when it becomes available.

    Read the article

  • What to sign when signing a message with ws-security

    - by Heavy Bytes
    I am adding security to my web service and chose to sign the Timestamp and Token. While reading docs I found a lot of examples where they sign the Body of the SOAP message. My question is: what is best to sign? From what I understand signing the Body could lead to performance issues if the Body is pretty large. Thanks.
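
    A common middle ground is to sign the Timestamp and token for every message and only add the Body when payload integrity matters more than throughput. The snippet below is only a sketch of how the signed parts can be selected when the service happens to be built on Spring-WS with the WSS4J interceptor; the framework choice and the values shown are assumptions, not something stated in the question.

    import org.springframework.ws.soap.security.wss4j.Wss4jSecurityInterceptor;

    // Minimal sketch, assuming Spring-WS + WSS4J (the question does not name a stack).
    public class SigningSetup {
        public static Wss4jSecurityInterceptor signingInterceptor() {
            Wss4jSecurityInterceptor interceptor = new Wss4jSecurityInterceptor();
            // "Timestamp Signature" asks WSS4J to add a wsu:Timestamp and an XML signature.
            interceptor.setSecurementActions("Timestamp Signature");
            // Each part is {Mode}{Namespace}LocalName. Signing just the Timestamp (plus the
            // token WSS4J includes) keeps the signing cost small and fixed; appending the
            // Body entry also binds the payload, at the cost of digesting the whole Body.
            interceptor.setSecurementSignatureParts(
                "{Element}{http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd}Timestamp;"
                + "{Element}{http://schemas.xmlsoap.org/soap/envelope/}Body");
            return interceptor;
        }
    }

    Whatever the stack, the trade-off is the same: signing the Timestamp and token guards against replay and token substitution at a fixed cost, while signing the Body additionally guarantees payload integrity but makes signing and verification scale with message size.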

    Read the article

  • Websphere 7 simple realm (like tomcat-users.xml)

    - by Heavy Bytes
    I am trying to port a J2EE app from Tomcat to Websphere and I'm not too familiar with Websphere. The only problem I am having is authorization (I use basic-authentication in my web.xml). In Tomcat I use the tomcat-users.xml file to define my users/passwords and to what roles they belong. How do I do this "simply" in Websphere? When deploying the EAR to Websphere it also asks me to map my role from web.xml to a user or group. Do I have to set up some sort of realm? Custom user registry? Thanks. UPDATE: I configured a Standalone custom registry, however I can't get a log-in prompt for username/password. It works just fine in Tomcat, and it doesn't in Websphere. Code from web.xml:
    <security-constraint>
        <web-resource-collection>
            <web-resource-name>basic-auth security</web-resource-name>
            <url-pattern>/*</url-pattern>
        </web-resource-collection>
        <auth-constraint>
            <role-name>HELLO_USER</role-name>
        </auth-constraint>
        <user-data-constraint>NONE</user-data-constraint>
    </security-constraint>
    <login-config>
        <auth-method>BASIC</auth-method>
    </login-config>
    <security-role>
        <role-name>HELLO_USER</role-name>
    </security-role>

    Read the article

  • Javascript Onclick Problem with Table Rows

    - by Shane Larson
    Hello. I am having problems with my JavaScript code. I am trying to loop through all of the rows in a table and add an onclick event to each one. I can get the onclick event to add, but I have a couple of problems. The first problem is that all rows end up getting set up with the wrong parameter for the onclick event. The second problem is that it only works in IE. Here is the code excerpt...
    shanesObj.addTableEvents = function(){
        table = document.getElementById("trackerTable");
        for(i=1; i<table.getElementsByTagName("tr").length; i++){
            row = table.getElementsByTagName("tr")[i];
            orderID = row.getAttributeNode("id").value;
            alert("before onclick: " + orderID);
            row.onclick=function(){shanesObj.tableRowEvent(orderID);};
        }
    }
    shanesObj.tableRowEvent = function(orderID){
        alert(orderID);
    }
    The table is located at the following location... http://www.blackcanyonsoftware.com/OrderTracker/testAJAX.html The ids of each row, in sequence, are 95, 96, 94... For some reason, when shanesObj.tableRowEvent is called, the onclick for every row ends up using the id of the last row the loop iterated over (94). I added some alerts to the page to illustrate the problem. Thanks. Shane
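
    For what it's worth, every row firing with 94 is what you would expect when all the click handlers close over the same orderID variable; below is a sketch of one way to capture the value per row. It is illustrative only and assumes the same table markup as the original page.

    // Sketch: give each handler its own copy of the row id by passing it through
    // a function scope, instead of closing over the shared loop variable.
    shanesObj.addTableEvents = function () {
        var table = document.getElementById("trackerTable");
        var rows = table.getElementsByTagName("tr");
        for (var i = 1; i < rows.length; i++) {
            (function (row) {
                var orderID = row.getAttributeNode("id").value;   // captured per iteration
                row.onclick = function () { shanesObj.tableRowEvent(orderID); };
            })(rows[i]);
        }
    };

    Declaring table, row, i and orderID with var also keeps them out of the global scope, which is likely part of why every handler saw only the final value.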

    Read the article

  • AndEngine Physics Editor loading level

    - by Khawar Raza
    I have created a .pes file using PhysicsEditor and imported as xml and have added to my project. When I parsed it and created bodies, it is showing strange behavior. The mapping of bodies that I created in PhysicsEditor is totally different what I see in my application means the shapes I draw in PhysicsEditor are rendering differently in my app. Here is my xml and code to parse and add bodies to scene. PhysicsEditor XML file: <?xml version="1.0" encoding="UTF-8"?> <!-- created with http://www.physicseditor.de --> <bodydef version="1.0"> <bodies numBodies="1"> <body name="car_path" dynamic="false" numFixtures="1"> <fixture density="2" friction="1" restitution="0" filter_categoryBits="1" filter_groupIndex="0" filter_maskBits="65535" isSensor="false" type="POLYGON" numPolygons="20" > <polygon numVertexes="6"> <vertex x="277.0000" y="152.0000" /> <vertex x="356.0000" y="172.0000" /> <vertex x="413.0000" y="194.0000" /> <vertex x="476.0000" y="223.0000" /> <vertex x="173.0000" y="232.0000" /> <vertex x="174.0000" y="148.0000" /> </polygon> <polygon numVertexes="4"> <vertex x="1556.0000" y="221.0000" /> <vertex x="1142.0000" y="94.0000" /> <vertex x="1255.0000" y="-15.0000" /> <vertex x="1554.0000" y="-14.0000" /> </polygon> <polygon numVertexes="3"> <vertex x="-192.0000" y="177.0000" /> <vertex x="-888.0000" y="139.0000" /> <vertex x="-549.0000" y="-125.0000" /> </polygon> <polygon numVertexes="6"> <vertex x="1762.0000" y="24.0000" /> <vertex x="1862.0000" y="27.0000" /> <vertex x="1927.0000" y="68.0000" /> <vertex x="2078.0000" y="222.0000" /> <vertex x="1643.0000" y="212.0000" /> <vertex x="1642.0000" y="38.0000" /> </polygon> <polygon numVertexes="3"> <vertex x="-1150.0000" y="146.0000" /> <vertex x="-1776.0000" y="140.0000" /> <vertex x="-1476.0000" y="-25.0000" /> </polygon> <polygon numVertexes="4"> <vertex x="-2799.0000" y="103.0000" /> <vertex x="-2684.0000" y="223.0000" /> <vertex x="-3112.0000" y="256.0000" /> <vertex x="-3108.0000" y="98.0000" /> </polygon> <polygon numVertexes="3"> <vertex x="3112.0000" y="255.0000" /> <vertex x="2422.0000" y="222.0000" /> <vertex x="3120.0000" y="-71.0000" /> </polygon> <polygon numVertexes="4"> <vertex x="1142.0000" y="94.0000" /> <vertex x="1556.0000" y="221.0000" /> <vertex x="709.0000" y="226.0000" /> <vertex x="911.0000" y="93.0000" /> </polygon> <polygon numVertexes="6"> <vertex x="-2111.0000" y="89.0000" /> <vertex x="-2067.0000" y="94.0000" /> <vertex x="-2002.0000" y="139.0000" /> <vertex x="-2344.0000" y="223.0000" /> <vertex x="-2196.0000" y="112.0000" /> <vertex x="-2153.0000" y="91.0000" /> </polygon> <polygon numVertexes="4"> <vertex x="105.0000" y="233.0000" /> <vertex x="-94.0000" y="178.0000" /> <vertex x="69.0000" y="106.0000" /> <vertex x="91.0000" y="104.0000" /> </polygon> <polygon numVertexes="3"> <vertex x="-2002.0000" y="139.0000" /> <vertex x="-2067.0000" y="94.0000" /> <vertex x="-2032.0000" y="110.0000" /> </polygon> <polygon numVertexes="4"> <vertex x="-1150.0000" y="146.0000" /> <vertex x="105.0000" y="233.0000" /> <vertex x="-2344.0000" y="223.0000" /> <vertex x="-2002.0000" y="139.0000" /> </polygon> <polygon numVertexes="3"> <vertex x="413.0000" y="194.0000" /> <vertex x="356.0000" y="172.0000" /> <vertex x="376.0000" y="176.0000" /> </polygon> <polygon numVertexes="3"> <vertex x="105.0000" y="233.0000" /> <vertex x="-192.0000" y="177.0000" /> <vertex x="-94.0000" y="178.0000" /> </polygon> <polygon numVertexes="4"> <vertex x="105.0000" y="233.0000" /> <vertex x="-1150.0000" y="146.0000" /> <vertex x="-888.0000" 
y="139.0000" /> <vertex x="-192.0000" y="177.0000" /> </polygon> <polygon numVertexes="3"> <vertex x="3112.0000" y="255.0000" /> <vertex x="-3112.0000" y="256.0000" /> <vertex x="-2684.0000" y="223.0000" /> </polygon> <polygon numVertexes="3"> <vertex x="3112.0000" y="255.0000" /> <vertex x="1556.0000" y="221.0000" /> <vertex x="1643.0000" y="212.0000" /> </polygon> <polygon numVertexes="3"> <vertex x="709.0000" y="226.0000" /> <vertex x="173.0000" y="232.0000" /> <vertex x="476.0000" y="223.0000" /> </polygon> <polygon numVertexes="3"> <vertex x="3112.0000" y="255.0000" /> <vertex x="2078.0000" y="222.0000" /> <vertex x="2422.0000" y="222.0000" /> </polygon> <polygon numVertexes="3"> <vertex x="3112.0000" y="255.0000" /> <vertex x="105.0000" y="233.0000" /> <vertex x="173.0000" y="232.0000" /> </polygon> </fixture> </body> </bodies> <metadata> <format>1</format> <ptm_ratio></ptm_ratio> </metadata> </bodydef> And here is my code: private void loadLevel() { // TODO Auto-generated method stub AssetManager assetManager = getAssets(); try { InputStream stream = assetManager.open("tmx/path1.xml"); if(stream != null) { try { DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance(); dbf.setValidating(false); dbf.setIgnoringComments(false); dbf.setIgnoringElementContentWhitespace(true); dbf.setNamespaceAware(true); DocumentBuilder db = null; db = dbf.newDocumentBuilder(); Document document = db.parse(stream); Element root = document.getDocumentElement(); NodeList bodiesNodeList = root.getElementsByTagName("bodies"); for(int i = 0; i < bodiesNodeList.getLength(); i++) { BodyDef bodyDef = new BodyDef(); bodyDef.type = BodyType.StaticBody; bodyDef.fixedRotation = true; Element bodiesElement = (Element)bodiesNodeList.item(i); NodeList bodyList = bodiesElement.getElementsByTagName("body"); for(int j = 0; j < bodyList.getLength(); j++) { Element bodyElement = (Element)bodyList.item(j); Body body = mPhysicsWorld.createBody(bodyDef); NodeList fixtureList = bodyElement.getElementsByTagName("fixture"); for(int k = 0; k < fixtureList.getLength(); k++) { Element fixtureElement = (Element)fixtureList.item(k); FixtureDef fixtureDef = new FixtureDef(); if(fixtureElement != null) { String density = fixtureElement.getAttribute("density"); String friction = fixtureElement.getAttribute("friction"); String restitution = fixtureElement.getAttribute("restitution"); fixtureDef = PhysicsFactory.createFixtureDef(Float.parseFloat(density), Float.parseFloat(friction), Float.parseFloat(restitution)); } NodeList polygonList = fixtureElement.getElementsByTagName("polygon"); if(polygonList != null && polygonList.getLength() > 0) { for(int m = 0; m < polygonList.getLength(); m++) { PolygonShape polyShape = new PolygonShape(); Element polygonElement = (Element)polygonList.item(m); NodeList vertexList = polygonElement.getElementsByTagName("vertex"); if(vertexList != null && vertexList.getLength() > 0) { Vector2 [] vectors = new Vector2[vertexList.getLength()]; for(int n = 0; n < vertexList.getLength(); n++) { Element vertexElement = (Element)vertexList.item(n); if(vertexElement != null) { float x = Float.parseFloat(vertexElement.getAttribute("x")); float y = Float.parseFloat(vertexElement.getAttribute("y")); vectors[n] = new Vector2(x/PIXEL_TO_METER_RATIO_DEFAULT, y/PIXEL_TO_METER_RATIO_DEFAULT); } } polyShape.set(vectors); fixtureDef.shape = polyShape; } body.createFixture(fixtureDef); } } } mScene.attachChild(bgSprite); mPhysicsWorld.registerPhysicsConnector(new PhysicsConnector(bgSprite, body, false, false)); } } } 
catch(Exception e) { e.printStackTrace(); } } } catch (IOException e) { // TODO Auto-generated catch block e.printStackTrace(); } } Any idea where I am going wrong?

    Read the article

  • ASP.NET Client to Server communication

    - by Nelson
    Can you help me make sense of all the different ways to communicate from browser to client in ASP.NET? I have made this a community wiki so feel free to edit my post to improve it. Specifically, I'm trying to understand in which scenario to use each one by listing how each works. I'm a little fuzzy on UpdatePanel vs CallBack (with ViewState): I know UpdatePanel always returns HTML while CallBack can return JSON. Any other major differences? ...and CallBack (without ViewState) vs WebMethod. CallBack goes through most of the Page lifecycle, WebMethod doesn't. Any other major differences? IHttpHandler Custom handler for anything (page, image, etc.) Only does what you tell it to do (light server processing) Page is an implementation of IHttpHandler If you don't need what Page provides, create a custom IHttpHandler If you are using Page but overriding Render() and not generating HTML, you probably can do it with a custom IHttpHandler (e.g. writing binary data such as images) By default can use the .axd or .ashx file extensions -- both are functionally similar .ashx doesn't have any built-in endpoints, so it's preferred by convention Regular PostBack (System.Web.UI.Page : IHttpHandler) Inherits Page Full PostBack, including ViewState and HTML control values (heavy traffic) Full Page lifecycle (heavy server processing) No JavaScript required Webpage flickers/scrolls since everything is reloaded in browser Returns full page HTML (heavy traffic) UpdatePanel (Control) Control inside Page Full PostBack, including ViewState and HTML control values (heavy traffic) Full Page lifecycle (heavy server processing) Controls outside the UpdatePanel do Render(NullTextWriter) Must use ScriptManager If no client-side JavaScript, it can fall back to regular PostBack with no JavaScript (?) No flicker/scroll since it's an async call, unless it falls back to regular postback. Can be used with master pages and user controls Has built-in support for progress bar Returns HTML for controls inside UpdatePanel (medium traffic) Client CallBack (Page, System.Web.UI.ICallbackEventHandler) Inherits Page Most of Page lifecycle (heavy server processing) Takes only data you specify (light traffic) and optionally ViewState (?) (medium traffic) Client must support JavaScript and use ScriptManager No flicker/scroll since it's an async call Can be used with master pages and user controls Returns only data you specify in format you specify (e.g. JSON, XML...) (?) (light traffic) WebMethod Class implements System.Web.Service.WebService HttpContext available through this.Context Takes only data you specify (light traffic) Server only runs the called method (light server processing) Client must support JavaScript No flicker/scroll since it's an async call Can be used with master pages and user controls Returns only data you specify, typically JSON (light traffic) Can create instance of server control to render HTML and sent back as string, but events, paging in GridView, etc. won't work PageMethods Essentially a WebMethod contained in the Page class, so most of WebMethod's bullet's apply Method must be public static, therefore no Page instance accessible HttpContext available through HttpContext.Current Accessed directly by URL Page.aspx/MethodName (e.g. with XMLHttpRequest directly or with library such as jQuery) Setting ScriptManager property EnablePageMethods="True" generates a JavaScript proxy for each WebMethod Cannot be used directly in user controls with master pages and user controls Any others?
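
    As a concrete illustration of the lightest option in the list above, a page method might look roughly like the sketch below; the class, method name and payload are invented for the example and are not from any particular project.

    // Sketch of a page method: a public static method on the page class,
    // exposed at OrdersPage.aspx/GetOrderStatus. Names here are made up.
    using System.Web.Services;

    public partial class OrdersPage : System.Web.UI.Page
    {
        [WebMethod]
        public static string GetOrderStatus(int orderId)
        {
            // Only this method runs: no page lifecycle, no ViewState round-trip.
            // HttpContext.Current is still available if request data is needed.
            return "Order " + orderId + " is pending";
        }
    }

    Client side this is typically invoked with an XMLHttpRequest or jQuery $.ajax POST to OrdersPage.aspx/GetOrderStatus carrying a small JSON body, or through the proxy that ScriptManager generates when EnablePageMethods is set, which matches the "light traffic, light server processing" characterization above.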

    Read the article

  • Why is iPdb not displaying STOUT after my input?

    - by BryanWheelock
    I can't figure out why ipdb is not displaying stout properly. I'm trying to debug why a test is failing and so I attempt to use ipdb debugger. For some reason my Input seems to be accepted, but the STOUT is not displayed until I (c)ontinue. Is this something broken in ipdb? It makes it very difficult to debug a program. Below is an example ipdb session, notice how I attempt to display the values of the attributes with: user.is_authenticated() user_profile.reputation user.is_superuser The results are not displayed until 'begin captured stdout ' In [13]: !python manage.py test Creating test database... < SNIP remove loading tables nosetests ...E.. /Users/Bryan/work/APP/forum/auth.py(93)can_retag_questions() 92 import ipdb; ipdb.set_trace() ---> 93 return user.is_authenticated() and ( 94 RETAG_OTHER_QUESTIONS <= user_profile.reputation < EDIT_OTHER_POSTS or user.is_authenticated() user_profile.reputation user.is_superuser c F /Users/Bryan/work/APP/forum/auth.py(93)can_retag_questions() 92 import ipdb; ipdb.set_trace() ---> 93 return user.is_authenticated() and ( 94 RETAG_OTHER_QUESTIONS <= user_profile.reputation < EDIT_OTHER_POSTS or c .....EE...... FAIL: test_can_retag_questions (APP.forum.tests.test_views.AuthorizationFunctionsTestCase) Traceback (most recent call last): File "/Users/Bryan/work/APP/../APP/forum/tests/test_views.py", line 71, in test_can_retag_questions self.assertTrue(auth.can_retag_questions(user)) AssertionError: -------------------- begin captured stdout << --------------------- ipdb True ipdb 4001 ipdb False ipdb --------------------- end captured stdout << ---------------------- Ran 20 tests in 78.032s FAILED (errors=3, failures=1) Destroying test database... In [14]: Here is the actual test I'm trying to run: def can_retag_questions(user): """Determines if a User can retag Questions.""" user_profile = user.get_profile() import ipdb; ipdb.set_trace() return user.is_authenticated() and ( RETAG_OTHER_QUESTIONS <= user_profile.reputation < EDIT_OTHER_POSTS or user.is_superuser) I've also tried to use pdb, but that doesn't display anything. I see my test progress .... , and then nothing and not responsive to keyboard input. Is this a problem with readline?
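
    One detail worth noting from the output above: the "begin captured stdout" / "end captured stdout" markers are printed by nose, which captures stdout during the test run and only replays it for failing tests; that alone would explain the debugger output appearing late. A quick way to test this theory (an assumption, not a confirmed fix) is to rerun with capture disabled:

    # Assumption: the tests run through nose (the "captured stdout" markers are nose's).
    # Disabling output capture should let the ipdb prompt appear as soon as set_trace() is hit.
    nosetests -s          # -s / --nocapture turns off stdout capture
    # or, if the Django test runner forwards nose options:
    python manage.py test -s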

    Read the article

  • Why am I getting an error on Heroku that suggests I need to migrate my app to Bamboo?

    - by user242065
    When I type: git push heroku master, this is what happens @68-185-86-134:sample_app git push heroku master Counting objects: 110, done. Delta compression using up to 2 threads. Compressing objects: 100% (94/94), done. Writing objects: 100% (110/110), 87.48 KiB, done. Total 110 (delta 19), reused 0 (delta 0) -----> Heroku receiving push -----> Rails app detected ! This version of Rails is only supported on the Bamboo stack ! Please migrate your app to Bamboo and push again. ! See http://docs.heroku.com/bamboo for more information ! Heroku push rejected, incompatible Rails version error: hooks/pre-receive exited with error code 1 To [email protected]:blazing-frost-89.git ! [remote rejected] master -> master (pre-receive hook declined) error: failed to push some refs to '[email protected]:blazing-frost-89.git' My .gems file: rails --version 2.3.8 My .git/config file: [core] repositoryformatversion = 0 filemode = true bare = false logallrefupdates = true ignorecase = true [remote "origin"] url = [email protected]:csmeder/sample_app.git fetch = +refs/heads/*:refs/remotes/origin/* [remote "heroku"] url = [email protected]:blazing-frost-89.git fetch = +refs/heads/*:refs/remotes/heroku/*

    Read the article

  • ASP.NET ascx.cs via GET

    - by Heavy Bytes
    Say I have this url: http://site.example/dir/ In this folder I have these files: test.ascx.cs and test.ascx Just to be clear, I am not a .NET developer. From a security point of view - why can't I access http://site.example/dir/test.ascx.cs and how secure is it to keep those files there? I assume IIS filters out request that query these kind of files, but can someone explain me this? Thank you.

    Read the article

  • Using one mixin from LESS within another mixin

    - by user1165984
    I am trying to use a mix that calculates the base line height in another that contains my base font styles. Each time I try to compile, I get an error. Here is an example. .lineHeight(@sizeValue){ @remValue: @sizeValue; @pxValue: (@sizeValue * 10); line-height: ~"@{pxValue}px"; line-height: ~"@{remValue}rem"; } .baseFont(@weight: normal, @size: 14px, @lineHeight: (.lineHeight(2.1)) { font-family: @fontFamily; font-size: @size; font-weight: @weight; line-height: @lineHeight; } The error is: TypeError: Cannot call method 'charAt' of undefined at getLocation (/Applications/CodeKit.app/Contents/Resources/engines/less/lib/less/parser.js:212:34) at new LessError (/Applications/CodeKit.app/Contents/Resources/engines/less/lib/less/parser.js:221:19) at Object.toCSS (/Applications/CodeKit.app/Contents/Resources/engines/less/lib/less/parser.js:385:31) at /Applications/CodeKit.app/Contents/Resources/engines/less/bin/lessc:107:28 at /Applications/CodeKit.app/Contents/Resources/engines/less/lib/less/parser.js:434:40 at /Applications/CodeKit.app/Contents/Resources/engines/less/lib/less/parser.js:94:48 at /Applications/CodeKit.app/Contents/Resources/engines/less/lib/less/index.js:116:17 at /Applications/CodeKit.app/Contents/Resources/engines/less/lib/less/parser.js:434:40 at /Applications/CodeKit.app/Contents/Resources/engines/less/lib/less/parser.js:94:48 at /Applications/CodeKit.app/Contents/Resources/engines/less/lib/less/index.js:116:17
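
    The error appears to come from using a mixin call, .lineHeight(2.1), as a parameter default, which LESS does not accept (and the parameter list is also missing a closing parenthesis). One possible rearrangement, shown only as a sketch, is to pass the unitless size and call .lineHeight inside the body:

    .lineHeight(@sizeValue) {
      @remValue: @sizeValue;
      @pxValue: (@sizeValue * 10);
      line-height: ~"@{pxValue}px";
      line-height: ~"@{remValue}rem";
    }

    // Default parameter values must be plain values/expressions, so take the number
    // here and invoke the mixin in the body instead of in the parameter list.
    .baseFont(@weight: normal, @size: 14px, @lineHeight: 2.1) {
      font-family: @fontFamily;   // assumes @fontFamily is defined elsewhere, as in the original
      font-size: @size;
      font-weight: @weight;
      .lineHeight(@lineHeight);
    }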

    Read the article

  • How can I line up columns with Perl's printf when the value might be a number or a string?

    - by user317203
    I am trying to output a document that looks like this (more at http://pastebin.com/dpBAY8Sb):
    10.1.1.1        100  <unknown>      <unknown>      <unknown>  <unknown>   <unknown>
    72.12.148.186    94  Canada         Hamilton       ON         43.250000   -79.833300    0.00
    72.68.209.149    24  United States  Richmond Hill  NY         40.700500   -73.834500  611.32
    72.192.33.34      4  United States  Rocky Hill     CT         41.657800   -72.662700  657.48
    I cannot figure out how to format the output when the value may be either a floating-point number or a string, while keeping the spacing between columns consistent. My current code looks something like this:
    if (defined $longitude){
        printf FILE ("%-8s %.6f", "", $longitude);
    }else{
        $longitude = "<unknown>";
        printf FILE ("%-20s ", $longitude);
    }
    The extra "" throws off the whole column and it looks like this (more at http://pastebin.com/kcwHyNwb):
    10.1.1.1        100  <unknown>      <unknown>      <unknown>  <unknown>   <unknown>
    72.12.148.186    94  Canada         Hamilton       ON         43.250000   -79.833300    0.00
    72.68.209.149    24  United States  Richmond Hill  NY         40.700500   -73.834500  571.06
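
    One way to keep the columns aligned regardless of whether the value is a number or "<unknown>" is to format the numeric case into a string first and then print every value with the same field width; a small sketch (the width of 20 is arbitrary):

    # Format the numeric case up front, then use one fixed-width column for both cases.
    my $lon_text = defined $longitude ? sprintf("%.6f", $longitude) : '<unknown>';
    printf FILE "%-20s ", $lon_text;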

    Read the article

  • Hibernate - Problem in parsing mapping file (.hbm.xml)

    - by Yatendra Goel
    I am new to Hibernate. I have an exception while running an Hibernate-based application. The exception is as follows: 16 [main] INFO org.hibernate.cfg.Environment - Hibernate 3.3.2.GA 16 [main] INFO org.hibernate.cfg.Environment - hibernate.properties not found 16 [main] INFO org.hibernate.cfg.Environment - Bytecode provider name : javassist 31 [main] INFO org.hibernate.cfg.Environment - using JDK 1.4 java.sql.Timestamp handling 94 [main] INFO org.hibernate.cfg.Configuration - configuring from resource: /hibernate.cfg.xml 94 [main] INFO org.hibernate.cfg.Configuration - Configuration resource: /hibernate.cfg.xml 219 [main] INFO org.hibernate.cfg.Configuration - Reading mappings from resource : app/data/City.hbm.xml 266 [main] ERROR org.hibernate.util.XMLHelper - Error parsing XML: XML InputStream(12) Attribute "coloumn" must be declared for element type "property". 266 [main] ERROR org.hibernate.util.XMLHelper - Error parsing XML: XML InputStream(13) Attribute "coloumn" must be declared for element type "property". 266 [main] ERROR org.hibernate.util.XMLHelper - Error parsing XML: XML InputStream(14) Attribute "coloumn" must be declared for element type "property". It seems that it is not finding coloumn attribute of the property element in the mappings file but my mappings file do have the coloumn attribute. Below is the mappings file (City.hbm.xml) <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd"> <hibernate-mapping package="app.data"> <class name="City" table="CITY"> <id column="CITY_ID" name="cityId"> <generator class="native"/> </id> <property name="cityDisplyaName" coloumn="CITY_DISPLAY_NAME" /> <property coloumn="CITY_MEANINGFUL_NAME" name="cityMeaningFulName" /> <property coloumn="CITY_URL" name="cityURL" /> </class> </hibernate-mapping>
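
    The parser errors quote the attribute name as "coloumn", which the mapping DTD does not define; the attribute in hibernate-mapping-3.0.dtd is spelled column. A corrected version of the three property elements would look like this (same values, only the attribute name changed):

    <property name="cityDisplyaName" column="CITY_DISPLAY_NAME" />
    <property name="cityMeaningFulName" column="CITY_MEANINGFUL_NAME" />
    <property name="cityURL" column="CITY_URL" />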

    Read the article

  • convert bitmap to byte array

    - by narasimha
    hi 437400a8-1-40-1-32016747073700110010100-1-370670967876987810109111322151312121327202116223229343432293131364052443638493931314561454953555858583543636863566752575855-1-37067110101013121326151526553731375555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555555-1-640178010401043134021713171-1-600270103031000000000002461375-1-6004716022133244700000001234517336184919-127203465817505111366829798-111-95-79-1-600241111110000000000001234-1-6003217110222230000000001217318433506581113-95-1-380123102173170630-16-480094-43116-84-8341-30-84-72118-4865250106-2-80-97-121-2-11836-5741-108-362116-43101-966-102974345-787448-12486-1877-87316379-54-71-58112-109-116-94-98279102-97-36-767534-11711311444-57-56-82859384-100399-83-100100-68-90-114-788678118-394344-108-895754-274161-37111-53108-11938-11-2005010612210612546-764577107-107-36-1125-61-7319-78-51-109107-26-287-37-5284119-25126-26-749476-79-116115-19-107-102-7795-33-46-2402764-47-1214100-15-6-10918-105-115-127-1097069-111-123-16-50-9051-8367127-10291-65-53-78-35-18-66-36-103-6499-1099-982327-107979107-473-22-672780-23-5595-4581-4550-18-57-7598-37101-72-79-107-10885-77-39-42-92-72-114-41194622-4108-49200-29-30-8-40-8-8116-58-19114-6907-91073-6210617-101-116-10836-3882-37-122-41-97-6-120-128000-1089941-67-952339-7712337-6511-1064640000000105-11110-85-114-95-46-409042-45-24-86120-8355-10726-32-91110-5-9111-7686-33-63-54123-66-3411951024-29-29-57-11410724-74-2002764-77-127-97-107-899159112-81-9951-78-71854171-21945-92-67-478818-55102-88010000-7386116-85-45111-63-1249-2727-84-115-114-3986-99-111-19-33-120-53-24-98-4-94-9538-2-64080000-73-101-1212890-79108-1149461-18-6-677126-92-37-87-18-41108-72-31-15-65-7145-110-2484020000005949-20-12111-82-5724-53-7873-1956-8739-5-89-28852-29-61771251215652-99109-81105-38-1010261-7086-10594-97858542-278-5911146-33-10645-75-3-11843111-90-49-49-1095499-11344-78-5892-90-81-32-960-40000000090-44-79106-61-55-12-88-52-894629-111-105-8578-3-69-76-10192-8141-15-2077-5-48-86040000114-4594836411202-26-103-90102-22-7149-57-45-15-84-66-42-46645-19-5-3-118-41851012258-18-82117-393149-10090107-39-9771-89117-20-2-100-49121-685-118-68-98-5754145-4679-493031-70107-3398-10611885-103-39-73-27-35-6-105-394337-53124-73-65-43-73-289-119-33-67-33-59125105-48054-1280000011075980-53-446115-116-71-37-16-12-58-102-7373118-62622301982-35-118-962-12800015-1-39 this resultfor bytearray its not true how can implement byte array please some solution
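
    It is hard to tell from the pasted numbers what went wrong, but if the goal is simply to convert an android.graphics.Bitmap into a byte array, the usual route is to compress it into an in-memory stream. The sketch below assumes Android and PNG output, neither of which is stated in the question.

    import android.graphics.Bitmap;
    import java.io.ByteArrayOutputStream;

    // Sketch: encode the bitmap (PNG here) into an in-memory buffer and grab the bytes.
    public final class BitmapBytes {
        public static byte[] toByteArray(Bitmap bitmap) {
            ByteArrayOutputStream stream = new ByteArrayOutputStream();
            bitmap.compress(Bitmap.CompressFormat.PNG, 100, stream); // 100 = max quality
            return stream.toByteArray();
        }
    }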

    Read the article

  • [C#] Three System.Drawing methods manifest slow drawing or flickery: Solutions? or Other Options?

    - by Luke Mcneice
    Hi all, I am doing a little graphing via the System.Drawing and im having a few problems. I'm holding data in a Queue and i'm drawing(graphing) out that data onto three picture boxes this method fills the picture box then scrolls the graph across. so not to draw on top of the previous drawings (and graduly looking messier) i found 2 solutions to draw the graph. Call plot.Clear(BACKGOUNDCOLOR) before the draw loop [block commented] although this causes a flicker to appear from the time it takes to do the actual drawing loop. call plot.DrawLine(channelPen[5], j, 140, j, 0); just before each drawline [commented] although this causes the drawing to start ok then slow down very quickly to a crawl as if a wait command had been placed before the draw command. Here is the Code for reference: /*plotx.Clear(BACKGOUNDCOLOR) ploty.Clear(BACKGOUNDCOLOR) plotz.Clear(BACKGOUNDCOLOR)*/ for (int j = 1; j < 599; j++) { if (j > RealTimeBuffer.Count - 1) break; QueueEntity past = RealTimeBuffer.ElementAt(j - 1); QueueEntity current = RealTimeBuffer.ElementAt(j); if (j == 1) { //plotx.DrawLine(channelPen[5], 0, 140, 0, 0); //ploty.DrawLine(channelPen[5], 0, 140, 0, 0); //plotz.DrawLine(channelPen[5], 0, 140, 0, 0); } //plotx.DrawLine(channelPen[5], j, 140, j, 0); plotx.DrawLine(channelPen[0], j - 1, (((past.accdata.X - 0x7FFF) / 256) + 64), j, (((current.accdata.X - 0x7FFF) / 256) + 64)); //ploty.DrawLine(channelPen[5], j, 140, j, 0); ploty.DrawLine(channelPen[1], j - 1, (((past.accdata.Y - 0x7FFF) / 256) + 64), j, (((current.accdata.Y - 0x7FFF) / 256) + 64)); //plotz.DrawLine(markerPen, j, 140, j, 0); plotz.DrawLine(channelPen[2], j - 1, (((past.accdata.Z - 0x7FFF) / 256) + 94), j, (((current.accdata.Z - 0x7FFF) / 256) + 94)); } Is there any tricks to avoid these overheads? If not would there be any other/better solutions?
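
    A third option, sketched below, is to draw each frame into an off-screen Bitmap and hand the finished image to the PictureBox, so the clear and the line drawing never happen on screen; pictureBoxX is a placeholder name for one of the existing picture boxes, and the fragment reuses the fields from the original code.

    // Sketch: render one whole frame off-screen, then swap it into the PictureBox.
    // This avoids the visible Clear() flicker and the cost of drawing straight to the control.
    Bitmap frame = new Bitmap(pictureBoxX.Width, pictureBoxX.Height);
    using (Graphics g = Graphics.FromImage(frame))
    {
        g.Clear(Color.Black);   // stand-in for the existing background colour constant
        for (int j = 1; j < 599 && j < RealTimeBuffer.Count; j++)
        {
            QueueEntity past = RealTimeBuffer.ElementAt(j - 1);
            QueueEntity current = RealTimeBuffer.ElementAt(j);
            g.DrawLine(channelPen[0],
                j - 1, ((past.accdata.X - 0x7FFF) / 256) + 64,
                j,     ((current.accdata.X - 0x7FFF) / 256) + 64);
        }
    }
    Bitmap previous = pictureBoxX.Image as Bitmap;
    pictureBoxX.Image = frame;   // one repaint per frame instead of hundreds of on-screen draws
    if (previous != null) previous.Dispose();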

    Read the article

  • [C#] Two System.Drawing methods manifest slow drawing or flickery: Solutions? or Other Options?

    - by Luke Mcneice
    Hi all, I am doing a little graphing via the System.Drawing and im having a few problems. I'm holding data in a Queue and i'm drawing(graphing) out that data onto three picture boxes this method fills the picture box then scrolls the graph across. so not to draw on top of the previous drawings (and graduly looking messier) i found 2 solutions to draw the graph. Call plot.Clear(BACKGOUNDCOLOR) before the draw loop [block commented] although this causes a flicker to appear from the time it takes to do the actual drawing loop. call plot.DrawLine(channelPen[5], j, 140, j, 0); just before each drawline [commented] although this causes the drawing to start ok then slow down very quickly to a crawl as if a wait command had been placed before the draw command. Here is the Code for reference: /*plotx.Clear(BACKGOUNDCOLOR) ploty.Clear(BACKGOUNDCOLOR) plotz.Clear(BACKGOUNDCOLOR)*/ for (int j = 1; j < 599; j++) { if (j > RealTimeBuffer.Count - 1) break; QueueEntity past = RealTimeBuffer.ElementAt(j - 1); QueueEntity current = RealTimeBuffer.ElementAt(j); if (j == 1) { //plotx.DrawLine(channelPen[5], 0, 140, 0, 0); //ploty.DrawLine(channelPen[5], 0, 140, 0, 0); //plotz.DrawLine(channelPen[5], 0, 140, 0, 0); } //plotx.DrawLine(channelPen[5], j, 140, j, 0); plotx.DrawLine(channelPen[0], j - 1, (((past.accdata.X - 0x7FFF) / 256) + 64), j, (((current.accdata.X - 0x7FFF) / 256) + 64)); //ploty.DrawLine(channelPen[5], j, 140, j, 0); ploty.DrawLine(channelPen[1], j - 1, (((past.accdata.Y - 0x7FFF) / 256) + 64), j, (((current.accdata.Y - 0x7FFF) / 256) + 64)); //plotz.DrawLine(markerPen, j, 140, j, 0); plotz.DrawLine(channelPen[2], j - 1, (((past.accdata.Z - 0x7FFF) / 256) + 94), j, (((current.accdata.Z - 0x7FFF) / 256) + 94)); } Is there any tricks to avoid these overheads? If not would there be any other/better solutions?

    Read the article

  • JVM to ignore certificate name mismatch

    - by Heavy Bytes
    I know there were a lot of questions/answers about how to ignore SSL error in the code. On our dev region dev.domain.tld we have configured a app server over SSL. The certificate that is displayed is for somedev.domain.tld. There is no way to change the certificate, it will always be a domain mismatch. So when I deploy a web-service to https://dev.domain.tld and try to connect/call my webservice I get an exception: Caused by: java.security.cert.CertificateException: No name matching dev.domain.tld found And I have the somedev.domain.tld CERT in my trust store. Now, I saw a lot of samples how to change that in the code (using a Trust Manager that accepts all domains), but how do I specify to the JVM to ignore the domain mismatch when connecting to the server? Is there a -Djavax.net.ssl argument or something? Thank you! UPDATE: Or, since I am using Spring-WS, is there a way to set some property in Spring for that? (WebServiceTemplate) UPDATE I guess I'll have to do use something from Spring Security: http://static.springsource.org/spring-ws/sites/1.5/reference/html/security.html
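
    As far as I know there is no standard -Djavax.net.ssl.* switch that relaxes hostname checking, so it usually has to be done in code; for HttpsURLConnection-based transports a narrowly scoped HostnameVerifier is the common workaround. The sketch below accepts only the one known-mismatched host and is an assumption about the setup, not Spring-WS specific (for WebServiceTemplate it would only help if the message sender ultimately uses HttpsURLConnection).

    import javax.net.ssl.HostnameVerifier;
    import javax.net.ssl.HttpsURLConnection;
    import javax.net.ssl.SSLSession;

    // Sketch: accept the one known dev mismatch instead of disabling verification entirely.
    public final class DevHostnameVerifier {
        public static void install() {
            final HostnameVerifier defaultVerifier = HttpsURLConnection.getDefaultHostnameVerifier();
            HttpsURLConnection.setDefaultHostnameVerifier(new HostnameVerifier() {
                public boolean verify(String hostname, SSLSession session) {
                    if ("dev.domain.tld".equals(hostname)) {
                        return true; // cert says somedev.domain.tld; accepted only for this host
                    }
                    return defaultVerifier.verify(hostname, session);
                }
            });
        }
    }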

    Read the article

  • Cant we use a Set or collection as a return type in GAE?

    - by user273422
    In my code i have used Set<Employees> as a return type to my function addEmp(). So, i m gettin an Compilation error. The Error is: Compiling module com.employeedepartmentgae.Employeedepartmentgae Refreshing module from source Validating newly compiled units Removing units with errors [ERROR] Errors in 'file:/home/wissen18/employeedepartmentgae/src/com/employeedepartmentgae/client/GreetingServiceAsync.java' [ERROR] Line 6: The import com.employeedepartmentgae.server.domainobject.Employee cannot be resolved [ERROR] Line 18: Employee cannot be resolved to a type [ERROR] Errors in 'file:/home/wissen18/employeedepartmentgae/src/com/employeedepartmentgae/client/GreetingService.java' [ERROR] Line 6: The import com.employeedepartmentgae.server.domainobject.Employee cannot be resolved [ERROR] Line 20: Employee cannot be resolved to a type [ERROR] Errors in 'file:/home/wissen18/employeedepartmentgae/src/com/employeedepartmentgae/client/EmployeeWidget.java' [ERROR] Line 12: The import com.employeedepartmentgae.server.domainobject.Employee cannot be resolved [ERROR] Line 75: The method addEmp(String, String, String, AsyncCallback) from the type GreetingServiceAsync refers to the missing type Employee [ERROR] Line 75: The type new AsyncCallback(){} must implement the inherited abstract method AsyncCallback.onSuccess(Set) [ERROR] Line 75: Employee cannot be resolved to a type [ERROR] Line 94: The method onSuccess(Set) of type new AsyncCallback(){} must override or implement a supertype method [ERROR] Line 94: Employee cannot be resolved to a type [ERROR] Line 96: Employee cannot be resolved to a type [ERROR] Line 96: Employee cannot be resolved to a type [ERROR] Line 98: Employee cannot be resolved to a type Removing invalidated units [WARN] Compilation unit 'file:/home/wissen18/employeedepartmentgae/src/com/employeedepartmentgae/client/Employeedepartmentgae.java' is removed due to invalid reference(s): [WARN] file:/home/wissen18/employeedepartmentgae/src/com/employeedepartmentgae/client/EmployeeWidget.java [WARN] Compilation unit 'file:/home/wissen18/employeedepartmentgae/src/com/employeedepartmentgae/client/DepartmentWidget.java' is removed due to invalid reference(s): [WARN] file:/home/wissen18/employeedepartmentgae/src/com/employeedepartmentgae/client/GreetingService.java [WARN] file:/home/wissen18/employeedepartmentgae/src/com/employeedepartmentgae/client/GreetingServiceAsync.java Computing all possible rebind results for 'com.employeedepartmentgae.client.Employeedepartmentgae' Rebinding com.employeedepartmentgae.client.Employeedepartmentgae Checking rule [ERROR] Unable to find type 'com.employeedepartmentgae.client.Employeedepartmentgae' [ERROR] Hint: Previous compiler errors may have made this type unavailable [ERROR] Hint: Check the inheritance chain from your module; it may not be inheriting a required module or a module may not be adding its source path entries properly So please help me.....
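
    Reading the compiler output above, it is the GWT client-side compile failing to resolve com.employeedepartmentgae.server.domainobject.Employee, so the Set return type itself is probably not the culprit: client code can only reference classes that sit on the GWT source path (typically a client or shared package) and are serializable for RPC. A sketch of the usual shape, with the package name assumed rather than taken from the project:

    package com.employeedepartmentgae.shared;   // on the GWT source path, not under .server

    import com.google.gwt.user.client.rpc.IsSerializable;
    import java.io.Serializable;

    // Sketch of a translatable DTO that an RPC interface could return in a Set<Employee>.
    public class Employee implements Serializable, IsSerializable {
        private String name;

        public Employee() { }                   // no-arg constructor required for GWT RPC

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    The shared package also has to be listed as a source path in the module's .gwt.xml for the client code to see it.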

    Read the article

  • Authentication mechanism comparison

    - by Heavy Bytes
    I have to start a new project where user authentication/management will be required. A lot of websites use existing authentication mechanisms like Facebook/Twitter/OpenID/Google/etc. (even SO). While I understand that they are used to simplify parts of this workflow, can someone enumerate the pluses and minuses of using one of these authentication mechanisms versus a conventional sign-up flow, and what should I look for when making this choice? Thanks in advance!

    Read the article

  • ABBYY Mobile OCR Engine for Iphone

    - by Heavy Bytes
    I am looking to use/buy an OCR solution for my next iPhone app. Searching through the answers on this site didn't really help me much. Has anybody used the ABBYY Mobile OCR Engine for iPhone? What interests me is how good its recognition is and how much it costs. Thank you.

    Read the article

  • Expose jar resources over web

    - by Heavy Bytes
    I have a web service (built with Spring-WS). I have a jar with several schemas (schema1.xsd, schema2.xsd, and schema3.xsd) which I include in my web service. Is there a way to expose the schemas from the jar through a servlet in my web-service web app? My Spring MessageDispatcherServlet is mapped to /ws/, and I would like my schemas to be exposed at /schemas/schema1.xsd, /schemas/schema2.xsd, and so on. I have an idea of how to do it with a servlet, but it's too verbose and there has to be a nicer way. What I am thinking of is a servlet filter: everything that hits /schemas/ is checked against my list of allowed resources and served if it matches. This has to be a server-agnostic solution. (For instance, http://tuckey.org/urlrewrite/ will not work.) Thanks.
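    A minimal sketch of the servlet-based approach the asker describes, using only the plain Servlet API so it stays server-agnostic. The class name, the whitelist, and the assumption that the XSDs sit at the root of the jar's classpath are all illustrative, not from the original post:

        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;
        import java.util.Arrays;
        import java.util.HashSet;
        import java.util.Set;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        // Hypothetical servlet, mapped to /schemas/* in web.xml, that streams
        // whitelisted XSDs straight off the classpath (i.e. out of the jar in WEB-INF/lib).
        public class SchemaServlet extends HttpServlet {
            private static final Set<String> ALLOWED =
                    new HashSet<String>(Arrays.asList("schema1.xsd", "schema2.xsd", "schema3.xsd"));

            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws ServletException, IOException {
                // /schemas/schema1.xsd -> pathInfo "/schema1.xsd"
                String name = req.getPathInfo() == null ? "" : req.getPathInfo().substring(1);
                if (!ALLOWED.contains(name)) {
                    resp.sendError(HttpServletResponse.SC_NOT_FOUND);
                    return;
                }
                InputStream in = getClass().getClassLoader().getResourceAsStream(name);
                if (in == null) {
                    resp.sendError(HttpServletResponse.SC_NOT_FOUND);
                    return;
                }
                resp.setContentType("text/xml");
                OutputStream out = resp.getOutputStream();
                byte[] buf = new byte[4096];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
                in.close();
            }
        }

    If the schemas live under a package path inside the jar, the resource name passed to getResourceAsStream would need that prefix.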

    Read the article

  • Elfsign Object Signing on Solaris

    - by danx
    Don't let this happen to you: use elfsign! Solaris elfsign(1) is a command that signs and verifies ELF format executables. That includes not just executable programs (such as ls or cp), but other ELF format files including libraries (such as libnvpair.so) and kernel modules (such as autofs). Elfsign has been available since Solaris 10, and ELF format files distributed with Solaris since Solaris 10 are signed by either Sun Microsystems or its successor, Oracle Corporation.

    When an ELF file is signed, elfsign adds a new section to the ELF file, .SUNW_signature, that contains an RSA public key signature and other information about the signer: the algorithm used, the algorithm OID, the signer CN/OU, and a time stamp. The signature section can later be verified by elfsign or other software by matching the signature in the file against the ELF file contents (excluding the signature).

    ELF executable files may also be signed by a 3rd party or by the customer. This is useful for verifying the origin and authenticity of executable files installed on a system. The 3rd-party or customer public key certificate should be installed in /etc/certs/ to allow verification by elfsign. For currently released versions of Solaris, only cryptographic framework plugin libraries are verified by Solaris. However, all ELF files may be verified by the elfsign command at any time.

    Elfsign Algorithms
    Elfsign signatures are created by taking a digest of the ELF section contents, then signing the digest with RSA. To verify, one takes a digest of the ELF file and compares it with the expected digest that is computed from the signature and the RSA public key. Originally elfsign took an MD5 digest of a SHA-1 digest of the ELF file sections, then signed the resulting digest with RSA. In Solaris 11.1 and then Solaris 11.1 SRU 7 (5/2013), the available elfsign crypto algorithms were expanded to keep up with evolving cryptography. The following table shows the available elfsign algorithms:

    Elfsign Algorithm                 Solaris Release       Comments
    elfsign sign -F rsa_md5_sha1      S10, S11.0, S11.1     Default for S10. Not recommended*
    elfsign sign -F rsa_sha1          S11.1                 Default for S11.1. Not recommended
    elfsign sign -F rsa_sha256        S11.1 patch SRU7+     Recommended

    *Most or all CAs do not accept MD5 CSRs and do not issue MD5 certs due to MD5 hash collision problems.

    RSA Key Length. I recommend using an RSA-2048 key length with elfsign as the best balance between a long expected "life time", interoperability, and performance. RSA-2048 keys have an expected lifetime through 2030 (and probably beyond). For details, see Recommendation for Key Management: Part 1: General, NIST Publication SP 800-57 part 1 (rev. 3, 7/2012, PDF), tables 2 and 4 (pp. 64, 67).

    Step 1: create or obtain a key and cert
    The first step in using elfsign is to obtain a key and cert from a public Certificate Authority (CA), or create your own self-signed key and cert. I'll briefly explain both methods.

    Obtaining a Certificate from a CA
    To obtain a cert from a CA, such as Verisign, Thawte, or Go Daddy (to name a few random examples), you create a private key and a Certificate Signing Request (CSR) file and send it to the CA, following the instructions on the CA's website. They send back a signed public key certificate. The public key cert, along with the private key you created, is used by elfsign to sign an ELF file. The public key cert is distributed with the software and is used by elfsign to verify elfsign signatures in ELF files.
You need to request an RSA "Class 3 public key certificate", which is used for servers and software signing. Elfsign uses RSA and we recommend RSA-2048 keys. The private key and CSR can be generated with openssl(1) or pktool(1) on Solaris. Here's a simple example that uses pktool to generate a private RSA-2048 key and a CSR for sending to a CA:

$ pktool gencsr keystore=file format=pem outcsr=MYCSR.p10 \
    subject="CN=canineswworks.com,OU=Canine SW object signing" \
    outkey=MYPRIVATEKEY.key
$ openssl rsa -noout -text -in MYPRIVATEKEY.key
Private-Key: (2048 bit)
modulus: 00:d2:ef:42:f2:0b:8c:96:9f:45:32:fc:fe:54:94: . . . [omitted for brevity] . . . c9:c7
publicExponent: 65537 (0x10001)
privateExponent: 26:14:fc:49:26:bc:a3:14:ee:31:5e:6b:ac:69:83: . . . [omitted for brevity] . . . 81
prime1: 00:f6:b7:52:73:bc:26:57:26:c8:11:eb:6c:dc:cb: . . . [omitted for brevity] . . . bc:91:d0:40:d6:9d:ac:b5:69
prime2: 00:da:df:3f:56:b2:18:46:e1:89:5b:6c:f1:1a:41: . . . [omitted for brevity] . . . f3:b7:48:de:c3:d9:ce:af:af
exponent1: 00:b9:a2:00:11:02:ed:9a:3f:9c:e4:16:ce:c7:67: . . . [omitted for brevity] . . . 55:50:25:70:d3:ca:b9:ab:99
exponent2: 00:c8:fc:f5:57:11:98:85:8e:9a:ea:1f:f2:8f:df: . . . [omitted for brevity] . . . 23:57:0e:4d:b2:a0:12:d2:f5
coefficient: 2f:60:21:cd:dc:52:76:67:1a:d8:75:3e:7f:b0:64: . . . [omitted for brevity] . . . 06:94:56:d8:9d:5c:8e:9b
$ openssl req -noout -text -in MYCSR.p10
Certificate Request:
    Data:
        Version: 2 (0x2)
        Subject: OU=Canine SW object signing, CN=canineswworks.com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus: 00:d2:ef:42:f2:0b:8c:96:9f:45:32:fc:fe:54:94: . . . [omitted for brevity] . . . c9:c7
                Exponent: 65537 (0x10001)
        Attributes:
    Signature Algorithm: sha1WithRSAEncryption
        b3:e8:30:5b:88:37:68:1c:26:6b:45:af:5e:de:ea:60:87:ea: . . . [omitted for brevity] . . . 06:f9:ed:b4

Secure storage of the RSA private key. The private key needs to be protected if the signing key is used for production (as opposed to just testing). That is, protect the key to protect against unauthorized signatures by others. One method is to use a PIN-protected PKCS#11 keystore: the private key you generate should be stored in a secure manner, such as in a PKCS#11 keystore using pktool(1). Otherwise others could create signatures with your key. Other secure key storage mechanisms include an SCA-6000 crypto card, a USB thumb drive stored in a locked area, a dedicated server with restricted access, Oracle Key Manager (OKM), or some combination of these. I also recommend secure backup of the private key.

Here's an example of generating a private key protected in the PKCS#11 keystore, and a CSR:

$ pktool setpin # use if PIN not set yet
Enter token passphrase: changeme
Create new passphrase:
Re-enter new passphrase:
Passphrase changed.
$ pktool gencsr keystore=pkcs11 label=MYPRIVATEKEY \
    format=pem outcsr=MYCSR.p10 \
    subject="CN=canineswworks.com,OU=Canine SW object signing"
$ pktool list keystore=pkcs11
Enter PIN for Sun Software PKCS#11 softtoken:
Found 1 asymmetric public keys.
Key #1 - RSA public key: MYPRIVATEKEY

Here's another example that uses openssl instead of pktool to generate a private key and CSR:

$ openssl genrsa -out cert.key 2048
$ openssl req -new -key cert.key -out MYCSR.p10

Self-Signed Cert
You can use openssl or pktool to create a private key and a self-signed public key certificate. A self-signed cert is useful for development, testing, and internal use. The private key created should be stored in a secure manner, as mentioned above.
The following example creates a private key, MYSELFSIGNED.key, and a public key cert, MYSELFSIGNED.pem, using pktool, and displays the contents with the openssl command.

$ pktool gencert keystore=file format=pem serial=0xD06F00D lifetime=20-year \
    keytype=rsa hash=sha256 outcert=MYSELFSIGNED.pem outkey=MYSELFSIGNED.key \
    subject="O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com"
$ pktool list keystore=file objtype=cert infile=MYSELFSIGNED.pem
Found 1 certificates.
1. (X.509 certificate)
    Filename: MYSELFSIGNED.pem
    ID: c8:24:59:08:2b:ae:6e:5c:bc:26:bd:ef:0a:9c:54:de:dd:0f:60:46
    Subject: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com
    Issuer: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com
    Not Before: Oct 17 23:18:00 2013 GMT
    Not After: Oct 12 23:18:00 2033 GMT
    Serial: 0xD06F00D0
    Signature Algorithm: sha256WithRSAEncryption
$ openssl x509 -noout -text -in MYSELFSIGNED.pem
Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 3496935632 (0xd06f00d0)
        Signature Algorithm: sha256WithRSAEncryption
        Issuer: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com
        Validity
            Not Before: Oct 17 23:18:00 2013 GMT
            Not After : Oct 12 23:18:00 2033 GMT
        Subject: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com
        Subject Public Key Info:
            Public Key Algorithm: rsaEncryption
                Public-Key: (2048 bit)
                Modulus: 00:bb:e8:11:21:d9:4b:88:53:8b:6c:5a:7a:38:8b: . . . [omitted for brevity] . . . bf:77
                Exponent: 65537 (0x10001)
    Signature Algorithm: sha256WithRSAEncryption
        9e:39:fe:c8:44:5c:87:2c:8f:f4:24:f6:0c:9a:2f:64:84:d1: . . . [omitted for brevity] . . . 5f:78:8e:e8
$ openssl rsa -noout -text -in MYSELFSIGNED.key
Private-Key: (2048 bit)
modulus: 00:bb:e8:11:21:d9:4b:88:53:8b:6c:5a:7a:38:8b: . . . [omitted for brevity] . . . bf:77
publicExponent: 65537 (0x10001)
privateExponent: 0a:06:0f:23:e7:1b:88:62:2c:85:d3:2d:c1:e6:6e: . . . [omitted for brevity] . . . 9c:e1:e0:0a:52:77:29:4a:75:aa:02:d8:af:53:24:c1
prime1: 00:ea:12:02:bb:5a:0f:5a:d8:a9:95:b2:ba:30:15: . . . [omitted for brevity] . . . 5b:ca:9c:7c:19:48:77:1e:5d
prime2: 00:cd:82:da:84:71:1d:18:52:cb:c6:4d:74:14:be: . . . [omitted for brevity] . . . 5f:db:d5:5e:47:89:a7:ef:e3
exponent1: 32:37:62:f6:a6:bf:9c:91:d6:f0:12:c3:f7:04:e9: . . . [omitted for brevity] . . . 97:3e:33:31:89:66:64:d1
exponent2: 00:88:a2:e8:90:47:f8:75:34:8f:41:50:3b:ce:93: . . . [omitted for brevity] . . . ff:74:d4:be:f3:47:45:bd:cb
coefficient: 4d:7c:09:4c:34:73:c4:26:f0:58:f5:e1:45:3c:af: . . . [omitted for brevity] . . . af:01:5f:af:ad:6a:09:bf

Step 2: Sign the ELF File object
By now you should have your private key and have obtained, by hook or by crook, a cert (either from a CA or a self-signed cert you created). The next step is to sign one or more objects with your private key and cert. Here's a simple example that creates an object file, signs it, verifies it, and lists the contents of the ELF signature.

$ echo '#include <stdio.h>\nint main(){printf("Hello\\n");}' > hello.c
$ make hello
cc -o hello hello.c
$ elfsign verify -v -c MYSELFSIGNED.pem -e hello
elfsign: no signature found in hello.
$ elfsign sign -F rsa_sha256 -v -k MYSELFSIGNED.key -c MYSELFSIGNED.pem -e hello
elfsign: hello signed successfully.
format: rsa_sha256.
signer: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com.
signed on: October 17, 2013 04:22:49 PM PDT.
$ elfsign list -f format -e hello
rsa_sha256
$ elfsign list -f signer -e hello
O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com
$ elfsign list -f time -e hello
October 17, 2013 04:22:49 PM PDT
$ elfsign verify -v -c MYSELFSIGNED.key -e hello
elfsign: verification of hello failed.
format: rsa_sha256.
signer: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com.
signed on: October 17, 2013 04:22:49 PM PDT.

Signing using the pkcs11 keystore
To sign the ELF file using a private key in the secure pkcs11 keystore, replace "-k MYSELFSIGNED.key" in the "elfsign sign" command line with "-T MYPRIVATEKEY", where MYPRIVATEKEY is the pkcs11 token label.

Step 3: Install the cert and test on another system
Just signing the object isn't enough. You need to copy or install the cert and the signed ELF file(s) on another system to test that the signature is OK. Your public key cert should be installed in /etc/certs. Use elfsign verify to verify the signature. Elfsign verify checks each cert in /etc/certs until it finds one that matches the elfsign signature in the file. If one isn't found, the verification fails. Here's an example:

$ su
Password:
# rm /etc/certs/MYSELFSIGNED.key
# cp MYSELFSIGNED.pem /etc/certs
# exit
$ elfsign verify -v hello
elfsign: verification of hello passed.
format: rsa_sha256.
signer: O=Canine Software Works, OU=Self-signed CA, CN=canineswworks.com.
signed on: October 17, 2013 04:24:20 PM PDT.

After testing, package your cert along with your ELF object to allow elfsign verification after your cert and object are installed or copied.

Under the Hood: elfsign verification
Here are the steps taken to verify an ELF file signed with elfsign. The steps to sign the file are similar, except that the private key exponent is used instead of the public key exponent and the .SUNW_signature section is written to the ELF file instead of being read from it.

Generate a digest (SHA-256) of the ELF file sections. This digest uses all ELF sections loaded in memory, but excludes the ELF header, the .SUNW_signature section, and the symbol table.
Extract the RSA signature (RSA-2048) from the .SUNW_signature section.
Extract the RSA public key modulus and public key exponent (65537) from the public key cert.
Calculate the expected digest as follows: signature ^ publicKeyExponent mod publicKeyModulus
Strip the PKCS#1 padding (most significant bytes) from the above. The padding is 0x00, 0x01, 0xff, 0xff, . . ., 0xff, 0x00.
If the actual digest == expected digest, the ELF file is verified (OK).

Further Information
elfsign(1), pktool(1), and openssl(1) man pages.
"Signed Solaris 10 Binaries?" blog by Darren Moffat (2005) shows how to use elfsign.
"Simple CLI based CA on Solaris" blog by Darren Moffat (2008) shows how to set up a simple CA for use with self-signed certificates.
"How to Create a Certificate by Using the pktool gencert Command", System Administration Guide: Security Services (available at docs.oracle.com).
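To make the "Under the Hood" arithmetic concrete, here is a compact sketch in Java of the verification math only. It is not Solaris code: it assumes the SHA-256 digest of the ELF sections, the raw signature bytes, and the public key modulus/exponent have already been extracted, and it simplifies the PKCS#1 v1.5 check by comparing trailing bytes rather than parsing the full DigestInfo structure.

import java.math.BigInteger;
import java.util.Arrays;

// Illustration of: expected = signature ^ publicKeyExponent mod publicKeyModulus,
// followed by stripping the 0x00 0x01 0xff ... 0xff 0x00 padding and comparing digests.
public final class ElfsignMathSketch {
    static boolean verify(byte[] sha256, byte[] sig, BigInteger modulus, BigInteger exponent) {
        byte[] em = new BigInteger(1, sig).modPow(exponent, modulus).toByteArray();

        int i = 0;
        if (em[i] == 0x00) i++;             // optional leading zero (BigInteger often drops it)
        if (em[i++] != 0x01) return false;  // block type 1
        while (i < em.length && (em[i] & 0xff) == 0xff) i++;   // 0xff filler
        if (i >= em.length || em[i++] != 0x00) return false;   // 0x00 separator

        // Simplification: real PKCS#1 embeds an ASN.1 DigestInfo here; we only
        // compare the trailing digest bytes, matching the blog's summary of the steps.
        byte[] expected = Arrays.copyOfRange(em, em.length - sha256.length, em.length);
        return Arrays.equals(expected, sha256);
    }
}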

    Read the article
