Search Results

Search found 8448 results on 338 pages for 'initialization block'.


  • Reasons for firewall alerts from ICMPv6 Local Link Address unreachable?

    - by Pulse
    For some reason I'm receiving numerous alerts, for a variety of processes, from my firewall. These are all related to ICMPv6 and are the same, apart from the process for which the alert was generated. 'Application/Process' Is trying to Access the Internet Remote Address - fe80::7191:6bd1:e5fa:58af [The Link Local Address] ICMP Type = 1 [Destination Unreachable] ICMP Code = 3 [Address Unreachable] Protocol = ICMPv6 Allow or Block If I Allow or Block, the alert never reoccurs. I understand what the various elements of these messages represent, I just can't fathom out why they are being generated. What could be the reason for these Alerts? OS - Windows 7 x86 Ultimate Thanks

    Read the article

  • How to open a server port outside of an OpenVPN tunnel with a pf firewall on OSX (BSD)

    - by Timbo
    I have a Mac mini that I use as a media server running XBMC; it serves media from my NAS to my stereo and TV (which has been color calibrated with a Spyder3Express, happy). The Mac runs OSX 10.8.2 and the internet connection is tunneled for general privacy over OpenVPN through Tunnelblick. I believe my anonymous VPN provider pushes "redirect_gateway" to OpenVPN/Tunnelblick, because when it is up it effectively tunnels all non-LAN traffic, inbound and outbound. As an unwanted side effect, that also exposes the box's server ports unprotected to the outside world and bypasses my firewall-router (Netgear SRX5308). I have run nmap from outside the LAN on the VPN IP, and the server ports on the mini are clearly visible and connectable. The mini has the following ports open: ssh/22, ARD/5900, and 8080+9090 for the XBMC iOS client Constellation. I also have a Synology NAS which, apart from LAN file serving over AFP and WebDAV, only serves up an OpenVPN/1194 and a PPTP/1723 server. When outside of the LAN I connect to this from my laptop over OpenVPN and from my iPhone over PPTP. I only want to connect through AFP/548 from the mini to the NAS. The border firewall (SRX5308) just works excellently, stable and with a very high throughput when streaming from various VOD services. My connection is a 100/10 with close to theoretical max throughput. The ruleset is as follows:

    Inbound:
        PPTP/1723 Allow always to 10.0.0.40 (NAS/VPN server) from a restricted IP range corresponding to the possible cell provider range
        OpenVPN/1194 Allow always to 10.0.0.40 (NAS/VPN server) from any
    Outbound:
        Default outbound policy: Allow Always
        OpenVPN/1194 TCP Allow always from 10.0.0.40 (NAS) to a.b.8.1-a.b.8.254 (VPN provider)
        OpenVPN/1194 UDP Allow always to 10.0.0.40 (NAS) to a.b.8.1-a.b.8.254 (VPN provider)
        Block always from NAS to any

    On the mini I have disabled the OSX Application Level Firewall because it throws popups which don't remember my choices from one time to another, and that's annoying on a media server. Instead I run Little Snitch, which controls outgoing connections nicely on an application level. I have configured the excellent OSX built-in firewall pf (from BSD) as follows, pf.conf (Apple App firewall tie-ins removed) (# replaced with % to avoid formatting errors):

        ### macro name for external interface.
        eth_if = "en0"
        vpn_if = "tap0"
        ### wifi_if = "en1"
        ### %usb_if = "en3"
        ext_if = $eth_if
        LAN="{10.0.0.0/24}"
        ### General housekeeping rules ###
        ### Drop all blocked packets silently
        set block-policy drop
        ### all incoming traffic on external interface is normalized and fragmented
        ### packets are reassembled.
        scrub in on $ext_if all fragment reassemble
        scrub in on $vpn_if all fragment reassemble
        scrub out all
        ### exercise antispoofing on the external interface, but add the local
        ### loopback interface as an exception, to prevent services utilizing the
        ### local loop from being blocked accidentally.
        ###
        set skip on lo0
        antispoof for $ext_if inet
        antispoof for $vpn_if inet
        ### spoofing protection for all interfaces
        block in quick from urpf-failed
        #############################
        block all
        ### Access to the mini server over ssh/22 and remote desktop/5900 from LAN/en0 only
        pass in on $eth_if proto tcp from $LAN to any port {22, 5900, 8080, 9090}
        ### Allow all udp and icmp also, necessary for Constellation. Could be tightened.
        pass on $eth_if proto {udp, icmp} from $LAN to any
        ### Allow AFP to 10.0.0.40 (NAS)
        pass out on $eth_if proto tcp from any to 10.0.0.40 port 548
        ### Allow OpenVPN tunnel setup over unprotected link (en0) only to VPN provider IPs
        ### and port ranges
        pass on $eth_if proto tcp from any to a.b.8.0/24 port 1194:1201
        ### OpenVPN Tunnel rules. All traffic allowed out, only in to ports 4100-4110
        ### Outgoing pings ok
        pass in on $vpn_if proto {tcp, udp} from any to any port 4100:4110
        pass out on $vpn_if proto {tcp, udp, icmp} from any to any

    So what are my goals and what does the above setup achieve (until you tell me otherwise :)?
        1) Full LAN access to the above ports on the mini/media server (including through my own VPN server)
        2) All internet traffic from the mini/media server is anonymized and tunneled over VPN
        3) If OpenVPN/Tunnelblick on the mini drops the connection, nothing is leaked, both because of pf and because of the router's outgoing ruleset. It can't even do a DNS lookup through the router.
    So what do I have to hide with all this? Nothing much really, I just got carried away trying to stop port scans through the VPN tunnel :) In any case this setup works perfectly and it is very stable.

    The problem at last! I want to run a Minecraft server, and I installed it under a separate user account on the mini server (user=mc) to keep things partitioned. I don't want this server accessible through the anonymized VPN tunnel, because there are a lot more port scans and hacking attempts through that than over my regular IP, and I don't trust Java in general. So I added the following pf rules on the mini:

        ### Allow Minecraft public through user mc
        pass in on $eth_if proto {tcp,udp} from any to any port 24983 user mc
        pass out on $eth_if proto {tcp, udp} from any to any user mc

    And these additions on the border firewall:

    Inbound:
        Allow always TCP/UDP from any to 10.0.0.40 (NAS)
    Outbound:
        Allow always TCP port 80 from 10.0.0.40 to any (needed for online account checkups)

    This works fine, but only when the OpenVPN/Tunnelblick tunnel is down. When it is up, no connection to the Minecraft server is possible from outside the LAN; inside the LAN it is always OK. Everything else functions as intended. I believe the redirect_gateway push is close to the root of the problem, but I want to keep that specific VPN provider because of the fantastic throughput, price and service.

    The solution? How can I open up the Minecraft server port outside of the tunnel so it's only available over en0, not the VPN tunnel? Should I add a static route? But I don't know which IPs will be connecting... stumbles. How secure would you estimate this setup to be, and do you have other improvements to share? I've searched extensively in the last few days to no avail... If you've read this far I bet you know the answer :)

    Read the article

  • Throwing TRIM support in Ubuntu guest at Win7-Virtualbox host

    - by user141472
    I have VirtualBox 4.1.14 on Windows 7 as host, and Ubuntu Server 11.10 as guest. The system was installed on a traditional HDD years ago (and upgraded later), but it now lives on an SSD as a dynamically expanding drive. The "AHCI" and "it's SSD" features are enabled on the SATA controller. The problem is that this expanding drive has grown to almost its maximum size (90% of it), while only about 50% is actually used inside the VM. Also, the guest VM does not recognize /dev/sda as an SSD: /sys/block/sda/queue/rotational says "1" and /sys/block/sda/queue/discard_* all say "0". And, of course, I cannot run fstrim /; it says the operation is not supported. Is there some trick to enable TRIM support in my guest system without reinstalling it?

    Read the article

  • Do I have a bad SD card?

    - by User1
    I'm trying to copy data from my computer to an SD card. After a few hundred megs, I keep getting the following errors in dmesg: [34542.836192] end_request: I/O error, dev mmcblk0, sector 855936 [34542.836284] FAT: unable to read inode block for updating (i_pos 13694981) [34542.836306] MMC: killing requests for dead queue [34542.836310] end_request: I/O error, dev mmcblk0, sector 9280 [34542.837035] FAT: unable to read inode block for updating (i_pos 148486) [34542.837062] MMC: killing requests for dead queue [34542.837066] end_request: I/O error, dev mmcblk0, sector 1 [34542.837074] FAT: bread failed in fat_clusters_flush [34542.837085] MMC: killing requests for dead queue These were all files I copied from a smaller SD card. I just want to transfer them to my new, larger card for my phone. I tried the same experiment with different files on a different machine and the card failed again. Reading data from the old card went fine. My systems are older and the new SD card is new (16GB Class 4). Could this be that my computers are too old? Is there a definitive test to verify if my SD card is bad?

    Read the article

  • Does CHECK TABLE add read/write locks?

    - by Ztyx
    Hi, Yesterday I ran CHECK TABLE on a table that is read very frequently. I scanned the MySQL documentation for CHECK TABLE for any mentions of "lock" (and found none) and also noticed that only the SELECT privilege was required to run the command. I therefore concluded that the command did not take any read lock and was safe to run even in production. Sadly, running the command took 1 minute and 37 seconds and seemed to block all read access. My question is therefore: does CHECK TABLE take any read lock? Is there any other reason why I experienced a read block on the table? Thanks

    Read the article

  • Does anyone know of an inexpensive NAT router that has the ability to limit access to the Internet to a specific MAC address?

    - by Corey
    Does anyone know of an inexpensive NAT router that has the ability to limit access to the Internet to a specific MAC address? I know the Linksys routers have a MAC filtering feature, but it is the opposite of what I need. It allows you to block access to a specific MAC address. I need something that will block all, but allow an exception. I'm dealing with some VOIP issues in my company's network, and I think the answer is to have a separate router on the network for my PBX to use. I want to make sure that other nodes are not allowed to access the Internet via this second router.

    Read the article

  • Filtering downloading a file

    - by Ozgun Sunal
    People, I know there are several types of firewalls operating at different layers of the OSI model: ACLs (layer 3 firewalls that filter based on port numbers and IP addresses), SPI (which examines patterns in the data at layer 3 and determines whether the content is malicious or not), and application layer firewalls, which are capable of understanding the data at that level. Considering this, I'll give an example and find out what I need to do. Let's say we have a computer with access to the Internet. I want to download a file or display a web page from one website but block access to another website (or to downloads). To do this, I can't block access to the web browser on the third-party firewall, because that would shut down all access, and ACLs alone won't do it. So, which kind of firewall will make it possible to filter specific traffic, and how?

    Read the article

  • Proper way to configure ~/.Xsession with a standalone window manager to gracefully end a session

    - by cYrus
    I'm using xdm and my ~/.Xsession looks like this: # <initialization stuff here> exec openbox It works, but I've noticed that when I log out Openbox doesn't gracefully kill all the applications. In particular Google Chrome complains about that. How can I make sure to wait for all processes to exit (just like others configurations: Gnome, KDE, Windows ...)? The only (ugly) solution that I've found involves sleep and kill into ~/.Xsession.

    Read the article

  • how to fight back attacks on my web service

    - by user12145
    My Apache web service has been getting a large quantity of requests over the past days, each one trying a somewhat random login to gain access. I identified about 60 such IPs (a few samples below), all belonging to Google. Is there a way to find more information about the origin of the attacker, or should I just block these IPs? Secondly, should I attempt to block the identified IPs' subnets (74.125.46.*) as a preventive measure? 72.14.194.65 64.233.172.20 74.125.75.19 72.14.194.33 74.125.46.87 74.125.44.91 74.125.46.91

    Read the article

  • How can I keep persistent cookies from certain domains only?

    - by Mike L.
    This question is similar to this one, which covers Firefox, but I want to know how to do it in Chrome: I want Chrome to clear cookies from all sites except those from certain domains. In the Cookies section of the Content Settings I've made the following selections:

        (*) Allow local data to be set (recommended)
        ( ) Allow local data to be set for the current session only
        ( ) Block sites from setting any data
        [ ] Block third-party cookies and site data
        [x] Clear cookies and other site and plug-in data when I close my browser

    After logging in to my preferred website(s), I find the required domains listed when I click on All cookies and site data. Let's say I find some cookies for mysite.com and www.mysite.com. Now I click on Manage exceptions and enter these items:

        Hostname Pattern     Behavior
        -------------------------------
        mysite.com           Allow
        www.mysite.com       Allow

    Unfortunately, this does not seem to work, because when I close Chrome and reopen it, all cookies are gone, even those from the configured mysite.com hosts.

    Read the article

  • Sublinear Extra Space MergeSort

    - by hulkmeister
    I am reviewing basic algorithms from a book called Algorithms by Robert Sedgewick, and I came across a problem in MergeSort that I am, sad to say, having difficulty solving. The problem is below: Sublinear Extra Space. Develop a merge implementation that reduces that extra space requirement to max(M, N/M), based on the following idea: Divide the array into N/M blocks of size M (for simplicity in this description, assume that N is a multiple of M). Then, (i) considering the blocks as items with their first key as the sort key, sort them using selection sort; and (ii) run through the array merging the first block with the second, then the second block with the third, and so forth. The problem I have with the problem is that based on the idea Sedgewick recommends, the following set of arrays will not be sorted: {0, 10, 12}, {3, 9, 11}, {5, 8, 13}. The algorithm I use is the following: Divide the full array into subarrays of size M. Run Selection Sort on each of the subarrays. Merge each of the subarrays using the method Sedgwick recommends in (ii). (This is where I encounter the problem of where to store the results after the merge.) This leads to wanting to increase the size of the auxiliary space needed to handle at least two subarrays at a time (for merging), but based on the specifications of the problem, that is not allowed. I have also considered using the original array as space for one subarray and using the auxiliary space for the second subarray. However, I can't envision a solution that does not end up overwriting the entries of the first subarray. Any ideas on other ways this can be done? NOTE: If this is suppose to be on StackOverflow.com, please let me know how I can move it. I posted here because the question was academic.
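
    For reference, a minimal Java sketch (my own illustration, not Sedgewick's solution) of the merging pass described in (ii): an auxiliary buffer of M entries holds the left block while it is merged with the block to its right, so the extra space stays at max(M, N/M). Run on the blocks from the question, it also reproduces the observation that a single pass does not fully sort them.

        import java.util.Arrays;

        public class BlockMerge {
            // Merge a[lo..lo+m-1] (copied into aux) with a[lo+m..hi] back into a[lo..hi].
            // The write index can never overtake the read index of the right block,
            // so only m extra slots are needed.
            static void mergeBlocks(int[] a, int lo, int hi, int m, int[] aux) {
                System.arraycopy(a, lo, aux, 0, m);
                int i = 0, j = lo + m, k = lo;
                while (k <= hi) {
                    if (i >= m)             a[k++] = a[j++];
                    else if (j > hi)        a[k++] = aux[i++];
                    else if (a[j] < aux[i]) a[k++] = a[j++];
                    else                    a[k++] = aux[i++];
                }
            }

            public static void main(String[] args) {
                int m = 3;
                int[] a = {0, 10, 12, 3, 9, 11, 5, 8, 13};  // the three blocks from the question
                int[] aux = new int[m];
                // (i) the blocks are already ordered by their first keys (0, 3, 5),
                //     so the selection sort on whole blocks is a no-op here
                // (ii) merge the first block with the second, then the second with the third, ...
                for (int lo = 0; lo + m < a.length; lo += m) {
                    mergeBlocks(a, lo, Math.min(lo + 2 * m, a.length) - 1, m, aux);
                }
                // prints [0, 3, 9, 5, 8, 10, 11, 12, 13]: one pass is not a full sort,
                // matching the observation in the question
                System.out.println(Arrays.toString(a));
            }
        }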

    Read the article

  • Latest Chrome Canary Channel Build Adds Automatic ‘Malware Download’ Blocking Feature

    - by Akemi Iwaya
    As Chrome’s popularity continues to grow, malware authors are looking for new ways to target and trick users of Google’s browser into downloading malicious software to their computers. With this problem in mind, Google has introduced a new feature into the Canary Channel to automatically detect and block malware downloads whenever possible in order to help keep your system intact and safe. Screenshot courtesy of The Google Chrome Blog. In addition to the recent Reset Feature added to the stable build of Chrome this past August, the new feature in the Canary Channel build works to help protect you as follows: From the Google Chrome Blog post: In the current Canary build of Chrome, we’ll automatically block downloads of malware that we detect. If you see this message in the download tray at the bottom of your screen, you can click “Dismiss” knowing Chrome is working to keep you safe. (See screenshot above.) You can learn more about the new feature and download the latest Canary Channel build via the links below. Don’t mess with my browser! [Google Chrome Blog] Download the Latest Chrome Canary Build [Google] [via The Next Web]     

    Read the article

  • Collision detection with multiple polygons simultaneously

    - by Craig Innes
    I've written a collision system which detects/resolves collisions between a rectangular player and a convex polygon world using the Separating Axis Theorem. This scheme works fine when the player is colliding with a single polygon, but when I try to create a level made up of combinations of these shapes, the player gets "stuck" between shapes when trying to move from one polygon to the other. The reason for this seems to be that collisions are detected after the player has been pushed through the shape by its movement or gravity. When the system resolves the collision, it resolves them in an order that doesn't make sense (for example, when the player is moving from one flat rectangle to another, gravity pushes them below the ground, but the collision with the left hand side of the second block is resolved before the collision with the top of the block, meaning the player is pushed back left before being pushed back up). Other similar posts have resolved this problem by having a strict rule on which axes to resolve first. For example, always resolve the collision on the y axis, then if the object is still colliding with things, resolve on the x axis. This solution only works in the case of a completely axis oriented box world, and doesn't solve the problem if the player is stuck moving along a series of angled shapes or sliding down a wall. Does any one have any ideas of how I could alter my collision system to prevent these situations from happening?
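
    For context, here is a small, self-contained Java sketch (illustrative only, not the poster's code) of the SAT projection step: it projects two convex polygons onto a candidate axis and returns the overlap on that axis, which is the per-axis penetration depth that any resolution-ordering rule ultimately has to work from.

        import java.awt.geom.Point2D;

        public class SatProjection {
            // dot product of a vertex with the (unit-length) axis
            static double dot(Point2D.Double v, Point2D.Double axis) {
                return v.x * axis.x + v.y * axis.y;
            }

            // returns {min, max} of the polygon's projection onto the axis
            static double[] project(Point2D.Double[] poly, Point2D.Double axis) {
                double min = dot(poly[0], axis), max = min;
                for (Point2D.Double v : poly) {
                    double p = dot(v, axis);
                    if (p < min) min = p;
                    if (p > max) max = p;
                }
                return new double[] { min, max };
            }

            // overlap of the two projections on this axis (<= 0 means the axis separates them)
            static double overlapOnAxis(Point2D.Double[] a, Point2D.Double[] b, Point2D.Double axis) {
                double[] pa = project(a, axis), pb = project(b, axis);
                return Math.min(pa[1], pb[1]) - Math.max(pa[0], pb[0]);
            }

            public static void main(String[] args) {
                Point2D.Double[] player = { new Point2D.Double(0, 0), new Point2D.Double(1, 0),
                                            new Point2D.Double(1, 1), new Point2D.Double(0, 1) };
                Point2D.Double[] ground = { new Point2D.Double(-5, -1), new Point2D.Double(5, -1),
                                            new Point2D.Double(5, 0.2), new Point2D.Double(-5, 0.2) };
                // the player rectangle sinks 0.2 units into the ground along the y axis
                System.out.println("overlap on y axis: "
                        + overlapOnAxis(player, ground, new Point2D.Double(0, 1)));
            }
        }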

    Read the article

  • Optimizing Solaris 11 SHA-1 on Intel Processors

    - by danx
    SHA-1 is a "hash" or "digest" operation that produces a 160 bit (20 byte) checksum value on arbitrary data, such as a file. It is intended to uniquely identify text and to verify it hasn't been modified. Max Locktyukhin and others at Intel have improved the performance of the SHA-1 digest algorithm using multiple techniques. This code has been incorporated into Solaris 11 and is available in the Solaris Crypto Framework via the libmd(3LIB), the industry-standard libpkcs11(3LIB) library, and Solaris kernel module sha1. The optimized code is used automatically on systems with a x86 CPU supporting SSSE3 (Intel Supplemental SSSE3). Intel microprocessor architectures that support SSSE3 include Nehalem, Westmere, Sandy Bridge microprocessor families. Further optimizations are available for microprocessors that support AVX (such as Sandy Bridge). Although SHA-1 is considered obsolete because of weaknesses found in the SHA-1 algorithm—NIST recommends using at least SHA-256, SHA-1 is still widely used and will be with us for awhile more. Collisions (the same SHA-1 result for two different inputs) can be found with moderate effort. SHA-1 is used heavily though in SSL/TLS, for example. And SHA-1 is stronger than the older MD5 digest algorithm, another digest option defined in SSL/TLS. Optimizations Review SHA-1 operates by reading an arbitrary amount of data. The data is read in 512 bit (64 byte) blocks (the last block is padded in a specific way to ensure it's a full 64 bytes). Each 64 byte block has 80 "rounds" of calculations (consisting of a mixture of "ROTATE-LEFT", "AND", and "XOR") applied to the block. Each round produces a 32-bit intermediate result, called W[i]. Here's what each round operates: The first 16 rounds, rounds 0 to 15, read the 512 bit block 32 bits at-a-time. These 32 bits is used as input to the round. The remaining rounds, rounds 16 to 79, use the results from the previous rounds as input. Specifically for round i it XORs the results of rounds i-3, i-8, i-14, and i-16 and rotates the result left 1 bit. The remaining calculations for the round is a series of AND, XOR, and ROTATE-LEFT operators on the 32-bit input and some constants. The 32-bit result is saved as W[i] for round i. The 32-bit result of the final round, W[79], is the SHA-1 checksum. Optimization: Vectorization The first 16 rounds can be vectorized (computed in parallel) because they don't depend on the output of a previous round. As for the remaining rounds, because of step 2 above, computing round i depends on the results of round i-3, W[i-3], one can vectorize 3 rounds at-a-time. Max Locktyukhin found through simple factoring, explained in detail in his article referenced below, that the dependencies of round i on the results of rounds i-3, i-8, i-14, and i-16 can be replaced instead with dependencies on the results of rounds i-6, i-16, i-28, and i-32. That is, instead of initializing intermediate result W[i] with: W[i] = (W[i-3] XOR W[i-8] XOR W[i-14] XOR W[i-16]) ROTATE-LEFT 1 Initialize W[i] as follows: W[i] = (W[i-6] XOR W[i-16] XOR W[i-28] XOR W[i-32]) ROTATE-LEFT 2 That means that 6 rounds could be vectorized at once, with no additional calculations, instead of just 3! This optimization is independent of Intel or any other microprocessor architecture, although the microprocessor has to support vectorization to use it, and exploits one of the weaknesses of SHA-1. Optimization: SSSE3 Intel SSSE3 makes use of 16 %xmm registers, each 128 bits wide. 
The 4 32-bit inputs to a round, W[i-6], W[i-16], W[i-28], W[i-32], all fit in one %xmm register. The following code snippet, from Max Locktyukhin's article, converted to ATT assembly syntax, computes 4 rounds in parallel with just a dozen or so SSSE3 instructions: movdqa W_minus_04, W_TMP pxor W_minus_28, W // W equals W[i-32:i-29] before XOR // W = W[i-32:i-29] ^ W[i-28:i-25] palignr $8, W_minus_08, W_TMP // W_TMP = W[i-6:i-3], combined from // W[i-4:i-1] and W[i-8:i-5] vectors pxor W_minus_16, W // W = (W[i-32:i-29] ^ W[i-28:i-25]) ^ W[i-16:i-13] pxor W_TMP, W // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]) movdqa W, W_TMP // 4 dwords in W are rotated left by 2 psrld $30, W // rotate left by 2 W = (W >> 30) | (W << 2) pslld $2, W_TMP por W, W_TMP movdqa W_TMP, W // four new W values W[i:i+3] are now calculated paddd (K_XMM), W_TMP // adding 4 current round's values of K movdqa W_TMP, (WK(i)) // storing for downstream GPR instructions to read A window of the 32 previous results, W[i-1] to W[i-32] is saved in memory on the stack. This is best illustrated with a chart. Without vectorization, computing the rounds is like this (each "R" represents 1 round of SHA-1 computation): RRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRRR With vectorization, 4 rounds can be computed in parallel: RRRRRRRRRRRRRRRRRRRR RRRRRRRRRRRRRRRRRRRR RRRRRRRRRRRRRRRRRRRR RRRRRRRRRRRRRRRRRRRR Optimization: AVX The new "Sandy Bridge" microprocessor architecture, which supports AVX, allows another interesting optimization. SSSE3 instructions have two operands, a input and an output. AVX allows three operands, two inputs and an output. In many cases two SSSE3 instructions can be combined into one AVX instruction. The difference is best illustrated with an example. Consider these two instructions from the snippet above: pxor W_minus_16, W // W = (W[i-32:i-29] ^ W[i-28:i-25]) ^ W[i-16:i-13] pxor W_TMP, W // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]) With AVX they can be combined in one instruction: vpxor W_minus_16, W, W_TMP // W = (W[i-32:i-29] ^ W[i-28:i-25] ^ W[i-16:i-13]) ^ W[i-6:i-3]) This optimization is also in Solaris, although Sandy Bridge-based systems aren't widely available yet. As an exercise for the reader, AVX also has 256-bit media registers, %ymm0 - %ymm15 (a superset of 128-bit %xmm0 - %xmm15). Can %ymm registers be used to parallelize the code even more? Optimization: Solaris-specific In addition to using the Intel code described above, I performed other minor optimizations to the Solaris SHA-1 code: Increased the digest(1) and mac(1) command's buffer size from 4K to 64K, as previously done for decrypt(1) and encrypt(1). This size is well suited for ZFS file systems, but helps for other file systems as well. Optimized encode functions, which byte swap the input and output data, to copy/byte-swap 4 or 8 bytes at-a-time instead of 1 byte-at-a-time. Enhanced the Solaris mdb(1) and kmdb(1) debuggers to display all 16 %xmm and %ymm registers (mdb "$x" command). Previously they only displayed the first 8 that are available in 32-bit mode. Can't optimize if you can't debug :-). Changed the SHA-1 code to allow processing in "chunks" greater than 2 Gigabytes (64-bits) Performance I measured performance on a Sun Ultra 27 (which has a Nehalem-class Xeon 5500 Intel W3570 microprocessor @3.2GHz). Turbo mode is disabled for consistent performance measurement. 
    Graphs are better than words and numbers, so here they are: The first graph shows the Solaris digest(1) command before and after the optimizations discussed here, contained in libmd(3LIB). I ran the digest command on a half GByte file in swapfs (/tmp) and execution time decreased from 1.35 seconds to 0.98 seconds. The second graph shows the results of an internal microbenchmark that uses the Solaris libpkcs11(3LIB) library. The operations are on a 128 byte buffer with 10,000 iterations. The results show that throughput increased from 320,000 to 416,000 operations per second. Finally, the third graph shows the results of an internal kernel microbenchmark that uses the Solaris /kernel/crypto/amd64/sha1 module. The operations are on a 64Kbyte buffer with 100 iterations. The results show that for 1 kernel thread, throughput increased from 410 to 600 MBytes/second. For 8 kernel threads, throughput increased from 1540 to 1940 MBytes/second. Availability This code is in Solaris 11 FCS. It is available in the 64-bit libmd(3LIB) library for 64-bit programs and is in the Solaris kernel. You must be running hardware that supports Intel's SSSE3 instructions (for example, Intel Nehalem, Westmere, or Sandy Bridge microprocessor architectures). The easiest way to determine if SSSE3 is available is with the isainfo(1) command. For example, nehalem $ isainfo -v 64-bit amd64 applications sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu 32-bit i386 applications sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov sep cx8 tsc fpu If the output also shows "avx", Solaris executes the even-more optimized 3-operand AVX instructions for SHA-1 mentioned above: sandybridge $ isainfo -v 64-bit amd64 applications avx xsave pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov amd_sysc cx8 tsc fpu 32-bit i386 applications avx xsave pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2 sse fxsr mmx cmov sep cx8 tsc fpu No special configuration or setup is needed to take advantage of this code. The Solaris libraries and kernel automatically determine whether they are running on SSSE3- or AVX-capable machines and execute the correctly-tuned code for that microprocessor. Summary The Solaris 11 Crypto Framework, via the sha1 kernel module and the libmd(3LIB) and libpkcs11(3LIB) libraries, incorporated a useful SHA-1 optimization from Intel for SSSE3-capable microprocessors. As with other Solaris optimizations, they come automatically "under the hood" with the current Solaris release. References: "Improving the Performance of the Secure Hash Algorithm (SHA-1)" by Max Locktyukhin (Intel, March 2010), the source for these SHA-1 optimizations used in Solaris; "SHA-1", Wikipedia, a good overview of SHA-1; FIPS 180-1, the SHA-1 standard (FIPS, 1995); and NIST Comments on Cryptanalytic Attacks on SHA-1 (2005, revised 2006).
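
    To make the vectorization argument concrete, here is a small standalone Java sketch (not the Solaris or Intel code) that expands one 512-bit block with the standard W[i] recurrence and again with the i-6/i-16/i-28/i-32 form quoted above (applied from round 32 onward, since its lags reach back 32 rounds), then checks that the two schedules agree. The deeper lags are what allow several rounds to be computed side by side.

        public class Sha1ScheduleDemo {
            static int rotl(int x, int n) { return (x << n) | (x >>> (32 - n)); }

            public static void main(String[] args) {
                int[] block = new int[16];                    // one 512-bit block as 16 words
                for (int i = 0; i < 16; i++) block[i] = 0x01234567 * (i + 1);  // arbitrary test data

                // standard message schedule: W[i] = (W[i-3] ^ W[i-8] ^ W[i-14] ^ W[i-16]) rol 1
                int[] w = new int[80];
                System.arraycopy(block, 0, w, 0, 16);
                for (int i = 16; i < 80; i++)
                    w[i] = rotl(w[i - 3] ^ w[i - 8] ^ w[i - 14] ^ w[i - 16], 1);

                // refactored schedule: rounds 32..79 depend only on lags 6, 16, 28 and 32,
                // so consecutive rounds have no mutual dependency and can be vectorized
                int[] v = new int[80];
                System.arraycopy(w, 0, v, 0, 32);             // rounds 0..31 as before
                for (int i = 32; i < 80; i++)
                    v[i] = rotl(v[i - 6] ^ v[i - 16] ^ v[i - 28] ^ v[i - 32], 2);

                for (int i = 0; i < 80; i++)
                    if (w[i] != v[i]) throw new AssertionError("mismatch at round " + i);
                System.out.println("Both message schedules agree for all 80 rounds.");
            }
        }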

    Read the article

  • Abstract exception super type

    - by marcof
    If throwing System.Exception is considered so bad, why wasn't Exception made abstract in the first place? That way, it would not be possible to call: throw new Exception("Error occurred."); This would enforce using derived exceptions to provide more details about the error that occurred. For example, when I want to provide a custom exception hierarchy for a library, I usually declare an abstract base class for my exceptions: public abstract class CustomExceptionBase : Exception { /* some stuff here */ } And then some derived exception with a more specific purpose: public class DerivedCustomException : CustomExceptionBase { /* some more specific stuff here */ } Then when calling any library method, one could have this generic try/catch block to directly catch any error coming from the library: try { /* library calls here */ } catch (CustomExceptionBase ex) { /* exception handling */ } Is this a good practice? Would it be good if Exception were made abstract? EDIT: My point here is that even if an exception class is abstract, you can still catch it in a catch-all block. Making it abstract is only a way to forbid programmers from throwing a "super-wide" exception. Usually, when you voluntarily throw an exception, you should know what type it is and why it happened. Thus it would enforce throwing a more specific exception type.

    Read the article

  • Set up a GUI managed stateful filtering firewall?

    - by Azendale
    What ways are there of setting up a stateful filtering* firewall whose rules can be managed by a GUI? Can GUFW do it? FireStarter? (or should that be avoided because it is supposedly no longer updated?) *By filtering, I mean the traffic I am setting rules up for is not destined for this computer. It is either from or to other computers on my LAN. Say, for (a simplified, hypothetical) example: I have an ethernet connection at my dorm that I have plugged into eth0. It gets an address of 192.168.1.185 and I also have 192.168.185.0/24 routed to me, so I don't have to do any NAT. I have a hub attached to my second ethernet port (eth1) with a few Windows computers and I give addresses out of my 192.168.185.0/24 block with DHCP. How can I use my Ubuntu box to block incoming connections from eth0 that are being routed to my Windows computers and let through just a few specific ports (so fellow students can't see what files my Windows boxes are sharing via SMB)?

    Read the article

  • Testing for Auto Save and Load Game

    - by David Dimalanta
    I'm trying to make a simple app that will test saving and loading state. Is it a good idea to make an app that auto-saves and auto-loads the game, so that when new users open the app they can continue it another day? My test app has a simple moving block sprite, starting at the center coordinate. After I move the sprite to the top by touch and drag, I press the back key to close the app. I expected that once I re-opened the app the block sprite would still be at the top, but instead it goes back to the center. Where can I find more ways to use preferences, or to tell the dispose method to dispose only of specific, unneeded resources but not the important records (fastest time, the last coordinates where the sprite was located, etc.)? Is there really an easy way, or are there no shortcuts, just a most effective way? I need help to expand on these ideas. Thank you. Here are the links I'm trying to figure out how to use: http://www.youtube.com/watch?v=gER5GGQYzGc http://www.badlogicgames.com/wordpress/?p=1585 http://www.youtube.com/watch?v=t0PtLexfBCA&feature=relmfu Note that the links above are code examples, but I'm not looking for code answers; I'm trying to figure out how to start and how to use them. Tell me if I'm wrong.
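
    Since the linked material is libgdx-based, here is a minimal libgdx sketch of the usual approach: the Preferences API auto-saves the sprite position in pause() (which Android calls when the user backs out of the app) and restores it in create(). The preferences file name and key names are just illustrative.

        import com.badlogic.gdx.ApplicationAdapter;
        import com.badlogic.gdx.Gdx;
        import com.badlogic.gdx.Preferences;

        public class BlockSpriteDemo extends ApplicationAdapter {
            private Preferences prefs;
            private float spriteX, spriteY;   // position of the block sprite

            @Override
            public void create() {
                prefs = Gdx.app.getPreferences("block-sprite-state");  // hypothetical file name
                // fall back to the screen centre the very first time the app runs
                spriteX = prefs.getFloat("spriteX", Gdx.graphics.getWidth() / 2f);
                spriteY = prefs.getFloat("spriteY", Gdx.graphics.getHeight() / 2f);
            }

            @Override
            public void pause() {
                // auto-save: called when the back key closes the app (and before dispose())
                prefs.putFloat("spriteX", spriteX);
                prefs.putFloat("spriteY", spriteY);
                prefs.flush();   // nothing is persisted until flush() is called
            }
        }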

    Read the article

  • C# with keyword equivalent

    - by oazabir
    There’s no with keyword in C#, like there is in Visual Basic. So you end up writing code like this: this.StatusProgressBar.IsIndeterminate = false; this.StatusProgressBar.Visibility = Visibility.Visible; this.StatusProgressBar.Minimum = 0; this.StatusProgressBar.Maximum = 100; this.StatusProgressBar.Value = percentage; Here’s a workaround: With.A<ProgressBar>(this.StatusProgressBar, (p) => { p.IsIndeterminate = false; p.Visibility = Visibility.Visible; p.Minimum = 0; p.Maximum = 100; p.Value = percentage; }); It saves you repeatedly typing the same class instance or control name over and over again. It also makes code more readable since it clearly says that you are working with a progress bar control within the block. If you are setting properties of several controls one after another, it’s easier to read such code this way since you will have a dedicated block for each control. It’s a very simple one line function that does it: public static class With { public static void A<T>(T item, Action<T> work) { work(item); } } You could argue that you can just do this: var p = this.StatusProgressBar; p.IsIndeterminate = false; p.Visibility = Visibility.Visible; p.Minimum = 0; p.Maximum = 100; p.Value = percentage; But it’s not elegant. You are introducing a variable “p” in the local scope of the whole function. This goes against naming conventions. Moreover, you can’t limit the scope of “p” to a certain place in the function.

    Read the article

  • Fix corrupt NTFS partition without Windows

    - by Capt.Nemo
    My NTFS partition has gotten corrupt somehow (it's a relic from the days when I had Windows installed). I'm putting the debug output of fdisk and blkid here. At the same time, any OS is unable to mount my root partition, which is located next to my NTFS partition. I'm not sure if this has anything to do with it, though. I get the following error while trying to mount my root partition (sda5): mount: wrong fs type, bad option, bad superblock on /dev/sda5, missing codepage or helper program, or other error In some cases useful info is found in syslog - try dmesg | tail or so ubuntu@ubuntu:~$ dmesg | tail [ 1019.726530] Descriptor sense data with sense descriptors (in hex): [ 1019.726533] 72 03 11 04 00 00 00 0c 00 0a 80 00 00 00 00 00 [ 1019.726551] 1a 3e ed 92 [ 1019.726558] sd 0:0:0:0: [sda] Add. Sense: Unrecovered read error - auto reallocate failed [ 1019.726568] sd 0:0:0:0: [sda] CDB: Read(10): 28 00 1a 3e ed 40 00 01 00 00 [ 1019.726584] end_request: I/O error, dev sda, sector 440331666 [ 1019.726602] JBD: Failed to read block at offset 462 [ 1019.726609] ata1: EH complete [ 1019.726612] JBD: recovery failed [ 1019.726617] EXT4-fs (sda5): error loading journal When I open gparted (using a live CD), I get an exclamation next to my NTFS drive which states Is there a way to run chkdsk without using Windows? My attempt to run fsck results in the following: ubuntu@ubuntu:~$ sudo fsck /dev/sda fsck from util-linux-ng 2.17.2 e2fsck 1.41.14 (22-Dec-2010) fsck.ext2: Superblock invalid, trying backup blocks... fsck.ext2: Bad magic number in super-block while trying to open /dev/sda The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock: e2fsck -b 8193 <device> Update: I was able to fix the NTFS partition by running chkdsk off HBCD, but it seems that the superblock problem still remains. Update 2: Fixed the superblock issue using e2fsck -c /dev/sda5

    Read the article

  • Multi MVC processing vs Single MVC process

    - by lordg
    I've worked fairly extensively with the MVC framework CakePHP; however, I'm finding that I would rather have my pages driven by multiple MVCs than by just one MVC. My reason is primarily to maintain a more DRY approach. In CakePHP MVC: you call a URL which calls a single MVC, which then calls the layout. What I want is: you call a URL, it processes a layout, which then calls multiple MVCs per component/block of HTML on the page. When you compare JavaScript components, AJAX, and server side HTML rendering, it seems the most consistent method for building pages is through blocks of components or HTML views. That way, the view block could be situated either on the server or the client. This is technically my ONLY disagreement with the MVC model. Outside of this, IMHO MVC rocks! My question is: What other RAD frameworks follow the same principles as MVC but are driven rather by the View side of MVC? I've looked at Django and Ruby on Rails, yet they seem to be more Controller driven. Lift/Scala appears to be somewhat of a good fit, but I'm interested to see what others exist.

    Read the article

  • Tips for debugging Samba performance?

    - by j-g-faustus
    Samba gives me 24 MB/s read and 44 MB/s write, while ftp gives 97 and 112 MB/s under the same circumstances. The documentation says that Generally, you should find that Samba performs similarly to ftp at raw transfer speed. In my case it clearly doesn't. Where can I find tips on how to debug Samba performance? Or alternatively tips for replacing Samba with something else? (I can't use ftp, unfortunately, as I need something that can be used with rsync/rsnapshot.) More details: Both computers are running Ubuntu 10.10 (using Samba because I have a Mac as well) The Samba share is on a local home network, mounted as $ mount ... //server.local/share/ on /mnt/share type cifs (rw,mand) Samba performance was tested by copying (cp) a single file of ~4GB to and from the share, using time for timing and calculating transfer speed by hand. ftp performance are the numbers from the ftp client for get/put of the same file. iperf gives network speed ~900 Mbits/s bonnie++ gives disk speeds 200 MB/s on both sides for block reads as well as block writes Tried changing the parameters suggested in the performance tuning HOWTO (read/write raw, read size, socket options), most of them made little to no difference. (The one that made a difference caused write speed to drop 50%.)

    Read the article

  • Checkers AI Algorithm

    - by John
    I am making an AI for my checkers game and I'm trying to make it as hard as possible. Here is the current criteria for a move on the hardest difficulty: 1: Look For A Block: This is when a piece is being threatened and another piece can be moved in behind it to protect it. Here is an example: Black Moves |W| |W| |W| |W| | | |W| |W| |W| |W| |W| | | |W| |W| | | | | |W| | | | | | | | | |B| | | | | |B| | | |B| |B| |B| |B| |B| |B| | | |B| |B| |B| |B| White Blocks |W| |W| |W| |W| | | |W| | | |W| |W| |W| |W| |W| |W| | | | | |W| | | | | | | | | |B| | | | | |B| | | |B| |B| |B| |B| |B| |B| | | |B| |B| |B| |B| 2: Move pieces out of danger: if any piece is being threatened, and a piece cannot block for that piece, then it will attempt to move out of the way. If the piece cannot move out of the way without still being in danger, the computer ignores the piece. 3: If the computer player owns any kings, it will attempt to 'hunt down' enemy pieces on the board, if no moves can be made that won't in danger the king or any other pieces, the computer ignores this rule. 4: Any piece that is owned by the computer that is in column 1 or 6 will attempt to go to a side. When a piece is in column 0 or 7, it is in a very strategic position because it cannot get captured while it is in either of these columns 5: It makes an educated random move, the move will not indanger the piece that is moving or any piece that is on the board. 6: If none of the above are possible it makes a random move. This question is not really specific to any language but if all examples could be in Java that would be great, considering this app is written in android. Does anyone see any room for improvement in this algorithm? Anything that would make it better at playing checkers?

    Read the article

  • Using JuJu with private Openstack cloud deployment?

    - by user76054
    I'm seeing a number of problems trying to use JuJu with our internally deployed Openstack cloud. Most of this appears to be centered around DNS host resolution as well as the need to deal with our company's internal HTTP proxies. Our Openstack deployment relies upon an unroutable 172.16.0.0/12 block of addresses for VLAN allocation to each project (tenant) hosted on our internal cloud. Users have the option of assigning one or more floating addresses to instances, allocated from a block of routable addresses on our internal company LAN. Currently, Openstack doesn't register instance names with anything other than the DNSMASQ service running on the cloud controller. As such, there's no way to resolve this address through our internal DNS hierarchy (this issue has already been reported as Bug #945505). As such, even though I can bootstrap my JuJu server node, I can't connect to it with the JuJu client, since it can't resolve the local (private) network name. I am able to ssh to the node once I've assigned it an internally routable (i.e. floating) address, which leads to the next issue. Next, to install software on an instance running in our cloud, it must have our internal proxy address defined - either in the apt.conf file or via environment variables. Unfortunately, when bootstrapping the server node, there's no provision to pass this info into an instance via the JuJu environment.yaml file (if this is even the best way to handle this issue). As a result, the bootstrap node is unable to install the required packages. I'm assuming (dangerous, I know) that the way that I've deployed Openstack in our internal environment is probably not unique. Has anyone else encountered these issues? And more importantly, are workarounds available? Regards, Ross

    Read the article

  • How to teach Exception Handling for New Programmers?

    - by Kanini
    How do you go about teaching Exception Handling to Programmers. All other things are taught easily - Data Structures, ASP.NET, WinForms, WPF, WCF - you name it, everything can be taught easily. With Exception Handling, teaching them try-catch-finally is just the syntactic nature of Exception Handling. What should be taught however is - What part of your code do you put in the try block? What do you do in the catch block? Let me illustrate it with an example. You are working on a Windows Forms Project (a small utility) and you have designed it as below with 3 different projects. UILayer BusinessLayer DataLayer If an Exception (let us say of loading an XDocument throws an exception) is raised at DataLayer (the UILayer calls BusinessLayer which in turns calls the DataLayer), do you just do the following //In DataLayer try { XDocument xd_XmlDocument = XDocument.Load("systems.xml"); } catch(Exception ex) { throw ex; } which gets thrown again in the BusinessLayer and which is caught in UILayer where I write it to the log file? Is this how you go about Exception Handling?

    Read the article

  • Adding Vertices to a dynamic mesh via Method Call

    - by Raven Dreamer
    I have a C# Struct with a static method, "Get Shape" which populates a List with the vertices of a polyhedron. Method Signature: public static void GetShape(Block b, int x, int y, int z, List<Vector3> vertices, List<int> triangles, List<Vector2> uvs, List<Vector2> uv2s) Adding directly to the vertices list (via vertices.Add(vector3) ), the code works as expected, and the new polyhedron appears when I trigger the method. However, I want to do some processing on the vertices I'm adding (a rotation), and the most sensible way I can think to do that is by creating a separate list of Vector3s, and then combining the lists when I'm done. However, vertices.AddRange(newVerts) does not add the shape to the mesh, nor does a foreach loop with verts.Add(vertices[i]). And this is before I've added in any of the processing! I have a feeling this might stem from passing the list of vertices in as a parameter, rather than returning a list and then adding to the vertices in the calling object, but since I'm filling 4 lists, I was trying to avoid having to create a data struct to return all four at once. Any ideas? The working version of the method is reprinted below, in full: public static void GetShape(Block b, int x, int y, int z, List<Vector3> vertices, List<int> triangles, List<Vector2> uvs, List<Vector2> uv2s) { //List<Vector3> vertices = new List<Vector3>(); int l_blockShape = b.blockShape; int l_blockType = b.blockType; //CheckFace checks if the block is empty //if this block is empty, don't draw anything. int vertexIndex; //only y faces need to be hidden. //if((l_blockShape & BlockShape.NegZFace) == BlockShape.NegZFace) { vertexIndex = vertices.Count; //top left, top right, bottom right, bottom left vertices.Add(new Vector3(x+.2f, y + 1, z+.2f)); vertices.Add(new Vector3(x+.8f, y + 1, z+.2f)); vertices.Add(new Vector3(x+.8f, y , z+.2f)); vertices.Add(new Vector3(x+.2f, y , z+.2f)); // first triangle for the face triangles.Add(vertexIndex); triangles.Add(vertexIndex+1); triangles.Add(vertexIndex+3); // second triangle for the face triangles.Add(vertexIndex+1); triangles.Add(vertexIndex+2); triangles.Add(vertexIndex+3); //UVs for the face uvs.Add( new Vector2(0,1)); uvs.Add( new Vector2(1,1)); uvs.Add( new Vector2(1,0)); uvs.Add( new Vector2(0,0)); //UV2s (lightmapping?) uv2s.Add( new Vector2(0,1)); uv2s.Add( new Vector2(1,1)); uv2s.Add( new Vector2(1,0)); uv2s.Add( new Vector2(0,0)); } //XY Z+1 face //if((l_blockShape & BlockShape.PosZFace) == BlockShape.PosZFace) { vertexIndex = vertices.Count; //top left, top right, bottom right, bottom left vertices.Add(new Vector3(x+.8f, y + 1, z+.8f)); vertices.Add(new Vector3(x+.2f, y + 1, z+.8f)); vertices.Add(new Vector3(x+.2f, y , z+.8f)); vertices.Add(new Vector3(x+.8f, y , z+.8f)); // first triangle for the face triangles.Add(vertexIndex); triangles.Add(vertexIndex+1); triangles.Add(vertexIndex+3); // second triangle for the face triangles.Add(vertexIndex+1); triangles.Add(vertexIndex+2); triangles.Add(vertexIndex+3); //UVs for the face uvs.Add( new Vector2(0,1)); uvs.Add( new Vector2(1,1)); uvs.Add( new Vector2(1,0)); uvs.Add( new Vector2(0,0)); //UV2s (lightmapping?) 
uv2s.Add( new Vector2(0,1)); uv2s.Add( new Vector2(1,1)); uv2s.Add( new Vector2(1,0)); uv2s.Add( new Vector2(0,0)); } //ZY face //if((l_blockShape & BlockShape.NegXFace) == BlockShape.NegXFace) { vertexIndex = vertices.Count; //top left, top right, bottom right, bottom left vertices.Add(new Vector3(x+.2f, y + 1, z+.8f)); vertices.Add(new Vector3(x+.2f, y + 1, z+.2f)); vertices.Add(new Vector3(x+.2f, y , z+.2f)); vertices.Add(new Vector3(x+.2f, y , z+.8f)); // first triangle for the face triangles.Add(vertexIndex); triangles.Add(vertexIndex+1); triangles.Add(vertexIndex+3); // second triangle for the face triangles.Add(vertexIndex+1); triangles.Add(vertexIndex+2); triangles.Add(vertexIndex+3); //UVs for the face uvs.Add( new Vector2(0,1)); uvs.Add( new Vector2(1,1)); uvs.Add( new Vector2(1,0)); uvs.Add( new Vector2(0,0)); //UV2s (lightmapping?) uv2s.Add( new Vector2(0,1)); uv2s.Add( new Vector2(1,1)); uv2s.Add( new Vector2(1,0)); uv2s.Add( new Vector2(0,0)); } //ZY X+1 face // if((l_blockShape & BlockShape.PosXFace) == BlockShape.PosXFace) { vertexIndex = vertices.Count; //top left, top right, bottom right, bottom left vertices.Add(new Vector3(x+.8f, y + 1, z+.2f)); vertices.Add(new Vector3(x+.8f, y + 1, z+.8f)); vertices.Add(new Vector3(x+.8f, y , z+.8f)); vertices.Add(new Vector3(x+.8f, y , z+.2f)); // first triangle for the face triangles.Add(vertexIndex); triangles.Add(vertexIndex+1); triangles.Add(vertexIndex+3); // second triangle for the face triangles.Add(vertexIndex+1); triangles.Add(vertexIndex+2); triangles.Add(vertexIndex+3); //UVs for the face uvs.Add( new Vector2(0,1)); uvs.Add( new Vector2(1,1)); uvs.Add( new Vector2(1,0)); uvs.Add( new Vector2(0,0)); //UV2s (lightmapping?) uv2s.Add( new Vector2(0,1)); uv2s.Add( new Vector2(1,1)); uv2s.Add( new Vector2(1,0)); uv2s.Add( new Vector2(0,0)); } //ZX face if((l_blockShape & BlockShape.NegYFace) == BlockShape.NegYFace) { vertexIndex = vertices.Count; //top left, top right, bottom right, bottom left vertices.Add(new Vector3(x+.8f, y , z+.8f)); vertices.Add(new Vector3(x+.8f, y , z+.2f)); vertices.Add(new Vector3(x+.2f, y , z+.2f)); vertices.Add(new Vector3(x+.2f, y , z+.8f)); // first triangle for the face triangles.Add(vertexIndex+3); triangles.Add(vertexIndex+1); triangles.Add(vertexIndex); // second triangle for the face triangles.Add(vertexIndex+3); triangles.Add(vertexIndex+2); triangles.Add(vertexIndex+1); //UVs for the face uvs.Add( new Vector2(0,1)); uvs.Add( new Vector2(1,1)); uvs.Add( new Vector2(1,0)); uvs.Add( new Vector2(0,0)); //UV2s (lightmapping?) uv2s.Add( new Vector2(0,1)); uv2s.Add( new Vector2(1,1)); uv2s.Add( new Vector2(1,0)); uv2s.Add( new Vector2(0,0)); } //ZX + 1 face if((l_blockShape & BlockShape.PosYFace) == BlockShape.PosYFace) { vertexIndex = vertices.Count; //top left, top right, bottom right, bottom left vertices.Add(new Vector3(x+.8f, y+1 , z+.2f)); vertices.Add(new Vector3(x+.8f, y+1 , z+.8f)); vertices.Add(new Vector3(x+.2f, y+1 , z+.8f)); vertices.Add(new Vector3(x+.2f, y+1 , z+.2f)); // first triangle for the face triangles.Add(vertexIndex+3); triangles.Add(vertexIndex+1); triangles.Add(vertexIndex); // second triangle for the face triangles.Add(vertexIndex+3); triangles.Add(vertexIndex+2); triangles.Add(vertexIndex+1); //UVs for the face uvs.Add( new Vector2(0,1)); uvs.Add( new Vector2(1,1)); uvs.Add( new Vector2(1,0)); uvs.Add( new Vector2(0,0)); //UV2s (lightmapping?) uv2s.Add( new Vector2(0,1)); uv2s.Add( new Vector2(1,1)); uv2s.Add( new Vector2(1,0)); uv2s.Add( new Vector2(0,0)); } }

    Read the article
