Search Results

Search found 20447 results on 818 pages for 'f5 big ip'.

Page 5 of 818

  • Issue with setting up multiple IP addresses on Ubuntu server installation

    - by varunyellina
    I want to set up two IP addresses on my system for access over the LAN. My desktop installation runs fine with multiple IPs added through NetworkManager, both over LAN and Wi-Fi. On my server installation I've edited /etc/network/interfaces to the following:

        auto eth0
        auto eth0:1

        # IP-1
        iface eth0 inet static
        address 172.16.35.35
        network 172.16.34.1
        netmask 255.255.254.0
        broadcast 172.166.35.255
        dns-nameservers 172.16.100.221 8.8.8.8

        # IP-2
        iface eth0:1 inet static
        address 172.16.34.34
        network 172.16.34.1
        netmask 255.255.254.0
        gateway 172.16.34.1
        broadcast 172.16.35.255

    After restarting with "/etc/init.d/networking restart" I receive "Failed to bring up eth0:1". What am I doing wrong? Thank you.
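
    A minimal /etc/network/interfaces sketch for this kind of eth0 alias setup, reusing the addresses from the question. The likely culprits in the config above are the mistyped broadcast address (172.166.x.x instead of 172.16.x.x) and the network lines: with a 255.255.254.0 mask the network address for these hosts is 172.16.34.0, not 172.16.34.1. Treat this as a sketch only; the gateway and DNS values are carried over unchanged from the question.

        # /etc/network/interfaces (sketch)
        auto eth0
        iface eth0 inet static
            address 172.16.35.35
            netmask 255.255.254.0
            network 172.16.34.0
            broadcast 172.16.35.255
            gateway 172.16.34.1
            dns-nameservers 172.16.100.221 8.8.8.8

        auto eth0:1
        iface eth0:1 inet static
            address 172.16.34.34
            netmask 255.255.254.0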

    Read the article

  • Is it unwise to blacklist an IP address?

    - by hawbsl
    We have a form on a commercial website which has been abused (but only once or twice) by someone from a particular IP address. A colleague wants to blacklist that IP address from the website. Seems to me that's overkill, and that there's a risk that genuine customers sharing that same IP address would be blacklisted too. I suppose a big part of my question is how many people might be sharing that same IP address and could be affected by our blacklist. I suspect that's a "how long's a piece of string" question but some ballpark answer would be really helpful. We're in the UK if that's significant.

    Read the article

  • Ubuntu 12.04 x64 LTS VPN Server not changing IP

    - by user288778
    I used this guide http://silverlinux.blogspot.co.uk/2012/05/how-to-pptp-vpn-on-ubuntu-1204-pptpd.html and it worked fine. I'm able to connect, but the problem is that my IP is being changed to the "localip", not the "remote ip". This is what I get from tail -f /var/log/syslog:
    [code]
    June 6 00:09:19 instant5860 NetworkManager[1456]: Unmanaged Device found; state CONNECTED forced (see http://bugs.launchpad.net/bugs/191889)
    June 6 00:09:19 instant5860 NetworkManager[1456]: Marking connection 'Wired connection 1' invalid.
    June 6 00:09:19 instant5860 NetworkManager[1456]: Activation (eth1) failed.
    June 6 00:09:19 instant5860 NetworkManager[1456]: Activation (eth1) Stage 4 of 5 (IPv4 Configure Timeout) complete.
    June 6 00:09:19 instant5860 NetworkManager[1456]: (eth1): device state change: failed - disconnected (reason 'none') [120 30 0]
    June 6 00:09:19 instant5860 NetworkManager[1456]: (eth1): deactivating device (reason 'none') [0]
    June 6 00:09:19 instant5860 NetworkManager[1456]: Unmanaged Device found; state CONNECTED forced.
    June----- avahi-daemon[440]: Withdrawing address record for fe80......... on eth1
    Jun------avahi-daemon[440]: Leaving mDNS multicast group on interface eth1.IPv6 with address fe80.....
    Jun------avahi-daemon[440]: Interface eth1.IPv6 no longer relevant for mDNS.
    Jun------avahi-daemon[440]: Joining mDNS multicast group on interface eth1.IPv6 with address fe80....
    Jun------avahi-daemon[440]: New relevant interface eth1.IPv6 for mDNS
    Jun------avahi-daemon[440]: Registering new address record for fe80..... on eth1.*.
    Jun - snmpd[1172]: error on subcontainer 'ia_addr' insert (-1)
    dbus[382]: [system] Activating service name='org.freedesktop.PackageKit' (using servicehelper)
    AptDaemon: INFO: Initializing daemon
    AptDaemon.PackageKit: INFO: Initializing PackageKit compat layer
    dbus[382]: [system] Successfully activated service 'org.freedesktop.PackageKit'
    AptDaemon.PackageKit: INFO: Initializing PackageKit transaction
    AptDaemon.Worker: INFO: Simulating trans: /org/debian/apt/transaction/233beca013a0473ea34d9dea805af5df
    AptDaemon.Worker: INFO: Processing transaction /org/debian/apt...
    AptDaemon.PackageKit: INFO: Get updates()
    AptDaemon.Worker: INFO: Finished
    snmpd[1172]: error on subcontainer
    pptpd[23611]: CTRL: Client 82.33.... control connection started
    pptpd[23611]: CTRL: Starting call (launching pppd, opening GRE)
    pptpd[23611]: pppd 2.4.5 started by root uid 0
    pptpd[23611]: Using interface ppp0
    pptpd[23611]: Connect: ppp0 <--> /dev/pts/1
    NetworkManager[1456]: SCPlugin-Ifupdown: device added (path: /sys/devices/virtual/net/ppp0, iface: ppp0)
    NetworkManager[1456]: SCPlugin-Ifupdown: device added (path: /sys/devices/virtual/net/ppp0, iface: ppp0): no ifupdown configuration found.
    pptpd[23612]: peer from calling number 82... authorized.
    kernel: [2918261.416923] init: ufw pre-start process (23613) terminated with status 1
    dhclient: DHCPDISCOVER on eth1 to 255.255.255.255 port 67 interval 7
    CTRL: Ignored a SET LINK info packet with real ACCMs!
    local IP address: 109.0.121.197
    remote IP address: 109.0.84.56
    dhclient: DHCPDISCOVER on eth1 to 255.255.255.255 port 67 interval 13
    NetworkManager[1456]: (eth1): DHCPv4 request timed out.
    NetworkManager[1456]: (eth1): canceled DHCP transaction, DHCP client pid 23280
    NetworkManager[1456]: Activation (eth1) Stage 4 of 5 (IPv4 Configure Timeout) scheduled...
    NetworkManager[1456]: Activation (eth1) Stage 4 of 5 (IPv4 Configure Timeout) started...
    NetworkManager[1456]: (eth1): device state change: ip-config -> failed (reason 'ip-config-unavailable') [70 120 5]
    NetworkManager[1456]: Unmanaged 'ia_addr' insert (-1)
    [/code]
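
    The two tunnel addresses near the end of that log come straight from /etc/pptpd.conf: "localip" is the server's end of the tunnel and "remoteip" is the pool handed out to connecting clients. What outside sites report as your address after connecting is decided separately by the NAT rule on the server; a typical MASQUERADE rule rewrites client traffic to the address of the server's outbound interface, which is why the visible address is the server's own rather than the remoteip. A hedged sketch of both pieces, with eth1 taken from the log above and the tunnel addresses as placeholders:

        # /etc/pptpd.conf (sketch)
        localip 10.0.0.1
        remoteip 10.0.0.100-200

        # NAT for the VPN clients: outbound traffic leaves with eth1's own address.
        iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE

        # To present one specific source address instead (it must actually be routed
        # to this server), SNAT can replace MASQUERADE:
        # iptables -t nat -A POSTROUTING -o eth1 -j SNAT --to-source 203.0.113.5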

    Read the article

  • Restricting access to a website by IP address range / domain

    - by test
    Hi, I would like advice on the best way to restrict access to a web application (using .NET 2.0 and IIS 6) based on the client's IP address. The two ways I am considering: 1) through server-side code, checking the client IP against a list of IP addresses within the web.config; 2) through IIS, by creating a virtual directory and restricting the IP addresses on that virtual directory. My question concerns the virtual directory route: a lot of users access this website, and I have read that reverse domain lookups made during each client request can be very expensive on server resources. How much of a risk is this? Any other suggestions/ideas for doing this would be much appreciated. Thanks in advance,

    Read the article

  • Identical spam coming from many different (but similar) IP addresses

    - by DisgruntledGoat
    A forum I run has been the victim of spam user accounts recently - several accounts that have been registered and their profiles filled with advertising/links. All of this is for the same company, or group of companies. I deleted several accounts weeks ago and blocked some IP addresses, but today they have come back with the same spam. Every account has a different IP address, but they are all of the form 122.179.*.* or 122.169.*.*. I am considering blocking those two IP ranges, but there are potentially thousands of IPs in that range. They appear to be assigned to India (although the spam is for an American company), so given the site is for a western, English-speaking audience maybe it doesn't matter. My questions: How are they posting from so many IPs? Is there likely to be a limit to the number of IPs they have access to? Is there anything else I can do at the IP level to block them? (I am looking into other measures like blocking usernames/links.)
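
    If the blocking ends up being done in application code, the two ranges above can be treated as CIDR networks instead of thousands of individual addresses. A small sketch using Python's standard ipaddress module, with 122.179.*.* and 122.169.*.* written as /16 networks (whether /16 is the right width is an assumption; a whois lookup on the addresses would show the actual allocations):

        import ipaddress

        # The two ranges seen on the spam accounts, expressed as /16 networks.
        BLOCKED_NETWORKS = [
            ipaddress.ip_network("122.179.0.0/16"),
            ipaddress.ip_network("122.169.0.0/16"),
        ]

        def is_blocked(client_ip):
            """Return True if the client address falls inside any blocked range."""
            ip = ipaddress.ip_address(client_ip)
            return any(ip in network for network in BLOCKED_NETWORKS)

        print(is_blocked("122.179.45.6"))   # True
        print(is_blocked("203.0.113.7"))    # False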

    Read the article

  • backlinks: Two domains with same IP

    - by Michal
    I run several different web pages on different servers (with different IP addresses). These pages link to each other in order to boost the number of backlinks pointing to my pages. I would like to move all those projects to a single virtual host (with a single IP address). My question is: how does Google handle links between different domain names on the same IP address? Is there some penalization for it? Could this lead to a lower PageRank?

    Read the article

  • IIS - IP Address and Domain Name Restrictions - not blocking IP addresses

    - by Funky
    I have added an IP address in IIS 7 under IP Address and Domain Restrictions. From what I have read, this should block all traffic to the folder apart from the allowed IP address. For some reason this does not work: if I access the section from my work computer everything is OK, but when I access it from my phone I can still see the page. Does anyone have any idea why IIS is not blocking out all the other IPs? Thanks
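
    One common cause of this behaviour in IIS 7 is that only an "allow" entry was added while unspecified clients are still allowed by default, so the allow entry changes nothing. A hedged web.config sketch for the restricted folder, assuming the IP and Domain Restrictions role service is installed and the section is unlocked for delegation; 203.0.113.10 stands in for the work IP:

        <?xml version="1.0" encoding="UTF-8"?>
        <configuration>
          <system.webServer>
            <security>
              <!-- Deny everyone not explicitly listed, then allow one address. -->
              <ipSecurity allowUnlisted="false">
                <add ipAddress="203.0.113.10" allowed="true" />
              </ipSecurity>
            </security>
          </system.webServer>
        </configuration>

    The same setting is reachable in IIS Manager via "Edit Feature Settings" on the IP Address and Domain Restrictions page, where "Access for unspecified clients" should be set to Deny.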

    Read the article

  • Static IP settings on Windows 2003 server not getting saved

    - by Prashant Mandhare
    We have a Dell PowerEdge 1950 server with a Broadcom NetXtreme gigabit Ethernet card, and we are facing a strange problem with static IP assignment. When we assign a static IP to this Broadcom NIC, the settings are not getting saved. These are the steps to reproduce the problem:
    1. Open the TCP/IP properties window for the Broadcom NIC.
    2. Manually enter the static IP address and other details like gateway, DNS, etc.
    3. Apply and close the properties dialog.
    4. Re-open the TCP/IP properties window: the static IP settings are lost and the option has changed back to "Obtain an IP address automatically". When checked using the ipconfig command you will still see the same static IP settings, but when checked using ipconfig after rebooting the server, the static IP settings are completely gone and an automatically obtained IP is assigned.
    Supplementary information: we recently formatted this server and installed Windows 2003 from an OEM Windows setup CD (not from the OS installation CD received from Dell). After the Windows installation was over, the Broadcom NIC drivers were installed.

    Read the article

  • Live from ODTUG - Big Data and SQL session #2

    - by Jean-Pierre Dijcks
    Sitting in Dominic Delmolino's session at ODTUG (KScope 12). If the session count at conferences is any indication, then we will see more and more people start to deploy MapReduce in the database. And yes, that would be with SQL and PL/SQL first and foremost. Both Dominic and our own Bryn Llewellyn are giving MapReduce-in-the-database presentations. Since I have seen both, I would advise people to first look through Dominic's session to get a good grasp on what mappers do and what reducers do, then dive into Bryn's for a bunch of PL/SQL examples. The thing I like about Dominic's is the last slide (a recursive WITH statement) to do this in SQL... Now I am hoping that next year we will see tool vendors show off how they work with Hadoop and MapReduce (at least talking about the concepts!!).

    Read the article

  • Why Oracle Data Integrator for Big Data?

    - by Mala Narasimharajan
    Big Data is everywhere these days - but what exactly is it? It’s data that comes from a multitude of sources – not only structured data, but unstructured data as well. The sheer volume of data is mind-boggling – here are a few examples of big data: climate information collected from sensors, social media information, digital pictures, log files, online video files, medical records or online transaction records. These are just a few examples of what constitutes big data. Embedded in big data is tremendous value, and being able to manipulate, load, transform and analyze big data is key to enhancing productivity and competitiveness. The value of big data lies in its propensity for greater in-depth analysis and data segmentation -- in turn giving companies detailed information on product performance, customer preferences and inventory. Furthermore, by being able to store and create more data in digital form, “big data can unlock significant value by making information transparent and usable at much higher frequency.” (McKinsey Global Institute, May 2011) Oracle's flagship product for bulk data movement and transformation, Oracle Data Integrator, is a critical component of Oracle’s Big Data strategy. ODI provides automation, bulk loading, and validation and transformation capabilities for Big Data while minimizing the complexities of using Hadoop. Specifically, the advantages of ODI in a Big Data scenario are due to pre-built Knowledge Modules that drive processing in Hadoop. This leverages the graphical UI to load and unload data from Hadoop, perform data validations and create mapping expressions for transformations. The Knowledge Modules provide a key jump-start and eliminate a significant amount of Hadoop development. Using Oracle Data Integrator together with Oracle Big Data Connectors, you can not only simplify the complexities of mapping, accessing, and loading big data (via NoSQL or HDFS) but also correlate it with your enterprise data – this correlation may require integrating across heterogeneous and standards-based environments, connecting to Oracle Exadata, or sourcing via a big data platform such as Oracle Big Data Appliance. To learn more about Oracle Data Integration and Big Data, download our resource kit to see the latest in whitepapers, webinars, downloads, and more… or go to our website at www.oracle.com/bigdata

    Read the article

  • What is the Big-O time complexity of this algorithm

    - by grebwerd
    I was wondering what the run time of this small program would be.

        #include <stdio.h>
        #include <stdlib.h>   /* for atoi */

        int main(int argc, char* argv[]) {
            int i;
            int j;
            int inputSize;
            int sum = 0;

            if (argc == 1)
                inputSize = 16;
            else
                inputSize = atoi(argv[1]);   /* argv[1]; the original argv[i] read an uninitialized index */

            for (i = 1; i <= inputSize; i++) {
                for (j = i; j < inputSize; j *= 2) {
                    printf("The value of sum is %d\n", ++sum);
                }
            }
            return 0;
        }

    I make the total count

        sum from i = 1 to n of floor(log n - log(n - i)) = ?

    that is, each term of the summation would be the floor of log(n) - log(n - i). Would the run time be n log n?
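
    One way to check a guessed bound is to count the inner-loop iterations empirically and compare them with n log n. A small sketch (in Python rather than C, purely to keep it short) that mirrors the two loops above:

        import math

        def count_iterations(n):
            """Count how many times the inner printf would execute for inputSize n."""
            count = 0
            for i in range(1, n + 1):        # for (i = 1; i <= n; i++)
                j = i
                while j < n:                 # for (j = i; j < n; j *= 2)
                    count += 1
                    j *= 2
            return count

        for n in (16, 256, 4096, 65536):
            print(n, count_iterations(n), round(n * math.log2(n)))

    Eyeballing the two right-hand columns shows whether the count actually tracks n log n or something smaller.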

    Read the article

  • How can I make outbound requests from two servers that appear to come from the same IP address

    - by Brad
    I am making calls from an ec2 instance to a third party web service (over which I have no control). I would like to be able to scale horizontally, so that I can make these calls from multiple ec2 instances, but the web service I'm calling whitelists my IP, and for the sake of discussion let's assume I can't get another IP address whitelisted. How can I send requests from 2+ machines that appear to the web service to be from the same IP address? Thanks!
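
    The usual pattern is to funnel every outbound call from the fleet through one box that holds the whitelisted address, either a NAT instance that the other instances route through or an explicit forward proxy such as Squid running on the whitelisted host. A minimal sketch with the Python requests library, assuming a hypothetical proxy reachable at proxy.internal:3128:

        import requests

        # Hypothetical forward proxy running on the single whitelisted host.
        PROXIES = {
            "http": "http://proxy.internal:3128",
            "https": "http://proxy.internal:3128",
        }

        def call_service(url):
            """Every instance calls out through the same proxy, so the remote
            service sees one source address regardless of which instance ran this."""
            response = requests.get(url, proxies=PROXIES, timeout=10)
            response.raise_for_status()
            return response.text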

    Read the article

  • Ubuntu 12.04 Network Manager: unable to save manual setting to set up a static IP

    - by Andy
    I am fairly familiar with setting up servers, and ubuntu is generally my flavor of choice, but I just installed 12.04 desktop and I am seeing some behavior in network manager that is really puzzling. The network connection works fine if I leave it set on dhcp, but I would like a static IP address for my new web server. When I go into network manager and edit the connection for the one and only NIC I can select MANUAL from the dropdown menu but as soon as I do the Save button becomes greyed out. Even after filling out all fields for the connection it is still grey and I am unable to save the static IP connection information. Any thoughts would be greatly appreciated. I'm hoping there is just some new setting that I am unaware of.... On another note, if I stop the network manager and go edit the interfaces file (and the appropriate hosts/routes/dns files), I do get a static ip address assigned and I can contact my server from the outside, however, the server cannot find the internet. Can't ping even its own ip... I can ping the loopback interface though. I'm really confused on this one. Hoping someone can offer some help.

    Read the article

  • F5 Networks iRule/Tcl - Escaping UNICODE 6-character escape sequences so they are processed as and r

    - by openid.malcolmgin.com
    We are trying to get an F5 BIG-IP LTM iRule working properly with SharePoint 2007 in an SSL termination role. This architecture offloads all of the SSL processing to the F5 and the F5 forwards interactive requests/responses to the SharePoint front end servers via HTTP only (over a secure network). For the purposes of this discussion, iRules are parsed by a Tcl interpretation engine on the F5 Networks BIG-IP device. As such, the F5 does two things to traffic passing through it: Redirects any request to port 80 (HTTP) to port 443 (HTTPS) through HTTP 302 redirects and URL rewriting. Rewrites any response to the browser to selectively rewrite URLs embedded within the HTML so that they go to port 443 (HTTPS). This prevents the 302 redirects from breaking DHTML generated by SharePoint. We've got part 1 working fine. The main problem with part 2 is that in the response rewrite because of XML namespaces and other similar issues, not ALL matches for "http:" can be changed to "https:". Some have to remain "http:". Additionally, some of the "http:" URLs are difficult in that they live in SharePoint-generated JavaScript and their slashes (i.e. "/") are actually represented in the HTML by the UNICODE 6-character string, "\u002f". For example, in the case of these tricky ones, the literal string in the outgoing HTML is: http:\u002f\u002fservername.company.com\u002f And should be changed to: https:\u002f\u002fservername.company.com\u002f Currently we can't even figure out how to get a match in a search/replace expression on these UNICODE sequence string literals. It seems that no matter how we slice it, the Tcl interpreter is interpreting the "\u002f" string into the "/" translation before it does anything else. We've tried various combinations of Tcl escaping methods we know about (mainly double-quotes and using an extra "\" to escape the "\" in the UNICODE string) but are looking for more methods, preferably ones that work. Does anyone have any ideas or any pointers to where we can effectively self-educate about this? Thanks very much in advance.

    Read the article

  • NRF Week - Disney Store Tour

    - by sarah.taylor(at)oracle.com
    Disney has created a real buzz at this year's NRF event. Yesterday morning we began the Oracle Retail Exchange program with a visit to the flagship Disney store in Times Square. Additionally, Oracle made a key announcement with Disney on Oracle Retail's Point of Sale implementation in 330 stores worldwide. Today Disney's Steve Finney gave a super session on The Magic of Disney at the NRF Big Show. We also saw Disney making an exclusive news announcement about their plans for global store openings at the Oracle trade show stand - with a little help from Mickey and Minnie Mouse. Disney Stores have been entirely reinvented since the company took ownership in 2008 after previously franchising the retail arm of the business. They have subsequently been a strong Oracle partner, and technology has played a key role in their re-imagination of the store environment. The new Imagination stores have a 20% higher footfall and margins are up 25%. The Disney brand is synonymous with magical and memorable experiences for children of all ages. The company is achieving a unique retail experience that delights children and shareholders alike! Technology is a key pillar in helping to deliver on both a strong operating model and a unique customer experience - the best thirty minutes in a child's day is their aim. Steve Finney this morning said their technology has to be as reliable as a theme park ride. Store experiences are much more enjoyable when there are short waiting times and children can interact with their favourite characters through magic mirrors, mobile point of sale, touch screens and custom animations that are digitally transmitted to stores globally. The Oracle Retail Point of Sale with iPad touch screens reduces checkout times, stores customer data, ensures that promotions are delivered accurately and reduces losses. This means higher levels of guest conversion, increased availability and convenience for customers who want to check availability at other locations. Disney is a pioneer. At NRF's 100th show, we had the privilege of learning from a retailer using technology as a creative force to drive their business forward.

    Read the article

  • Logging IP address for uniqueness without storing the IP address itself for privacy

    - by szabgab
    In a web application, when logging some data, I'd like to make sure I can identify data that came at different times but from the same IP address. On the other hand, for privacy reasons, as the data will be released publicly, I'd like to make sure the actual IP cannot be retrieved. So I need a one-way mapping of the IP addresses to some other strings that ensures a 1-to-1 mapping. If I understand correctly, then MD5, SHA1 or SHA256 could be a solution. I wonder if they are too expensive in terms of processing? I'd be interested in any solution, though if there is an implementation in Perl that would be even better.
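
    One caveat with a bare MD5/SHA1/SHA256 of the address: there are only about four billion IPv4 addresses, so anyone holding the published data can hash them all and reverse the mapping. Hashing with a secret key that never leaves the server keeps the 1-to-1 property while blocking that brute force, and the cost per request is negligible next to the web request itself. A sketch in Python; the question asks about Perl, where Digest::SHA and Digest::HMAC provide the same primitives:

        import hashlib
        import hmac

        # Secret key: keep it out of the released data, and rotate it if the
        # pseudonyms should stop being linkable across releases.
        SECRET_KEY = b"replace-with-a-long-random-secret"

        def pseudonymize_ip(ip_string):
            """Deterministically map an IP to an opaque token without storing the IP."""
            return hmac.new(SECRET_KEY, ip_string.encode("ascii"), hashlib.sha256).hexdigest()

        token = pseudonymize_ip("192.0.2.44")
        print(token)
        print(token == pseudonymize_ip("192.0.2.44"))  # same address, same token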

    Read the article

  • Know a good IP address Geolocation Service?

    - by carrier
    I'd like to get your impressions of any IP geolocation (as in IP-to-location) services you may have employed. I'm looking for something free or cheap, which shouldn't be unrealistic because I need to make a very small volume of requests. Anything with Python bindings would be especially ideal.
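
    For a small request volume, one option is a local copy of MaxMind's free GeoLite database rather than a hosted API, which removes per-request cost entirely. A sketch using the geoip2 Python package; the package name, attribute names and the GeoLite2-City.mmdb path are assumptions to check against the library's documentation:

        import geoip2.database  # pip install geoip2; the .mmdb file is downloaded separately

        def locate(ip_string, db_path="GeoLite2-City.mmdb"):
            """Look up country and city for an address in a local GeoLite2 database."""
            reader = geoip2.database.Reader(db_path)
            try:
                record = reader.city(ip_string)
                return record.country.iso_code, record.city.name
            finally:
                reader.close()

        print(locate("128.101.101.101"))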

    Read the article

  • Location-detecting techniques for IP addresses

    - by ilhan
    What are the location-detecting techniques for IP addresses? I know of:
    $_SERVER['HTTP_ACCEPT_LANGUAGE'] (not accurate, but mostly useful for detecting location; for example, if an IP range's users set French in their browser, then that range probably belongs to France)
    gethostbyaddr($_SERVER['REMOTE_ADDR']) (to look at the country-code top-level domain), and then maybe a whois on gethostbyaddr($_SERVER['REMOTE_ADDR'])
    sometimes $HTTP_USER_AGENT (Firefox's user agent string has a language code; not accurate, but mostly it can be used to detect the location)
    But what about cities?

    Read the article

  • What is a "Public DNS registered IP address"?

    - by Emma
    I've been reading this ICANN agreement on new TLDs and it has this section on DNS service availability, which I don't completely understand:

    "Refers to the ability of the group of listed-as-authoritative name servers of a particular domain name (e.g., a TLD), to answer DNS queries from DNS probes. For the service to be considered available at a particular moment, at least, two of the delegated name servers registered in the DNS must have successful results from “DNS tests” to each of their public-DNS registered “IP addresses” to which the name server resolves. If 51% or more of the DNS testing probes see the service as unavailable during a given time, the DNS service will be considered unavailable."

    I'm not 100% sure of the part in bold: does "public" refer to the DNS or to the IP addresses? It looks like there's a mistake and that the hyphen should have been after "DNS". So basically, does it mean "the public IP addresses registered in the DNS"?

    Read the article

  • Adding IP address to OpenVZ VPS (OpenVZ Web Panel)

    - by andy
    I apologise if I sound at all dumb. This is my first dedicated server, having used a VPS for over a year, and I'm trying to set up a VPS on this new server. I purchased a subnet from my hosting provider that I believe allows me 6 usable IP addresses: 177.xx.xxx.201 - 177.xx.xxx.206. The subnet address looks like this: 177.xx.xxx.200/29. I've gone on my server and added them like it said on a wiki, like so: ip addr add 177.**.***.201/29 dev eth0. I did that for all six, and now when I go to them in the browser they point to my server. The problem is, I'm using OpenVZ Web Panel to create VMs (http://code.google.com/p/ovz-web-panel/), so I created a VM and assigned one of those IPs to it. However, when SSHing to that IP it SSHes to the dedicated server and not the VM. Am I missing something?
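
    With OpenVZ the extra addresses are normally not bound to the host's eth0 at all; they are handed to the containers as venet addresses (through the web panel's IP address field or vzctl), and binding them on the host first is exactly what makes SSH land on the host instead of the VM. A hedged sketch, with 101 standing in for the container's CTID and .202 for one of the subnet addresses:

        # Stop the host from answering for the address
        ip addr del 177.xx.xxx.202/29 dev eth0

        # Give the address to the container instead (venet is the OpenVZ default)
        vzctl set 101 --ipadd 177.xx.xxx.202 --save
        vzctl restart 101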

    Read the article

  • IP address and SEO

    - by Joel
    Hello, I currently host 5 websites on a dedicated server I own. I have several questions: Does it matter if I host all my sites on 100.100.100.100 (the server's IP, for example) or if I split them across 100.100.100.100, 100.100.100.101 ... 100.100.100.104 (that is, each site on its own IP)? Does it matter if I use a C-class for each website? Do search engines really care if your site has its own C-class? Do search engines penalize a website if it moves to a new IP? Thanks, Joel

    Read the article

  • What approach to take for SIMD optimizations

    - by goldenmean
    Hi, I am trying to optimize the code below for SIMD operations (8-way/4-way/2-way SIMD, whichever is possible and gives gains in performance). I am trying to analyze it first on paper to understand the algorithm used. How can I optimize it for SIMD?

        void idct(uint8_t *dst, int stride, int16_t *input, int type)
        {
            int16_t *ip = input;
            uint8_t *cm = ff_cropTbl + MAX_NEG_CROP;
            int A, B, C, D, Ad, Bd, Cd, Dd, E, F, G, H;
            int Ed, Gd, Add, Bdd, Fd, Hd;
            int i;

            /* Inverse DCT on the rows now */
            for (i = 0; i < 8; i++) {
                /* Check for non-zero values */
                if ( ip[0] | ip[1] | ip[2] | ip[3] | ip[4] | ip[5] | ip[6] | ip[7] ) {
                    A = M(xC1S7, ip[1]) + M(xC7S1, ip[7]);
                    B = M(xC7S1, ip[1]) - M(xC1S7, ip[7]);
                    C = M(xC3S5, ip[3]) + M(xC5S3, ip[5]);
                    D = M(xC3S5, ip[5]) - M(xC5S3, ip[3]);

                    Ad = M(xC4S4, (A - C));
                    Bd = M(xC4S4, (B - D));
                    Cd = A + C;
                    Dd = B + D;

                    E = M(xC4S4, (ip[0] + ip[4]));
                    F = M(xC4S4, (ip[0] - ip[4]));
                    G = M(xC2S6, ip[2]) + M(xC6S2, ip[6]);
                    H = M(xC6S2, ip[2]) - M(xC2S6, ip[6]);

                    Ed = E - G;
                    Gd = E + G;
                    Add = F + Ad;
                    Bdd = Bd - H;
                    Fd = F - Ad;
                    Hd = Bd + H;

                    /* Final sequence of operations over-write original inputs. */
                    ip[0] = (int16_t)(Gd + Cd);
                    ip[7] = (int16_t)(Gd - Cd);
                    ip[1] = (int16_t)(Add + Hd);
                    ip[2] = (int16_t)(Add - Hd);
                    ip[3] = (int16_t)(Ed + Dd);
                    ip[4] = (int16_t)(Ed - Dd);
                    ip[5] = (int16_t)(Fd + Bdd);
                    ip[6] = (int16_t)(Fd - Bdd);
                }
                ip += 8; /* next row */
            }

            ip = input;
            for (i = 0; i < 8; i++) {
                /* Check for non-zero values (bitwise or faster than ||) */
                if ( ip[1 * 8] | ip[2 * 8] | ip[3 * 8] | ip[4 * 8] | ip[5 * 8] | ip[6 * 8] | ip[7 * 8] ) {
                    A = M(xC1S7, ip[1*8]) + M(xC7S1, ip[7*8]);
                    B = M(xC7S1, ip[1*8]) - M(xC1S7, ip[7*8]);
                    C = M(xC3S5, ip[3*8]) + M(xC5S3, ip[5*8]);
                    D = M(xC3S5, ip[5*8]) - M(xC5S3, ip[3*8]);

                    Ad = M(xC4S4, (A - C));
                    Bd = M(xC4S4, (B - D));
                    Cd = A + C;
                    Dd = B + D;

                    E = M(xC4S4, (ip[0*8] + ip[4*8])) + 8;
                    F = M(xC4S4, (ip[0*8] - ip[4*8])) + 8;
                    if (type == 1) { //HACK
                        E += 16*128;
                        F += 16*128;
                    }
                    G = M(xC2S6, ip[2*8]) + M(xC6S2, ip[6*8]);
                    H = M(xC6S2, ip[2*8]) - M(xC2S6, ip[6*8]);

                    Ed = E - G;
                    Gd = E + G;
                    Add = F + Ad;
                    Bdd = Bd - H;
                    Fd = F - Ad;
                    Hd = Bd + H;

                    /* Final sequence of operations over-write original inputs. */
                    if (type == 0) {
                        ip[0*8] = (int16_t)((Gd + Cd) >> 4);
                        ip[7*8] = (int16_t)((Gd - Cd) >> 4);
                        ip[1*8] = (int16_t)((Add + Hd) >> 4);
                        ip[2*8] = (int16_t)((Add - Hd) >> 4);
                        ip[3*8] = (int16_t)((Ed + Dd) >> 4);
                        ip[4*8] = (int16_t)((Ed - Dd) >> 4);
                        ip[5*8] = (int16_t)((Fd + Bdd) >> 4);
                        ip[6*8] = (int16_t)((Fd - Bdd) >> 4);
                    } else if (type == 1) {
                        dst[0*stride] = cm[(Gd + Cd) >> 4];
                        dst[7*stride] = cm[(Gd - Cd) >> 4];
                        dst[1*stride] = cm[(Add + Hd) >> 4];
                        dst[2*stride] = cm[(Add - Hd) >> 4];
                        dst[3*stride] = cm[(Ed + Dd) >> 4];
                        dst[4*stride] = cm[(Ed - Dd) >> 4];
                        dst[5*stride] = cm[(Fd + Bdd) >> 4];
                        dst[6*stride] = cm[(Fd - Bdd) >> 4];
                    } else {
                        dst[0*stride] = cm[dst[0*stride] + ((Gd + Cd) >> 4)];
                        dst[7*stride] = cm[dst[7*stride] + ((Gd - Cd) >> 4)];
                        dst[1*stride] = cm[dst[1*stride] + ((Add + Hd) >> 4)];
                        dst[2*stride] = cm[dst[2*stride] + ((Add - Hd) >> 4)];
                        dst[3*stride] = cm[dst[3*stride] + ((Ed + Dd) >> 4)];
                        dst[4*stride] = cm[dst[4*stride] + ((Ed - Dd) >> 4)];
                        dst[5*stride] = cm[dst[5*stride] + ((Fd + Bdd) >> 4)];
                        dst[6*stride] = cm[dst[6*stride] + ((Fd - Bdd) >> 4)];
                    }
                } else {
                    if (type == 0) {
                        ip[0*8] = ip[1*8] = ip[2*8] = ip[3*8] =
                        ip[4*8] = ip[5*8] = ip[6*8] = ip[7*8] =
                            ((xC4S4 * ip[0*8] + (IdctAdjustBeforeShift<<16)) >> 20);
                    } else if (type == 1) {
                        dst[0*stride] = dst[1*stride] = dst[2*stride] = dst[3*stride] =
                        dst[4*stride] = dst[5*stride] = dst[6*stride] = dst[7*stride] =
                            cm[128 + ((xC4S4 * ip[0*8] + (IdctAdjustBeforeShift<<16)) >> 20)];
                    } else {
                        if (ip[0*8]) {
                            int v = ((xC4S4 * ip[0*8] + (IdctAdjustBeforeShift<<16)) >> 20);
                            dst[0*stride] = cm[dst[0*stride] + v];
                            dst[1*stride] = cm[dst[1*stride] + v];
                            dst[2*stride] = cm[dst[2*stride] + v];
                            dst[3*stride] = cm[dst[3*stride] + v];
                            dst[4*stride] = cm[dst[4*stride] + v];
                            dst[5*stride] = cm[dst[5*stride] + v];
                            dst[6*stride] = cm[dst[6*stride] + v];
                            dst[7*stride] = cm[dst[7*stride] + v];
                        }
                    }
                }
                ip++;  /* next column */
                dst++;
            }
        }

    Read the article

  • Willy Rotstein on Supply Chain Planning

    - by sarah.taylor(at)oracle.com
    Each time a merchandiser, buyer or planner in Retail makes a business decision around assortment, inventory, pricing and promotions there is an opportunity to improve both Profitability and Customer Service. Improving decision making, however, has always been a tricky business for retailers.  I have worked in this space for more than 15 years. I began my career as an academic, at Imperial College London, and then broadened this interest with Retailers, aiming to optimize their merchandising and supply chain decisions. Planning the business and optimizing profit is a complex process. The complexity arises from the variety of people involved, the large number of decisions to take across all business processes, the uncertainty intrinsic to the retail environment as well as the volume of data available for analysis.  Things are not getting any easier either. The advent of multi-channel, social media and mobile is taking these complexities to a new level and presenting additional opportunities for those willing to exploit them. I guess it is due to the complexities of the decision making process that, over the last couple of years working with Oracle Retail, I have witnessed a clear trend around the deployment of planning systems. Retailers are aiming to simplify their decision making processes. They want to use one joined up planning platform across the business and enhance it with "actionable" data mining and optimization techniques. At Oracle Retail, we have a vibrant community of international retailers who regularly come together to discuss the big issues in retail planning. It is a combination of fashion, grocery and speciality retailers, all sharing their best practice vision for planning and optimizing merchandise decisions. As part of the Retail Exchange program, at the recent National Retail Federation event in New York, I jointly hosted a Planning dinner with Peter Fitzgerald from Google UK, Retail Division. Those retailers from our international planning community who were in New York for the annual NRF event were able to attend. The group comprised some of Europe's great International Retail brands.  All sectors were represented by organisations like Mango, LVMH, Ahold, Morrisons, Shop Direct and River Island. They confirmed the current importance of engaging with Planning and Optimization issues. In particular the impact of the internet was a key topic. We had a great debate about new retail initiatives.  Peter highlighted how mobility is changing retail - in particular with the new "local availability search" initiative. We also had an exciting discussion around the opportunities to improve merchandising using the new data that is becoming available from search, social media and ecommerce sites. It will be our focus to continue to help retailers translate this data into better results while keeping their business operations simple. New developments in "actionable" analytics and computing capacity make this a very exciting area today. Watch this space for my contributions on these topics which will be made available through this blog. Oracle Retail has a strong Planning community. if you are a category manager, a planner, a buyer, a merchandiser, a retail supplier or any retail executive with a keen interest in planning then you would be very welcome to join Oracle Retail's Planning Community. 
As part of our community you will be able to join our in-person and virtual events, download topical white papers and best practice information specifically tailored to your area of interest.  If anyone would like to register their interest in joining our community of retailers discussing planning then please contact me at [email protected]   Willy Rotstein, Oracle Retail

    Read the article

  • Why CFOs Should Care About Big Data

    - by jmorourke
    The topic of “big data” clearly has reached a tipping point in 2012.  With plenty of coverage over the past few years in the IT press, we are now starting to see the topic of “big data” covered in mainstream business press, including a cover story in the October 2012 issue of the Harvard Business Review.  To help customers understand the challenges of managing “big data” as well as the opportunities that can be created by leveraging “big data”, Oracle has recently run and published the results of a customer survey, as well as white papers and articles on this topic.  Most recently, we commissioned a white paper titled “Mastering Big Data: CFO Strategies to Transform Insight into Opportunity”. The premise here is that “big data” is not just a topic that CIOs should pay attention to, but one that CFOs should understand and take advantage of as well.  Clearly, whoever masters the art and science of big data will be positioned for competitive advantage in their industries or markets.  That’s why smart CFOs are taking control of big data and business analytics projects, not just to uncover new ways to drive growth in a slowing global economy, but also to be a catalyst for change in the enterprise.  With an increasing number of CFOs now responsible for overseeing IT investments and providing strategic insight to the board, CFOs will be increasingly called upon to take a leadership role in assessing the value of “big data” initiatives, building on their traditional skills in reporting and helping managers analyze data to support decision making. Here’s a link to the white paper referenced above, which is posted on the Oracle C-Central/CFO web site, as well as some other resources that can help CFOs master the topic of “big data”: White Paper “Mastering Big Data:  CFO Strategies to Transform Insight into Opportunity CFO Market Watch article:  “Does Big Data Affect the CFO?” Oracle Survey Report:  “From Overload to Impact – An Industry Scorecard on Big Data Industry Challenges” Upcoming Big Data Webcast with Andrew McAfee Here’s a general link to Oracle C-Central/CFO in case you want to start there: www.oracle.com/c-central/cfo Feel free to contact me if you have any questions or need additional information:  [email protected]

    Read the article
