Search Results

Search found 7061 results on 283 pages for 'target'.


  • Using JavaScript for basic HTML layout

    - by Slobaum
    I've been doing HTML layout as well as programming for many years, and I'm seeing a growing issue recently. Folks who primarily do HTML layout are becoming increasingly comfortable using JavaScript to solve basic page layout problems. Rather than consider what HTML is capable of doing (to hit their target browsers), they're slapping on bloated JS frameworks that "fix" fairly basic problems.

    Let's get this out of the way right here: I find this practice annoying and often inconsiderate of those with special accessibility needs. Unfortunately, when you try to tell these folks that what they're doing isn't semantic, ideal, or possibly even a good idea, they always counter with the same old arguments: "JavaScript has a market saturation of 98%, we don't care about the other 2%," or "Who doesn't have JavaScript enabled these days?", or simply "We don't care about those users." I find that remarkably short-sighted.

    I would really like the opinion of the community at large. What do you think: am I holding too fast to a dying ideal? I am willing to accept that, but would like to be given good arguments as to why I should disregard 2%+ of my user base (among others). Is JavaScript's prevalence a good excuse to use a programmatic language to do basic layout, thus mucking up your behavior and layout? jQuery and similar "behavior"-based frameworks are blurring the lines, especially for those who don't realize the difference.

    Honestly (and probably most importantly), I would like some "argument ammo" to use against these folks when the "it's the right way to do it" argument is unacceptable. Can you cite sources outlining your stance, please? Thanks everybody, please be civil :)
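    To make the pattern concrete, here's a hypothetical example of the kind of fix I mean: a dropdown menu wired up with a JS framework when two lines of CSS already handle it and keep working for visitors with scripts disabled (selectors invented for illustration):

        /* CSS-only dropdown: no script required, still works with JS disabled */
        nav li > ul { display: none; }
        nav li:hover > ul { display: block; }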


  • Right-Time Retail Part 3

    - by David Dorf
    This is part three of the three-part series. Read Part 1 and Part 2 first.

    Right-Time Marketing

    Real-time isn’t just about executing faster; it extends to interactions with customers as well. As an industry, we’ve spent many years analyzing all the data that’s been collected. Yes, that data has been invaluable in helping us make better decisions like where to open new stores, how to assort those stores, and how to price our products. But the recent advances in technology are now making it possible to analyze and deliver that data very quickly… fast enough to impact a potential sale in near real-time. Let me give you two examples.

    Salesmen in car dealerships get pretty good at sizing people up. When a potential customer walks in the door, it doesn’t take long for the salesman to figure out the revenue at stake. Is this person a real buyer, or just looking for a fun test drive? Will this person buy today or three months from now? Will this person opt for the expensive packages, or go bare bones? While the salesman certainly asks some leading questions, much of the information is discerned through body language. But body language doesn’t translate very well over the web.

    Eloqua, which was acquired by Oracle earlier this year, reads internet body language. By tracking the behavior of the people visiting your web site, Eloqua categorizes visitors based on their propensity to buy. While Eloqua’s roots have been in B2B, we’ve been looking at leveraging the technology with ATG to target B2C. Knowing what sites were previously visited, how often the customer has been to your site recently, and how long they’ve spent searching can help you understand where the customer is in their purchase journey. And knowing that bit of information may be enough to help close the deal with a real-time offer, follow-up email, or online customer service pop-up.

    This isn’t so different from the days gone by when the clerk behind the counter of the corner store noticed you were lingering in a particular aisle, so he walked over to help you compare two products and close the sale. You appreciated the personalized service, and he knew the value of the long-term relationship. Move that same concept into the digital world and you have Oracle’s CX Suite, a cloud-based offering of end-to-end customer experience tools, assembled primarily from acquisitions. Those tools are Oracle Marketing (Eloqua), Oracle Commerce (ATG, Endeca), Oracle Sales (Oracle CRM On Demand), Oracle Service (RightNow), Oracle Social (Collective Intellect, Vitrue, Involver), and Oracle Content (Fatwire). We are providing the glue that binds the CIO and CMO together to unleash synergies that drive the top line higher and, by virtue of the cloud approach, keep costs at bay.
    My second example of real-time marketing takes place in the store but leverages the concepts of Web marketing. In 1962 the decline of personalized service in retail began. Anyone know the significance of that year? That’s when Target, K-Mart, and Walmart each opened their first stores, and over the succeeding years the industry chose scale over personal service. No longer were you known as “Jane with the snotty kid so make sure we check her out fast”; you suddenly became “time-starved female, age 20-30, with kids.” I’m not saying that was a bad thing. It was the right thing for our industry at the time, and it enabled a huge amount of growth, cheaper prices, and more variety of products. But scale alone is no longer good enough. Today’s sophisticated consumer demands scale, experience, and personal attention. To some extent we’ve delivered that on websites via the magic of cookies, your willingness to log in, and sophisticated data analytics.

    What store manager wouldn’t love a report detailing all the visitors to his store, where they came from, and which products they examined? People trackers are getting more sophisticated, incorporating infrared, video analytics, and even face recognition. (Next time you walk in front of a mannequin, don’t be surprised if it’s looking back.) But the ultimate marketing conduit is the mobile phone. Since each mobile phone emits a unique number on WiFi networks, it becomes the cookie of the physical world. Assuming Congress keeps privacy safeguards reasonable, we’ll have a win-win situation for both retailers and consumers. Retailers get to know more about the consumer’s purchase journey, and consumers get higher levels of service from the retailer.

    When I call my bank, a couple of things happen before the call is connected. A reverse look-up on my phone number identifies me so my accounts can be retrieved from Siebel CRM. Then the system anticipates why I’m calling based on recent transactions. In this example, it sees that I was just charged a foreign currency fee, so it assumes that’s the reason I’m calling. It puts all the relevant information on the customer service rep’s screen as it connects the call. When I complain about the fee, the rep immediately sees I’m a great customer and I travel lots, so she suggests switching me to their traveler’s card that doesn’t have foreign transaction fees. That technology is powered by a product called Oracle Real-Time Decisions, a rules engine built to execute very quickly, basically in the time it takes the phone to ring once.

    So let’s combine the power of that product with our new-found mobile cookie and provide contextual customer interactions in real-time. Our first opportunity comes when a customer crosses a pre-defined geo-fence, typically a boundary around the store. Context is the key to our interaction: that’s the customer (known or anonymous), the time of day and day of week, and location. Thomas near the downtown store on a Wednesday at noon means he’s heading to lunch. If he were near the mall location on a Saturday morning, that’s a completely different context. But on his way to lunch, we’ll let Thomas know that we’ve got a new shipment of ASICS running shoes on display with a simple text message. We used the context to look up Thomas’ past purchases and understood he was an avid runner. We used the fact that this was lunchtime to select the type of message, in this case an informational message instead of an offer. Thomas enters the store, phone in hand, and walks to the shoe department.
    He scans one of the new ASICS shoes using the convenient QR codes we provided on the shelf tags, but then he starts scanning low-end Nikes. Each scan is another opportunity to both learn from Thomas and potentially interact via another message. Since he historically buys low-end Nikes and keeps scanning them, he’s likely falling back into his old ways. Our marketing rules are currently set to move loyal customers to higher-margin products. We could have set the dials to increase visit frequency, move overstocked items, increase basket size, or many other settings, but today we are trying to move Thomas to higher-margin products. We send Thomas another text message; this time it’s a personalized offer for 10% off ASICS, good for 24 hours. Offering him a discount on Nikes would be throwing margin away since he buys those anyway. We are using our marketing dollars to change behavior in a way that increases the long-term value of Thomas.

    He decides to buy the ASICS and scans the discount code on his phone at checkout. Checkout is yet another opportunity to interact with Thomas, so the transaction is sent back to Oracle RTD for evaluation. Since Thomas didn’t buy anything with the shoes, we’ll print a bounce-back coupon on the receipt offering 30% off ASICS socks if he returns within seven days. We have successfully started moving Thomas from low-margin to high-margin products.

    In both of these marketing scenarios, we are able to leverage data in near real-time to decide how best to interact with the customer and increase the lifetime value of the customer. The key here is acting at the moment the customer shows interest, using the context of the situation. We aren’t pushing random products at haphazard times. We are tailoring the marketing to be very specific to this customer, and it’s the technology that allows this to happen in near real-time.

    Conclusion

    As we enable more right-time integrations and interactions, retailers will begin to offer increased service to their customers. Localized and personalized service at scale will drive loyalty and lead to meaningful revenue growth for the retailers that execute well. Our industry needs to support Commerce Anywhere… and commerce anytime as well.


  • Writing a "Hello World" Device Driver for kernel 2.6 using Eclipse

    - by Isaac
    Goal

    I am trying to write a simple device driver on Ubuntu. I want to do this using Eclipse (or a better IDE that is suitable for driver programming). Here is the code:

        #include <linux/module.h>

        static int __init hello_world( void )
        {
            printk( "hello world!\n" );
            return 0;
        }

        static void __exit goodbye_world( void )
        {
            printk( "goodbye world!\n" );
        }

        module_init( hello_world );
        module_exit( goodbye_world );

    My effort

    After some research, I decided to use Eclipse CDT for developing the driver (while I am still not sure if it supports multi-threaded debugging tools). So I:

    1. Installed Ubuntu 11.04 desktop x86 on a VMware virtual machine,
    2. Installed eclipse-cdt and linux-headers-2.6.38-8 using Synaptic Package Manager,
    3. Created a C project named TestDriver1 and copy-pasted the above code into it,
    4. Changed the default build command, make, to the following customized build command:
       make -C /lib/modules/2.6.38-8-generic/build M=/home/isaac/workspace/TestDriver1

    The problem

    I get an error when I try to build this project using Eclipse. Here is the log for the build:

        **** Build of configuration Debug for project TestDriver1 ****
        make -C /lib/modules/2.6.38-8-generic/build M=/home/isaac/workspace/TestDriver1 all
        make: Entering directory `/usr/src/linux-headers-2.6.38-8-generic'
        make: *** No rule to make target `vmlinux', needed by `all'. Stop.
        make: Leaving directory `/usr/src/linux-headers-2.6.38-8-generic'

    Interestingly, I get no error when I use the shell instead of Eclipse to build this project. To use the shell, I just create a Makefile containing obj-m += TestDriver1.o and use the above make command to build. So, something must be wrong with the Eclipse Makefile. Maybe it is looking for the vmlinux architecture (?) or something while the current architecture is x86. Maybe it's because of VMware? As I understand it, Eclipse creates the makefiles automatically, and modifying them manually would cause errors in the future OR make managing the makefiles difficult. So, how can I compile this project in Eclipse?
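    For reference, here is a sketch of the Makefile the working shell build uses: the obj-m line from above plus the standard kbuild all/clean targets, with the kernel version hard-coded to match my installed headers.

        # Makefile in /home/isaac/workspace/TestDriver1
        obj-m += TestDriver1.o

        all:
        	make -C /lib/modules/2.6.38-8-generic/build M=/home/isaac/workspace/TestDriver1 modules

        clean:
        	make -C /lib/modules/2.6.38-8-generic/build M=/home/isaac/workspace/TestDriver1 clean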


  • JavaServer Faces 2.0 for the Cloud

    - by Janice J. Heiss
    A new article now up on otn/java by Deepak Vohra, titled “JSF 2.0 for the Cloud, Part One,” shows how JavaServer Faces 2.0 provides features ideally suited for the virtualized computing resources of the cloud. The article focuses on the @ManagedBean annotation, implicit navigation, and resource handling. Vohra illustrates how the container-based model found in Java EE 7, which allows portable applications to target single machines as well as large clusters, is well suited to the cloud architecture.

    From the article:

    “Cloud services might not have been a factor when JavaServer Faces 2.0 (JSF 2.0) was developed, but JSF 2.0 provides features ideally suited for the cloud, for example:

    • The path-based resource handling in JSF 2.0 makes handling virtualized resources much easier and provides scalability with composite components.
    • REST-style GET requests and bookmarkable URLs in JSF 2.0 support the cloud architecture. Representational State Transfer (REST) software architecture is based on transferring the representation of resources identified by URIs. A RESTful resource or service is made available as a URI path. Resources can be accessed in various formats, such as XML, HTML, plain text, PDF, JPEG, and JSON, among others. REST offers the advantages of being simple, lightweight, and fast.
    • Ajax support in JSF 2.0 is integrable with Software as a Service (SaaS) by providing interactive browser-based Web applications.”

    In Part Two of the series, Vohra will examine features such as Ajax support, view parameters, preemptive navigation, event handling, and bookmarkable URLs. Have a look at the article here.


  • Stencil mask with AlphaTestEffect

    - by Brendan Wanlass
    I am trying to pull off the following effect in XNA 4.0: http://eng.utah.edu/~brendanw/question.jpg

    The purple area has 50% opacity. I have gotten pretty close with the following code:

        public static DepthStencilState AlwaysStencilState = new DepthStencilState()
        {
            StencilEnable = true,
            StencilFunction = CompareFunction.Always,
            StencilPass = StencilOperation.Replace,
            ReferenceStencil = 1,
            DepthBufferEnable = false,
        };

        public static DepthStencilState EqualStencilState = new DepthStencilState()
        {
            StencilEnable = true,
            StencilFunction = CompareFunction.Equal,
            StencilPass = StencilOperation.Keep,
            ReferenceStencil = 1,
            DepthBufferEnable = false,
        };

        ...

        if (_alphaEffect == null)
        {
            _alphaEffect = new AlphaTestEffect(_spriteBatch.GraphicsDevice);
            _alphaEffect.AlphaFunction = CompareFunction.LessEqual;
            _alphaEffect.ReferenceAlpha = 129;
            Matrix projection = Matrix.CreateOrthographicOffCenter(0,
                _spriteBatch.GraphicsDevice.PresentationParameters.BackBufferWidth,
                _spriteBatch.GraphicsDevice.PresentationParameters.BackBufferHeight, 0, 0, 1);
            _alphaEffect.Projection = world.SystemManager.GetSystem<RenderSystem>().Camera.View * projection;
        }

        _mask = new RenderTarget2D(_spriteBatch.GraphicsDevice,
            _spriteBatch.GraphicsDevice.PresentationParameters.BackBufferWidth,
            _spriteBatch.GraphicsDevice.PresentationParameters.BackBufferHeight,
            false, SurfaceFormat.Color, DepthFormat.Depth24Stencil8);

        _spriteBatch.GraphicsDevice.SetRenderTarget(_mask);
        _spriteBatch.GraphicsDevice.Clear(ClearOptions.Target | ClearOptions.Stencil, Color.Transparent, 0, 0);

        _spriteBatch.Begin(SpriteSortMode.Immediate, null, null, AlwaysStencilState, null, _alphaEffect);
        _spriteBatch.Draw(sprite.Texture, position, sprite.SourceRectangle, Color.White, 0f, sprite.Origin, 1f, SpriteEffects.None, 0);
        _spriteBatch.End();

        _spriteBatch.Begin(SpriteSortMode.Immediate, null, null, EqualStencilState, null, null);
        _spriteBatch.Draw(_maskTex, new Vector2(x * _maskTex.Width, y * _maskTex.Height), null, Color.White, 0f, Vector2.Zero, 1f, SpriteEffects.None, 0);
        _spriteBatch.End();

        _spriteBatch.GraphicsDevice.SetRenderTarget(null);
        _spriteBatch.GraphicsDevice.Clear(Color.Black);

        _spriteBatch.Begin();
        _spriteBatch.Draw((Texture2D)_mask, Vector2.Zero, null, Color.White, 0f, Vector2.Zero, 1f, SpriteEffects.None, layer / max_layer);
        _spriteBatch.End();

    My problem is, I can't get the AlphaTestEffect to behave. I can either mask over the semi-transparent purple junk and fill it in with the green design, or I can draw over the completely opaque grassy texture. How can I specify the exact opacity that needs to be replaced with the green design?
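    One direction I'm considering (untested sketch): instead of LessEqual, make the alpha test pass only where alpha is roughly 50%, so that just the semi-transparent purple region writes the stencil value:

        // Untested idea: only ~50%-alpha pixels pass the test and mark the stencil.
        // Assumes the source art uses one exact alpha value for the purple area.
        _alphaEffect.AlphaFunction = CompareFunction.Equal;
        _alphaEffect.ReferenceAlpha = 128;  // 50% of 255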


  • What is the most time-effective way to monitor & manage threats from bots and/or humans?

    - by CheeseConQueso
    I'm usually overwhelmed by the amount of tools that hosting companies provide to track and quantify traffic data and statistics, and I'm equally overwhelmed by the countless flavors of malicious 'attacks' that target any and every web site known to man.

    The security methods used to protect both the back and front end of a website are well documented and straightforward in terms of ease of implementation and application, but the army of autonomous bots knows no boundaries and will always find a niche of a website to infest. So what can be done to handle the inevitable swarm of bots that pound your domain with brute force? Whenever I look at the error logs for my domains, there are always thousands of entries that look like bots trying to sneak SQL code into the database by tricking the variables in the URL into giving them schema information or private data within the database.

    My barbaric and time-consuming plan of defense is just to monitor visitor statistics for those obvious patterns of abuse and either ban the IPs or ranges of IPs accordingly. Aside from that, I don't know what else I could do to prevent all of the ping-pong going on all day. Are there any good tools that automatically monitor this background activity (specifically activity that throws errors on the web and DB servers) and proactively deal with these source(s) of mayhem?


  • Can defect containment metrics be readily applied at an organizational level when there is only a consistent organizational process framework?

    - by Thomas Owens
    Defect containment metrics, such as total defect containment effectiveness (TDCE) and phase containment effectiveness (PCE), can be used to give a good indication of the quality of the process. TDCE measures the defects that are captured at some point between requirements and the release of a product into the field, indicating the overall effectiveness of the entire process at finding and removing defects. PCE provides more detail about each phase of the software development life cycle and how well the defect detection and removal techniques are working.

    Applying these metrics makes sense at a level where you have a well-defined process and methodology for product development, often a project. However, some organizations provide a process framework that is tailored at the project level. This process framework would include the necessary guidance for meeting certifications (ISO 9001, CMMI), practices for incorporating known good techniques (agile methods, Lean, Six Sigma), and requirements for legal or regulatory reasons. However, the specific details of how to gather requirements, design the system, produce the software, conduct testing, and release are left to the product development teams.

    Is there any effective way to apply defect containment metrics at an organizational level when only a process framework exists at the organizational level? If not, what might be some ideas for metrics that can be distilled from each project (each using a tailored process that fits into the organizational process framework) that capture defect containment, to discuss the ability of the process to find and remove defects?

    The end goal of such a metric would be to consolidate the defect containment practices of a large number of ongoing projects and report to management. The target audience would be people in roles such as the chief software engineer and the chief engineer (of all engineering disciplines) for the organization. Although project-specific data would be available, the idea is to produce something that quantifies the general effectiveness of all tailored processes across all ongoing projects. I would suspect that this data would also be presented as part of CMMI, ISO, or similar audits to demonstrate process quality.
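    For concreteness, these are the definitions I'm working from, with an invented example (numbers are purely illustrative):

        TDCE = defects removed before release
               / (defects removed before release + defects found in the field)

        PCE(phase) = defects caught in the phase
                     / (defects caught in the phase + defects that escaped the phase)

        Example: 90 defects removed before release and 10 found in the field
                 gives TDCE = 90 / (90 + 10) = 90%.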


  • Problems Using CloudFlare On Blogger

    - by the_archer
    Here's the situation. I got a TLD for my Blogger blog and set it up using the instructions from Blogger, which ask you to add two CNAME records:

    1. For the first CNAME, where it says Name, Label or Host, enter "www", and where it says Destination, Target or Points To, enter "ghs.google.com".
    2. For the second CNAME, enter "NHRILA4K2RJG" as the Name and "gv-GQMUMYGHAMJWECXFLJXVXABIV23C55JIPNIAVD5IGFSXT653O5GA.domainverify.googlehosted.com." as the Destination.

    I did that on my domain host, and everything was working smoothly:

    • Typing myblog.blogspot.com in the address bar brought me to my new address www.mynewaddress.tld
    • Typing mynewaddress.tld brings me to www.mynewaddress.tld

    Then I went through the instructions to set up CloudFlare and did everything as required. I saw that CloudFlare is active and working on my TLD www.mynewaddress.tld. However, when I type the blogspot address, i.e. myblog.blogspot.com, it shows a notice that the blog is not hosted on Blogger and that I should click "yes" to get redirected to the new website, even though the blog is still on Blogger. I think the problem might be with the second CNAME record Google asks to create, which I did not find imported into the CloudFlare nameservers, so I created that CNAME and added it to the CloudFlare panel.

    My question is: is that what will help Google determine that my blog is still hosted on Blogger? If so, should I turn CloudFlare off or on for that particular CNAME record? Any help is very much appreciated :)
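    For clarity, in zone-file terms the two records Blogger asks for look roughly like this (values copied from the instructions above; the long verification name is specific to my domain):

        www           IN CNAME  ghs.google.com.
        NHRILA4K2RJG  IN CNAME  gv-GQMUMYGHAMJWECXFLJXVXABIV23C55JIPNIAVD5IGFSXT653O5GA.domainverify.googlehosted.com.

    My working assumption, which I'd love confirmed, is that the verification record should be left un-proxied (DNS only) in CloudFlare, so that Google's lookup sees the googlehosted.com target rather than CloudFlare's proxy.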


  • Wired connection not working in Ubuntu 12.04 on a Lenovo G580 laptop

    - by shravankumar
    I found a solution at http://www.zyxware.com/articles/2680/solved-wired-connection-eth0-not-detected-in-ubuntu-12-04 and downloaded compat-wireless-2012-07-03-p.tar.bz2. Here are the steps I followed, along with their output.

    1. shravankumar@shravankumar-Lenovo-G580:~/Desktop/compat-wireless-2012-07-03-p$ scripts/driver-select alx

        Processing new driver-select request...
        Backup exists: Makefile.bk
        Backup exists: Makefile.bk
        Backup exists: drivers/net/ethernet/broadcom/Makefile.bk
        Backup exists: drivers/net/ethernet/atheros/Makefile.bk
        Backup exists: Makefile.bk
        Backup exists: Makefile.bk
        Backup exists: drivers/net/ethernet/broadcom/Makefile.bk

    2. shravankumar@shravankumar-Lenovo-G580:~/Desktop/compat-wireless-2012-07-03-p$ make

        make -C /lib/modules/3.2.0-23-generic/build M=/home/shravankumar/Desktop/compat-wireless-2012-07-03-p modules
        make[1]: Entering directory `/usr/src/linux-headers-3.2.0-23-generic'
        scripts/Makefile.build:44: /home/shravankumar/Desktop/compat-wireless-2012-07-03-p/drivers/net/ethernet/atheros/alx/Makefile: No such file or directory
        make[4]: *** No rule to make target `/home/shravankumar/Desktop/compat-wireless-2012-07-03-p/drivers/net/ethernet/atheros/alx/Makefile'. Stop.
        make[3]: *** [/home/shravankumar/Desktop/compat-wireless-2012-07-03-p/drivers/net/ethernet/atheros/alx] Error 2
        make[2]: *** [/home/shravankumar/Desktop/compat-wireless-2012-07-03-p/drivers/net/ethernet/atheros] Error 2
        make[1]: *** [_module_/home/shravankumar/Desktop/compat-wireless-2012-07-03-p] Error 2
        make[1]: Leaving directory `/usr/src/linux-headers-3.2.0-23-generic'
        make: *** [modules] Error 2

    3. shravankumar@shravankumar-Lenovo-G580:~/Desktop/compat-wireless-2012-07-03-p$ make install

        FATAL: Could not open /lib/modules/3.2.0-23-generic/modules.dep.temp for writing: Permission denied
        make: *** [uninstall] Error 1

    4. shravankumar@shravankumar-Lenovo-G580:~/Desktop/compat-wireless-2012-07-03-p$ modeprobe alx

        No command 'modeprobe' found, did you mean:
         Command 'modprobe' from package 'module-init-tools' (main)
        modeprobe: command not found

    I am new to Ubuntu. Please help me. Thanks in advance.
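    In case it helps, here is the sequence I plan to retry, on the assumption that step 3 failed only because make install needs root and step 4 was a typo for modprobe:

        cd ~/Desktop/compat-wireless-2012-07-03-p
        make
        sudo make install   # step 3 failed with 'Permission denied', so root is needed
        sudo modprobe alx   # step 4: the command is modprobe, not modeprobe

    If make still stops at the missing alx Makefile, I will re-extract a clean copy of the tarball before running scripts/driver-select alx again, since the 'Backup exists' messages suggest driver-select has already been run over this tree more than once.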


  • How to create a PPA for a C++ program?

    - by piotr
    This is a C++/gtkmm project created with NetBeans. How do I make a package for a PPA from this? I have created the target file structure (*.desktop, icon file, UI glade files):

    • The binary goes to /opt/extras.ubuntu.com/myagenda/bin/myagenda.
    • There is also a folder of glade files that must go to /opt/extras.ubuntu.com/myagenda/ui.
    • The desktop file goes to /usr/share/applications/myagenda.desktop.
    • The icon goes to /usr/share/icons/hicolor/scalable/apps/myagenda.svg.

    As you can see, there is a really small number of files. Now, how do I manage all this stuff to create a package on a PPA which knows where and how to put these files on their targets?

        ├── opt
        │   └── extras.ubuntu.com
        │       └── myagenda
        │           ├── bin
        │           │   └── myagenda
        │           └── ui
        │               ├── item_btn_delete.png
        │               ├── item_btn_edit.png
        │               ├── myagenda.png
        │               ├── myagenda.svg
        │               ├── reminder.png
        │               └── ui.glade
        └── usr
            └── share
                ├── applications
                │   └── myagenda.desktop
                └── icons
                    └── hicolor
                        └── scalable
                            └── apps
                                └── myagenda.svg

    Update: I created an install file in the debian directory with these targets:

        data/myagenda          /opt/extras.ubuntu.com/myagenda/bin
        data/ui/*              /opt/extras.ubuntu.com/myagenda/ui
        data/myagenda.desktop  /usr/share/applications
        data/myagenda.svg      /usr/share/icons/hicolor/scalable/apps

    After dpkg-buildpackage it builds, but for the amd64 architecture. Now I'm trying to change that to i386.
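    For the i386 question, my understanding (please correct me if this is wrong) is that a Launchpad PPA takes a source-only upload and its builders produce the binaries for each architecture themselves, so the local amd64 build is only a test. A sketch, with the PPA name as a placeholder:

        debuild -S -sa                                          # build a signed source package
        dput ppa:myname/myppa ../myagenda_0.1-1_source.changes  # upload; Launchpad builds the binaries
        dpkg-buildpackage -ai386                                # or: force a local i386 test build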


  • Cannot FTP without simultaneous SSH connection?

    - by Lucas
    I'm trying to set up an old box as a backup server (running 10.04.4 LTS). I intend to use 3rd-party software on my PC to periodically connect to my server via FTP(S) and mirror certain files. For some reason, all FTP connection attempts fail UNLESS I'm simultaneously connected via SSH. For example, if I use PuTTY to test the connection to port 21, the system hangs and times out:

        220 Connected to LeServer
        USER lucas
        331 Please specify the password.
        PASS [password]
        <cursor>

    However, when I'm simultaneously logged in (in another session), everything works:

        220 Connected to LeServer
        USER lucas
        331 Please specify the password.
        PASS [password]
        230 Login successful.

    Basically, this means that my software will never be able to connect on its own, as intended. I know that the correct port is open because it works (sometimes), and nmap gives me:

        Starting Nmap 5.00 ( http://nmap.org ) at 2012-03-20 16:15 CDT
        Interesting ports on xx.xxx.xx.x:
        Not shown: 995 closed ports
        PORT    STATE SERVICE
        21/tcp  open  ftp
        22/tcp  open  ssh
        53/tcp  open  domain
        139/tcp open  netbios-ssn
        445/tcp open  microsoft-ds

        Nmap done: 1 IP address (1 host up) scanned in 0.15 seconds

    My only hypothesis is that this has something to do with iptables. Maybe it's allowing only established connections? I don't think that's how I set it up, but maybe. Here are my iptables rules for INPUT:

        lucas@rearden:~$ sudo iptables -L INPUT
        Chain INPUT (policy DROP)
        target                    prot opt source    destination
        fail2ban-ssh              tcp  --  anywhere  anywhere     multiport dports ssh
        ufw-before-logging-input  all  --  anywhere  anywhere
        ufw-before-input          all  --  anywhere  anywhere
        ufw-after-input           all  --  anywhere  anywhere
        ufw-after-logging-input   all  --  anywhere  anywhere
        ufw-reject-input          all  --  anywhere  anywhere
        ufw-track-input           all  --  anywhere  anywhere
        ACCEPT                    tcp  --  anywhere  anywhere     tcp dpt:ftp

    I'm using vsftpd. Any thoughts/resources on how I could fix this? L
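    In case it's relevant, here is what I'm planning to try next (untested; assumes ufw is managing the rules shown above):

        sudo modprobe nf_conntrack_ftp   # FTP helper so data connections are tracked as RELATED
        sudo ufw allow 21/tcp            # make the control-port rule explicit
        # Passive mode also needs a fixed data-port range, e.g. in vsftpd.conf:
        #   pasv_min_port=40000
        #   pasv_max_port=40100
        # and then: sudo ufw allow 40000:40100/tcp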


  • iOS OpenGL transparency performance issue

    - by user346443
    I have built a game in Unity that uses OpenGL ES 1.1 on iOS. I have a nice constant frame rate of 30 until I place a semi-transparent texture over the top of my entire scene. I expect the drop in frames is due to the blending overhead with sorting the frame buffer. On the 4S and 3GS the frames stay at 30, but on the iPhone 4 the frame rate drops to 15-20, probably due to the extra pixels in the Retina display compared to the 3GS and the smaller CPU/GPU compared to the 4S.

    I would like to know if there is anything I can do to try to increase the frame rate when a transparent texture is rendered on top of the entire scene. Please note that the transparent texture overlay is a core part of the game and I can't disable anything else in the scene to speed things up. If it's guaranteed to make a difference, I guess I can switch to OpenGL ES 2.0 and write the shaders, but I would prefer not to, as I need to target older devices. I should add that the depth buffer is disabled and I'm blending using SrcAlpha One. Any advice would be highly appreciated. Cheers


  • How can I make my FireFox Browser AddOn Backwards Compatible?

    - by bwheeler96
    I'm making a Firefox browser add-on, and I just got all of the code working fine, but it will only install in Firefox 16, and I want it to be compatible at least from 10 upwards. Has anyone dealt with this issue? I have my package.json pointing to my install.rdf, and my install.rdf clearly states target applications. Is there any additional setup I need?

    Here is my package.json:

        {
            "name": "firefox-ext",
            "license": "MPL 2.0",
            "author": "",
            "version": "0.1",
            "fullName": "firefox-ext",
            "id": "jid1-AMCw25iQJof53w",
            "description": "a basic add-on"
        }

    And here is my install.rdf:

        <?xml version="1.0"?>
        <RDF xmlns="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:em="http://www.mozilla.org/2004/em-rdf#">
          <Description about="urn:mozilla:install-manifest">
            <em:id>jid1-AMCw25iQJof53w</em:id>
            <em:name>Generic App</em:name>
            <em:version>1.0</em:version>
            <em:type>2</em:type>
            <em:creator>Brian Wheeler</em:creator>
            <em:description>Good Stuff</em:description>
            <em:targetApplication>
              <Description>
                <em:id>{ec8030f7-c20a-464f-9b0e-13a3a9e97384}</em:id>
                <em:minVersion>1.0</em:minVersion>
                <em:maxVersion>19.0</em:maxVersion>
              </Description>
            </em:targetApplication>
          </Description>
        </RDF>

    I'm using the CFX CLI tools to make this, so everything has been built and tested with cfx init, cfx run, and cfx xpi. I just can't figure out compatibility with anything other than 16. Also, major bonus points if someone could explain the advantages of a rapid release cycle, because it really seems to have shot 3rd-party Mozilla developers in the foot in terms of software compatibility. Thanks, -Brian


  • What to use for simple cross-platform games instead of Flash?

    - by jmh_gr
    In short, for simple games: Is Flash still a good option for browser-based PC clients? It still has 90%+ penetration. What is a good alternative for mobile devices? Is HTML5 + JavaScript the choice for mobile? Or does one have to learn a new native language for each target platform (Android, Apple, Windows Phone)?

    If you desire further background: There are more blogs about the official demise of mobile Flash than I can count, along with endless useless and vitriolic comments. I'm actually trying to do something practical: build simple games that can be served across multiple platforms. Several months ago I plopped down $1100 for CS5.5 Web and am wading into Flash. Bummer. My question to people who actually develop simple games and apps: What platform should I use instead? Is Flash still a sensible platform for web-served PC users?

    For example, let's say I build a simple arcade game that I would like to serve as an app to mobile users and as a browser-based game to PC users. Should I still invest the time and effort to learn and develop in Flash for the PC users, while building a parallel code set in some other language for mobile users? My games are simple enough that it would be annoying, but not inconceivable, to maintain parallel code sets.


  • What build tools do not depend on Java (or Ruby)?

    - by Mohamed Meligy
    I'm wondering what generic build tools out there include their binary runtimes and do not depend on another environment that isn't shipped with them. For example, Ant requires Java, Rake requires Ruby, etc. It would also be great to hear about target-platform-agnostic tools, where I'd just give whatever command for building, whatever command for testing, etc., and could then define my artifacts in CI or so. I'd find something like that useful for building .NET projects (say, on both Windows .NET and Mono), and Node.js projects especially. I do not want to install Java and/or Ruby if what I want is a .NET build or a Node.js build. This is a bit of a general-awareness question, not an exact problem I'm facing; that's why it's here and not on StackOverflow.

    Update: To explain a bit more, what I'm after is a build script that would run MSBuild for compiling (in the .NET case) or maybe several Node/npm commands (in the Node case), and then run the remaining build/test steps, instead of setting all of these up in MSBuild (and I'm wondering whether there is an equivalent story in Node).


  • eSTEP TechCast - December 2011 - Solaris 11 - The First Cloud OS

    - by uwes
    Dear partner, we are pleased to announce our next eSTEP TechCast on Thursday, 1st December, and would be happy if you could join. Please see below the details for the next TechCast.

    Date and time: Thursday, 01 December 2011, 11:00 - 12:00 GMT (12:00 - 13:00 CET)

    Abstract: Solaris 11 contains many new features, particularly around improved virtualisation and network performance. Additionally, new software packaging for fool-proof upgrades, higher availability and reduced maintenance windows replaces the former SVR4 packaging and upgrade/patching methods.

    Target audience: Tech Presales

    Speaker: Andrew Gabriel

    Call info:
    Call-in toll-free number: 08006948154 (United Kingdom)
    Call-in toll-free number: +44-2081181001 (United Kingdom)
    Conference Code: 803 594 3
    Security Passcode: 9876

    Webex info (Oracle Web Conference):
    Meeting Number: 597 686 322
    Meeting Password: tech2011

    Playback / Recording / Archive: The webcasts will be recorded and will be available shortly after the event in the eSTEP portal under the Events tab, where you can also find material from previously delivered eSTEP TechCasts. Use your email address and PIN: eSTEP_2011 to get access. Feel free to have a look. We are happy to get your comments and feedback.

    Thanks and best regards, Partner HW Enablement EMEA


  • Stream.CopyTo() extension method

    - by DigiMortal
    In one of my applications I needed to copy data from one stream to another. After playing with streams a little bit, I wrote a CopyTo() extension method for the Stream class that you can use to copy the contents of the current stream to a target stream. Here is my extension method. It is my working draft, and it is possible that some more checks are needed before this extension method is ready to be part of some API or class library.

        public static void CopyTo(this Stream fromStream, Stream toStream)
        {
            if (fromStream == null)
                throw new ArgumentNullException("fromStream");
            if (toStream == null)
                throw new ArgumentNullException("toStream");

            var bytes = new byte[8092];
            int dataRead;
            while ((dataRead = fromStream.Read(bytes, 0, bytes.Length)) > 0)
                toStream.Write(bytes, 0, dataRead);
        }

    And here is an example of how to use this extension method:

        using (var stream = response.GetResponseStream())
        using (var ms = new MemoryStream())
        {
            stream.CopyTo(ms);

            // Do something with copied data
        }

    I am using this code to copy data from an HTTP response stream to a memory stream because I have to use a serializer that needs more than the response stream is able to offer.
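    One caveat worth adding: .NET 4.0 introduced a built-in Stream.CopyTo() with the same shape, so on that framework the instance method takes precedence and the extension above is redundant:

        // .NET 4.0 or later: CopyTo is built into Stream
        using (var stream = response.GetResponseStream())
        using (var ms = new MemoryStream())
        {
            stream.CopyTo(ms);
        }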


  • BizTalk 2009 - Pipeline Component Wizard Error

    - by Stuart Brierley
    When attempting to run the BizTalk Server Pipeline Component Wizard for the first time, I encountered an error that prevented the creation of the pipeline component project:

        System.ArgumentException : Value does not fall within the expected range

    I found the solution for this error in a couple of places: first a reference to the issue on the Codeplex project, and then a fuller description on a blog referring to the BizTalk 2006 implementation. To resolve this issue you need to make a change to the downloaded wizard solution code, by changing the file BizTalkPipelineComponentWizard.cs around line 304 from:

        // get a handle to the project. using mySolution will not work.
        pipelineComponentProject = this._Application.Solution.Projects.Item(Path.Combine(Path.GetFileNameWithoutExtension(projectFileName), projectFileName));

    to:

        // get a handle to the project. using mySolution will not work.
        if (this._Application.Solution.Projects.Count == 1)
            pipelineComponentProject = this._Application.Solution.Projects.Item(1);
        else
        {
            // In Doubt: Which project is the target?
            pipelineComponentProject = this._Application.Solution.Projects.Item(projectFileName);
        }

    Following this you need to uninstall the wizard, rebuild the solution, and install the wizard again.


  • Compiling NETGEAR WNR2000v3 firmware under 12.04.1 LTS

    - by Madmanguruman
    I'm trying to compile the GPL NETGEAR firmware for the WNR2000v3 router using Ubuntu 12.04.1 LTS. I've downloaded and extracted the source and installed everything I think I need, yet there are two build dependencies that I cannot resolve:

        > make menuconfig
        ....
        Build dependency: Please install ncurses. (Missing libncurses.so or ncurses.h)
        Build dependency: Please install zlib. (Missing libz.so or zlib.h)

    I've tried apt-getting all of the following packages:

    • libncurses5
    • libncurses5:i386
    • libncurses5-dev
    • zlib1g
    • zlib1g-dev
    • zlib1g-dbg

    The missing libs seem 'real': the target .so files aren't in /usr/lib, although the .h files are in /usr/include.

    EDIT: Some Googling told me of broken or missing symlinks. I found the following:

        /lib/i386-linux-gnu/libncurses.so.5 -> libncurses.so.5.9
        /lib/i386-linux-gnu/libncurses.so.5.9
        /lib/i386-linux-gnu/libncursesw.so.5 -> libncursesw.so.5.9
        /lib/i386-linux-gnu/libncursesw.so.5.9
        /lib/i386-linux-gnu/libz.so.1 -> libz.so.1.2.3.4
        /lib/i386-linux-gnu/libz.so.1.2.3.4

    so I tried to symlink these into /usr/lib:

        /usr/lib/libncurses.so -> /lib/i386-linux-gnu/libncurses.so.5
        /usr/lib/libz.so -> /lib/i386-linux-gnu/libz.so.1

    but the project still complains about missing libncurses and zlib.

    EDIT: I was able to get the dependencies to work under 8.04 and need to cross-reference things now. Does anyone have some tips on how to debug this sort of issue?
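    My next attempt, stated as an assumption rather than a verified fix: the firmware's dependency check predates multiarch and only looks in /usr/lib, while on 12.04 the -dev packages install the linker symlinks under /usr/lib/i386-linux-gnu, so it would be the dev .so files (not the runtime .so.5 files I linked above) that need to be linked into /usr/lib:

        dpkg -L libncurses5-dev | grep 'libncurses\.so$'   # confirm where the dev symlink actually lives
        sudo ln -s /usr/lib/i386-linux-gnu/libncurses.so /usr/lib/libncurses.so
        sudo ln -s /usr/lib/i386-linux-gnu/libz.so /usr/lib/libz.so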


  • How to Reach German Developers: dotnet Cologne 2011

    - by WeigeltRo
    If you want to promote tools, technologies, libraries, trainings or anything else of interest to software developers, you want to reach the right audience. Not the 9-to-5 people, but those who have the knowledge and passion that make them important multipliers. A great way to reach these people is community conferences. They are not the kind of conference that the 9-to-5 folks are "sent to" by their company, but one that the right people hear of via Twitter, Facebook, blogs or plain old word of mouth and choose to go to, often covering the costs for the day themselves (travel, entrance fee, hotel, taking the day off).

    If you want to reach German developers, there is one conference that has emerged as the large .NET community conference in Germany, quickly growing beyond being just a local event: the dotnet Cologne, which will be held for the third time on May 6, 2011 and which I'm co-organizing (this interview gives you a good idea of the history). This year's dotnet Cologne 2010 with its 300 attendees was a huge success. As in the year before, the conference was sold out weeks in advance, and feedback by attendees and sponsors was positive throughout. And the list of speakers and attendees read like a "who's who" of the German .NET community.

    Whether you'd like to present a product, a service or your company: you will meet the right target audience at dotnet Cologne. We're offering a broad variety of sponsorship opportunities, ranging from being a donor for the large raffle at the end of the day (software licenses, books, training vouchers, etc.) up to having a booth and/or giving a sponsored talk about your product (not necessarily in German; English is not a problem). Among the various sponsorship levels (bronze/silver/gold/platinum) there's most likely a package that will suit your needs, and if not, we're open for suggestions.

    We're happy to announce that already at this point in time (with over five months to go) we have a steadily growing list of partners: Microsoft, Intel, IDesign, SubMain, Comma Soft AG, GFU Köln, and EC Software. If you want to become a sponsor for the dotnet Cologne 2011, drop me a line at Roland.Weigelt at dotnet-koelnbonn.de and I'll send you our sponsor info.


  • Functional Languages that compile to Android's Dalvik VM?

    - by Berin Loritsch
    I have a software problem that fits the functional approach to programming, but the target market will be on the Android OS. I ask because there are functional languages that compile to the Java VM, but Dalvik bytecode != Java bytecode. Alternatively, do you know if the dx utility can intelligently convert the .class files generated by functional languages like Scala?

    Edit: In order to make this a bit more helpful to the community, and also to help me choose better, let me refine the question a bit:

    • Have you used any alternative languages with Dalvik? Which ones?
    • What are some "gotchas" (problems) that I might run into?
    • Is performance acceptable? That is, does the application still feel responsive to the user?

    I've never done mobile phone development, but I grew up on constrained devices, and I'm under no illusion that using non-standard languages on the platform comes for free. I just need to know if the cost is such that I should shoe-horn my approach into the default language (i.e., apply functional principles in the OOP language).


  • Games Localization: Cultural Points

    - by ultan o'broin
    A great article about localization considerations, this time in the games space. Well worth checking out. It's rare to see such all-encompassing articles about localization considerations aimed at designers. That's a shame. The industry assumes all these things are known. The evidence from practice is that they're not, and that they also need constant reinforcement. We're not in the games space in enterprise applications yet. However, there may be a role for them in the training space, but also in CRM, building relationships and contacts.

    Beyond the obvious considerations, check out the cultural aspects of games localization too. For example, Zynga's offerings, which you might have played on Facebook:

    "Zynga, which can lay claim to the two most popular social games on Facebook - FarmVille and CityVille - has recently localized both games for international audiences, and while CityVille has seen only localization for European languages, FarmVille has been localized for China, which involved rebuilding the game from the ground up. This localization process involved taking into account cultural considerations including changing the color palette to be brighter and increasing the size of the farm plots, to appeal to Chinese aesthetics and cultural experience."

    All the more reason to conduct research in your target markets, worldwide.


  • Duplicating someone's content legitimately & writing HTML to support that

    - by Codecraft
    I want to add content from other blogs to my own (with the authors' permission) to help build additional relevant content and support articles I've found useful that others have written. I'm looking into how to do this responsibly, i.e., by giving the original content author a boost and not competing against them for search traffic which should go to their site.

    In order to keep my duplicate content out of search, and to hint to the search engines where the original content is to be found, I've implemented:

        <head>
            <meta name='robots' content='noindex, follow'>
            <link rel='canonical' href='http://www.originalblog.com/original-post.html' />
        </head>

    Additionally, to boost the original article and to let readers know where it came from, I'll be adding something like this:

        <div>
            Article originally written by <a href='http://www.authorswebsite.com'>Authors Name</a>
            and reproduced with permission.<br/>
            <a href='http://www.originalblog.com/original-post.html' target='new'>
                Read the original article here.
            </a>
        </div>

    All that remains is a way to 'officially' credit the original author in the HTML for the search spiders to see. Can anyone tell me a way to do this, possibly using rel="author" (as far as I can see, that's only good for my own original content)? Or perhaps it doesn't matter, given that the reproduced pages will be kept out of search engines? Also, have I overlooked anything in the approach?


  • eSTEP TechCast - November 2013

    - by uwes
    Dear partner, we are pleased to announce our next eSTEP TechCast on Thursday, 7th of November, and would be happy if you could join. Please see below the details for the next TechCast.

    Date and time: Thursday, 07 November 2013, 11:00 - 12:00 GMT (12:00 - 13:00 CET; 15:00 - 16:00 GST)

    Title: The Operational Management benefits of Engineered Systems

    Abstract: Oracle Engineered Systems require significantly less administration effort than traditional platforms. This presentation will explain why this is the case, how much can be saved, and discusses the best practices recommended to maximise Engineered Systems operational efficiency.

    Target audience: Tech Presales

    Speaker: Julian Lane

    Call info:
    Call-in toll-free number: 08006948154 (United Kingdom)
    Call-in toll-free number: +44-2081181001 (United Kingdom)
    Conference Code: 803 594 3
    Security Passcode: 9876

    Webex info (Oracle Web Conference):
    Meeting Number: 599 156 244
    Meeting Password: tech2011

    Playback / Recording / Archive: The webcasts will be recorded and will be available shortly after the event in the eSTEP portal under the Events tab, where you can also find material from previously delivered eSTEP TechCasts. Use your email address and PIN: eSTEP_2011 to get access. Feel free to have a look. We are happy to get your comments and feedback.

    Thanks and best regards, Partner HW Enablement EMEA


  • Resolution independence - resize on the fly or ship all sizes?

    - by RecursiveCall
    My game relies heavily on textures of various sizes, with some being full-screen, and it is targeted at multiple resolutions. I found that resizing textures (downsizing) works quite well for this game's art style (it's not pixel art or anything like that). I asked my artist to ensure that all textures at the edges of the screen are created in such a way that they can safely "overflow" off screen; this means that aspect ratio is not an issue.

    So, with no aspect-ratio issues, I figured that I would simply ask my artist to create assets in very high resolution and then resize them down to the appropriate screen resolution. The question is, when and how do I do that? Do I pre-resize everything to common resolutions in Photoshop and package all assets in the final product (increasing the download size the user has to deal with), and then select the appropriate asset based on the detected resolution? Or do I ship with the largest set of textures, detect the resolution on load, set a render target, draw all the downsized assets to it, and use that? Or, for the latter, do I use some sort of CPU-side algorithm to resize on game load?
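    To make the render-target option concrete, here is a rough XNA-style sketch of what I have in mind; the method and variable names are invented and this is untested:

        // Draw the high-res art once into an appropriately sized render target at
        // load time, then use the render target as the texture from then on.
        RenderTarget2D Downsize(GraphicsDevice device, SpriteBatch batch,
                                Texture2D highRes, int width, int height)
        {
            var target = new RenderTarget2D(device, width, height);
            device.SetRenderTarget(target);
            device.Clear(Color.Transparent);
            batch.Begin();
            batch.Draw(highRes, new Rectangle(0, 0, width, height), Color.White);
            batch.End();
            device.SetRenderTarget(null);   // back to the backbuffer
            return target;                  // cache this; contents can be lost on a device reset
        }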

    Read the article
