Search Results

Search found 351 results on 15 pages for 'theoretical'.

Page 14/15 | < Previous Page | 10 11 12 13 14 15  | Next Page >

  • Computer science undergraduate project ideas

    - by Mehrdad Afshari
    Hopefully, I'm going to finish my undergraduate studies next semester and I'm thinking about the topic of my final project. And yes, I've read the questions with duplicate title. I'm asking this from a bit different viewpoint, so it's not an exact dupe. I've spent at least half of my life coding stuff in different languages and frameworks so I'm not looking at this project as a way to learn much about coding and preparing for real world apps or such. I've done lots of those already. But since I have to do it to complete my degree, I felt I should spend my time doing something useful instead of throwing the whole thing out. I'm planning to make it an open source project or a hosted Web app (depending on the type) if I can make a high quality thing out of it, so I decided to ask StackOverflow what could make a useful project. Situation I've plenty of freedom about the topic. They also require 30-40 pages of text describing the project. I have the following points in mind (the more satisfied, the better): Something useful for software development Something that benefits the community Having academic value is great Shouldn't take more than a month of development (I know I'm lazy). Shouldn't be related to advanced theoretical stuff (soft computing, fuzzy logic, neural networks, ...). I've been a business-oriented software developer. It should be software oriented. While I love hacking microcontrollers and other fun embedded electronic things, I'm not really good at soldering and things like that. I'm leaning toward a Web application (think StackOverflow, PasteBin, NerdDinner, things like those). Technology It's probably going to be done in .NET (C#, F#) and Windows platform. If I really like the project (cool low level hacking), I might actually slip to C/C++. But really, C# is what I'm efficient at. Ideas Programming language, parsing and compiler related stuff: Designing a domain specific programming language and compiler Templating language compiled to C# or IL Database tools and related code generation stuff Web related technologies: ASP.NET MVC View engine doing something cool (don't know what exactly...) Specific-purpose, small, fast ASP.NET-based Web framework Applications: Visual Studio plugin to integrate with Bazaar (it's too much work, I think). ASP.NET based, jQuery-powered issue tracker (and possibly, project lifecycle management as a whole - poor man's TFS) Others: Something related to GPGPU Looking forward for great ideas! Unfortunately, I can't help on a currently existing project. I need to start my own to prevent further problems (as it's an undergrad project, nevertheless).

    Read the article

  • Data validation best practices: how can I better construct user feedback?

    - by Cory Larson
    Data validation, whether it be domain object, form, or any other type of input validation, could theoretically be part of any development effort, no matter its size or complexity. I sometimes find myself writing informational or error messages that might seem harsh or demanding to unsuspecting users, and frankly I feel like there must be a better way to describe the validation problem to the user. I know that this topic is subjective and argumentative. StackOverflow might not be the proper channel for diving into this subject, but like I've mentioned, we all run into this at some point or another. There are so many StackExchange sites now; if there is a better one, feel free to share! Basically, I'm looking for good resources on data validation and user feedback that results from it at a theoretical level. Topics and questions I'm interested in are: Content Should I be describing what the user did correctly or incorrectly, or simply what was expected? How much detail can the user read before they get annoyed? (e.g. Is "Username cannot exceed 20 characters." enough, or should it be described more fully, such as "The username cannot be empty, and must be at least 6 characters but cannot exceed 30 characters."?) Grammar How do I decide between phrases like "must not," "may not," or "cannot"? Delivery This can depend on the project, but how should the information be delivered to the user? Should it be obtrusive (e.g. JavaScript alerts) or friendly? Should they be displayed prominently? Immediately (i.e. without confirmation steps, etc.)? Logging Do you bother logging validation errors? Internationalization Some cultures prefer or better understand directness over subtlety and vice-versa (e.g. "Don't do that!" vs. "Please check what you've done."). How do I cater to the majority of users? I may edit this list as I think more about the topic, but I'm genuinely interest in proper user feedback techniques. I'm looking for things like research results, poll results, etc. I've developed and refined my own techniques over the years that users seem to be okay with, but I work in an environment where the users prefer to adapt to what you give them over speaking up about things they don't like. I'm interested in hearing your experiences in addition to any resources to which you may be able to point me.
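
    A tiny, hedged Python sketch of the "content" question above: deriving the message text from the constraint values keeps the wording consistent across fields. The field name and limits here are invented for illustration only.

        # Hypothetical sketch: generate user-facing text from the constraint values
        # so every field phrases its rules the same way.
        def length_message(label, min_len, max_len):
            if min_len and max_len:
                return f"{label} must be between {min_len} and {max_len} characters."
            return f"{label} cannot exceed {max_len} characters."

        def validate_username(value):
            errors = []
            if not (6 <= len(value) <= 20):
                errors.append(length_message("Username", 6, 20))
            return errors

        print(validate_username("bob"))  # ['Username must be between 6 and 20 characters.']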

    Read the article

  • Is this asking too much of a browser?

    - by Matt Ball
    I'm embedding a large array in <script> tags in my HTML, like this (nothing surprising): <script> var largeArray = [/* lots of stuff in here */]; </script> In this particular example, the array has 210,000 elements. That's well below the theoretical maximum of 2^31 - by 4 orders of magnitude. Here's the fun part: if I save JS source for the array to a file, that file is 44 megabytes (46,573,399 bytes, to be exact). If you want to see for yourself, you can download it from my Dropbox. (All the data in there is canned, so much of it is repeated. This will not be the case in production.) Now, I'm really not concerned about serving that much data. My server gzips its responses, so it really doesn't take all that long to get the data over the wire. However, there is a really nasty tendency for the page, once loaded, to crash the browser. I'm not testing at all in IE (this is an internal tool). My primary targets are Chrome 8 and Firefox 3.6. In Firefox, I can see a reasonably useful error in the console: Error: script stack space quota is exhausted In Chrome, I simply get the sad-tab page: Cut to the chase, already Is this really too much data for our modern, "high-performance" browsers to handle? Is there anything I can do* to gracefully handle this much data? Incidentally, I was able to get this to work (read: not crash the tab) on-and-off in Chrome. I really thought that Chrome, at least, was made of tougher stuff, but apparently I was wrong... Edit 1 @Crayon: I wasn't looking to justify why I'd like to dump this much data into the browser at once. Short version: either I solve this one (admittedly not-that-easy) problem, or I have to solve a whole slew of other problems. I'm opting for the simpler approach for now. @various: right now, I'm not especially looking for ways to actually reduce the number of elements in the array. I know I could implement Ajax paging or what-have-you, but that introduces its own set of problems for me in other regards. @Phrogz: each element looks something like this: {dateTime:new Date(1296176400000), terminalId:'terminal999', 'General___BuildVersion':'10.05a_V110119_Beta', 'SSM___ExtId':26680, 'MD_CDMA_NETLOADER_NO_BCAST___Valid':'false', 'MD_CDMA_NETLOADER_NO_BCAST___PngAttempt':0} @Will: but I have a computer with a 4-core processor, 6 gigabytes of RAM, over half a terabyte of disk space ...and I'm not even asking for the browser to do this quickly - I'm just asking for it to work at all! ? *other than the obvious: sending less data to the browser

    Read the article

  • Personal Project - Next practical language/tech to learn

    - by Paul Nathan
    I'm working on a personal project doing some finance analysis. It's a totally new field for me, and I'm really having fun with it so far, plus working in the high-level language arena is a great break from my embedded systems daytime work. I have a MySQL backend on a non-local server with a pile of stock data. My task now is to do some analysis of the stocks and produce something approximating a useful result. There are a couple technical difficulties. (1) I have a lot of records. To be precise, I believe I'm near 100K records right now, and this number grows by 6.1K each weekday. I need to create a way to rummage through these fields and do data analysis - based on a given computation, go look at this other set. Fine and dandy, nothing too outre. But this means I could really use a straightforward API for talking to MySQL. (2) Ideally, it runs on OS X 10.4.11. No Windows/Linux machine at home. (3) I can use PHP, C++, Perl, etc. I even have an R installation. I'm pretty flexible with stuff, so long as it runs on OS X. (Lots of options here, pick water, H20, or dihydrogen monoxide ;-) ) (4)Lack of hassle. While I like clever and fun ways of doing things, I'm trying to get some analysis done, not spend ten hours doing installation work and scratching my head figuring out a theoretical syntax question needed to spout out "hello world". What's the question? I'd like to dig into something different than my usual PHP/C++/C toolset. I'm looking for recommendations for languages/technologies that will assist me and meet the above requirements. In particular, I've heard a lot of buzz about F# and Python on SO. I've used CLISP for small problems before, and kinda liked it. I'm seeking opinions about those in particular. edit:since I rent the DB server and have a limited amount of CPU time online, I'm trying to do the analysis on a local machine.
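
    Since Python is one of the candidates being weighed, here is a hedged sketch of what "a straightforward API for talking to MySQL" could look like using the MySQLdb (MySQL-python) driver. The host, credentials, and the "quotes" table/columns are invented for illustration; any DB-API 2.0 driver looks much the same.

        # Minimal sketch with MySQLdb; connection details and schema are hypothetical.
        import MySQLdb

        conn = MySQLdb.connect(host="db.example.com", user="analyst",
                               passwd="secret", db="stocks")
        cur = conn.cursor()
        cur.execute("SELECT symbol, close_price FROM quotes WHERE trade_date >= %s",
                    ("2010-01-01",))

        by_symbol = {}
        for symbol, close_price in cur:            # iterate rows without fetchall()
            by_symbol.setdefault(symbol, []).append(float(close_price))

        # Toy "analysis": average closing price per symbol.
        for symbol, prices in by_symbol.items():
            print(symbol, sum(prices) / len(prices))

        conn.close()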

    Read the article

  • PHP - static DB class vs DB singleton object

    - by Marco Demaio
    I don't want to create a discussion about singleton better than static or better than global, etc. I read dozens of questions about it on SO, but I couldn't come up with an answer to this SPECIFIC question, so I hope someone could now illuminate me by answering this question with one (or more) really simple EXAMPLES, and not theoretical discussions. In my app I have the typical DB class needed to perform tasks on DB without having to write everywhere in code mysql_connect/mysql_select_db/mysql... (moreover in future I might decide to use another type of DB engine in place of mySQL so obviously I need a class of abstraction). I could write the class either as a static class: class DB { private static $connection = FALSE; //connection to be opened //DB connection values private static $server = NULL; private static $usr = NULL; private static $psw = NULL; private static $name = NULL; public static function init($db_server, $db_usr, $db_psw, $db_name) { //simply stores connection values, without opening connection } public static function query($query_string) { //performs query over already opened connection, if not open, it opens connection 1st } ... } or as a Singleton class: class DBSingleton { private $inst = NULL; private $connection = FALSE; //connection to be opened //DB connection values private $server = NULL; private $usr = NULL; private $psw = NULL; private $name = NULL; public static function getInstance($db_server, $db_usr, $db_psw, $db_name) { //simply stores connection values, without opening connection if($inst === NULL) $this->inst = new DBSingleton(); return $this->inst; } private __construct()... public function query($query_string) { //performs query over already opened connection, if connection is not open, it opens connection 1st } ... } Then in my app, if I want to query the DB, I could do //Performing query using static DB object DB::init(HOST, USR, PSW, DB_NAME); DB::query("SELECT..."); //Performing query using DB singleton $temp = DBSingleton::getInstance(HOST, USR, PSW, DB_NAME); $temp->query("SELECT..."); My simple brain sees the singleton's only advantage as avoiding having to declare each method of the class as 'static'. I'm sure some of you could give me an EXAMPLE of a real advantage of the singleton in this specific case. Thanks in advance.

    Read the article

  • Why is it still so hard to write software?

    - by nornagon
    Writing software, I find, is composed of two parts: the Idea, and the Implementation. The Idea is about thinking: "I have this problem; how do I solve it?" and further, "how do I solve it elegantly?" The answers to these questions are obtainable by thinking about algorithms and architecture. The ideas come partially through analysis and partially through insight and intuition. The Idea is usually the easy part. You talk to your friends and co-workers and you nut it out in a meeting or over coffee. It takes an hour or two, plus revisions as you implement and find new problems. The Implementation phase of software development is so difficult that we joke about it. "Oh," we say, "the rest is a Simple Matter of Code." Because it should be simple, but it never is. We used to write our code on punch cards, and that was hard: mistakes were very difficult to spot, so we had to spend extra effort making sure every line was perfect. Then we had serial terminals: we could see all our code at once, search through it, organise it hierarchically and create things abstracted from raw machine code. First we had assemblers, one level up from machine code. Mnemonics freed us from remembering the machine code. Then we had compilers, which freed us from remembering the instructions. We had virtual machines, which let us step away from machine-specific details. And now we have advanced tools like Eclipse and Xcode that perform analysis on our code to help us write code faster and avoid common pitfalls. But writing code is still hard. Writing code is about understanding large, complex systems, and tools we have today simply don't go very far to help us with that. When I click "find all references" in Eclipse, I get a list of them at the bottom of the window. I click on one, and I'm torn away from what I was looking at, forced to context switch. Java architecture is usually several levels deep, so I have to switch and switch and switch until I find what I'm really looking for -- by which time I've forgotten where I came from. And I do that all day until I've understood a system. It's taxing mentally, and Eclipse doesn't do much that couldn't be done in 1985 with grep, except eat hundreds of megs of RAM. Writing code has barely changed since we were staring at amber on black. We have the theoretical groundwork for much more advanced tools, tools that actually work to help us comprehend and extend the complex systems we work with every day. So why is writing code still so hard?

    Read the article

  • Django admin fails when using includes in urlpatterns

    - by zenWeasel
    I am trying to refactor out my application a little bit to keep it from getting too unwieldy. So I started to move some of the urlpatterns out to sub files as the documentation proposes. Besides the fact that it just doesn't seem to be working (the items are not being rerouted), when I go to the admin it says that 'urlpatterns has not been defined'. The urls.py I have at the root of my application is: if settings.ENABLE_SSL: urlpatterns = patterns('', (r'^checkout/orderform/onepage/(\w*)/$','checkout.views.one_page_orderform',{'SSL':True},'commerce.checkout.views.single_product_orderform'), ) else: urlpatterns = patterns('', (r'^checkout/orderform/onepage/(\w*)/$','commerce.checkout.views.single_product_orderform'), ) urlpatterns+= patterns('', (r'^$', 'alchemysites.views.route_to_home'), (r'^%s/' % settings.DAJAXICE_MEDIA_PREFIX, include('dajaxice.urls')), (r'^/checkout/', include('commerce.urls')), (r'^/offers',include('commerce.urls')), (r'^/order/',include('commerce.urls')), (r'^admin/', include(admin.site.urls)), (r'^accounts/login/$', login), (r'^accounts/logout/$', logout), (r'^(?P<path>.*)/$','alchemysites.views.get_path'), (r'^static/(?P<path>.*)$', 'django.views.static.serve', {'document_root':settings.MEDIA_ROOT}), The urls I have moved out so far are checkout/offers/order, which are all subapps of 'commerce'. So, to be clear: /urls.py is the one in question (included above), and /commerce/urls.py, the urls.py I want to include, is: order_info = { 'queryset': Order.objects.all(), } urlpatterns+= patterns('', (r'^offers/$','offers.views.start_offers'), (r'^offers/([a-zA-Z0-9-]*)/order/(\d*)/add/([a-zA-Z0-9-]*)/(\w*)/next/([a-zA-Z0-9-)/$','offers.views.show_offer'), (r'^reports/orders/$', list_detail.object_list,order_info), ) and the offers application lies under commerce. And so the additional problem is that admin will not work at all, so I'm thinking I killed it somewhere with my includes. Things I have checked for: Is the urlpatterns variable accidentally getting reset somewhere (i.e. urlpatterns = patterns, instead of urlpatterns+= patterns) Are the patterns in commerce.urls valid (yes, when moved back to root they work). So from there I am stumped. I can move everything back into the root, but was trying to get a little decoupled, not just for theoretical reasons but for some short-term ones. Lastly if I enter www.domainname/checkout/orderform/onepage/xxxjsd I get the correct page. However, entering www.domainname/checkout/ gets handled by the alchemysites.views.get_path. If no one has the answer (because this is pretty darn specific), then is there a good way to troubleshoot urls.py? It seems to just be trial and error. Seems there should be some sort of parser that will tell you what your urlpatterns will do.
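
    For what it's worth, the 'urlpatterns has not been defined' message is consistent with the included commerce/urls.py starting with urlpatterns += patterns(...) before urlpatterns exists, and the leading slashes in (r'^/checkout/', ...) would keep those includes from ever matching (Django matches against the URL with the leading slash stripped), which is why /checkout/ falls through to the catch-all get_path view. A hedged sketch of how commerce/urls.py might be restructured, using the old patterns() syntax from the question; the view paths and the Order import location are assumptions taken from the snippets above:

        # commerce/urls.py -- sketch only; view paths and the Order import are assumptions.
        from django.conf.urls.defaults import patterns
        from django.views.generic import list_detail
        from commerce.models import Order

        order_info = {'queryset': Order.objects.all()}

        # Use '=' rather than '+=': an included URLconf starts with no urlpatterns.
        urlpatterns = patterns('',
            (r'^offers/$', 'offers.views.start_offers'),
            (r'^reports/orders/$', list_detail.object_list, order_info),
        )

    The includes in the root urls.py would then be written without the leading slash, e.g. (r'^checkout/', include('commerce.urls')), and placed above the catch-all pattern.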

    Read the article

  • How to catch 'exceptions' for out of order execution in Workflow Foundation 4?

    - by Alex Key
    Hi, I am attempting to model a workflow using a "WCF Workflow Service" in .net / vs 2010 that needs to handle out of order execution gracefully (but not allow it - if that makes sense!?) For example I have 2 receive activities, one called Initialize and the other called GetValue, inside a FlowChart. In most cases Initialize should be called first and GetValue after (as modeled in the flow chart). However if GetValue is executed before Initialize I do not want to return a generic "out of order" exception (although, when I look at the WCF test client, I can't actually see an exception), but instead a custom exception saying something like "you must initialize first". In theory I could model this with lots of parallel activities and conditions to check if Initialized / Running / Terminated etc. But the business process I am modelling is very, very similar to a state machine... except it must handle people executing things in the wrong order. Ideally I would like to catch the "out of order" exception (though I don't think it's really an exception as such), check the 'exception' to see which function was attempted, and then handle it. I have done some research around enabling AllowBufferedReceive. However I don't want to be able to execute out of order (I don't think), but instead give a detailed response if it does happen. I've looked at the new beta state machine template for WF 4 - but I'm not sure if it does what I'm after. I'm not sure if I have the wrong end of the stick, so any help would be greatly appreciated. [EDIT] To help clarify... Sorry it's a tricky one to explain. The standard I am trying to implement (the e-learning standard SCORM RTE) is structured like a state machine i.e. certain functions can only be executed in certain states. However the standard specifies that if the calling client tries to execute a function that it is not meant to, then a warning should be issued... for example "you cannot use GetValue(), because you have not yet Initialized". Ideally I'd like to structure the workflow as the theoretical state machine and not have to use multiple if/else's to handle all the scenarios where something could be executed out-of-order. I'd like to catch an out-of-order exception (but I don't think there is such an exception - as it's not in the debugger) and rethrow it.

    Read the article

  • Make a compiled binary run at native speed flawlessly without recompiling from source on a another system?

    - by unknownthreat
    I know that many people, at a first glance of the question, may immediately yell out "Java", but no, I know Java's qualities. Allow me to elaborate my question first. Normally, when we want our program to run at a native speed on a system, whether it be Windows, Mac OS X, or Linux, we need to compile from source codes. If you want to run a program of another system in your system, you need to use a virtual machine or an emulator. While these tools allow you to use the program you need on the non-native OS, they sometimes have problems of performance and glitches. We also have a newer compiler called "JIT Compiler", where the compiler will parse the bytecode program to native machine language before execution. The performance may increase to a very good extent with JIT Compiler, but the performance is still not the same as running it on a native system. Another program on Linux, WINE, is also a good tool for running Windows program on Linux system. I have tried running Team Fortress 2 on it, and tried experiment with some settings. I got ~40 fps on Windows at its mid-high setting on 1280 x 1024. On Linux, I need to turn everything low at 1280 x 1024 to get ~40 fps. There are 2 notable things though: Polygon model settings do not seem to affect framerate whether I set it low or high. When there are post-processing effects or some special effects that require manipulation of drawn pixels of the current frame, the framerate will drop to 10-20 fps. From this point, I can see that normal polygon rendering is just fine, but when it comes to newer rendering methods that requires graphic card to the job, it slows down to a crawl. Anyway, this question is rather theoretical. Is there anything we can do at all? I see that WINE can run STEAM and Team Fortress 2. Although there are flaws, they can run at lower setting. Or perhaps, I should also ask, "is it possible to translate one whole program on a system to another system without recompiling from source and get native speed?" I see that we also have AOT Compiler, is it possible to use it for something like this? Or there are so many constraints (such as DirectX call or differences in software architecture) that make it impossible to have a flawless and not native to the system program that runs at native speed?

    Read the article

  • Pruning data for better viewing on loglog graph - Matlab

    - by Geodesic
    Hi Guys, just wondering if anyone has any ideas about an issue I'm having. I have a fair amount of data that needs to be displayed on one graph. Two theoretical lines that are bold and solid are displayed on top, then 10 experimental data sets that converge to these lines are graphed, each using a different identifier (eg the + or o or a square etc). These graphs are on a log scale that goes up to 1e6. The first few decades of the graph (< 1e3) look fine, but as all the datasets converge (> 1e3) it's really difficult to see what data is what. There's over 1000 data points per decade which I can prune linearly to an extent, but if I do this too much the lower end of the graph will suffer in resolution. What I'd like to do is prune logarithmically, strongest at the high end, working back to 0. My question is: how can I get a logarithmically scaled index vector rather than a linear one? My initial assumption was that as my data is linear I could just use a linear index to prune, which led to something like this (but for all decades): //%grab indices per decade ind12 = find(y >= 1e1 & y <= 1e2); indlow = find(y < 1e2); indhigh = find(y > 1e4); ind23 = find(y >= 1e2 & y <= 1e3); ind34 = find(y >= 1e3 & y <= 1e4); //%We want ind12 indexes in this decade, find spacing tot23 = round(length(ind23)/length(ind12)); tot34 = round(length(ind34)/length(ind12)); //%grab ones to keep ind23keep = ind23(1):tot23:ind23(end); ind34keep = ind34(1):tot34:ind34(end); indnew = [indlow' ind23keep ind34keep indhigh']; loglog(x(indnew), y(indnew)); But this causes the prune to behave in a jumpy fashion obviously. Each decade has the number of points that I'd like, but as it's a linear distribution, the points tend to be clumped at the high end of the decade on the log scale. Any ideas on how I can do this?
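
    The core trick the question asks about - a logarithmically scaled index vector - can be built directly from logspace-style spacing. Here is a hedged sketch in Python/NumPy rather than MATLAB (logspace, round and unique play the same roles there), assuming the data are roughly linear in the index as stated:

        # Keep ~points_per_decade samples per decade by log-spacing the indices:
        # dense at the low end, increasingly sparse toward the high end.
        import numpy as np

        def log_prune_indices(n, points_per_decade=200):
            total = int(points_per_decade * np.log10(n))
            idx = np.round(np.logspace(0, np.log10(n), total)).astype(int)
            return np.unique(idx) - 1          # 1-based positions -> 0-based indices

        y = np.cumsum(np.random.rand(100000))  # stand-in for the real data
        x = np.arange(1, y.size + 1)
        keep = log_prune_indices(y.size)
        # loglog(x[keep], y[keep]) would then plot the pruned set
        print(keep.size, keep[:5], keep[-3:])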

    Read the article

  • Efficient list compacting

    - by Patrik
    Suppose you have a list of unsigned ints. Suppose some elements are equal to 0 and you want to push them to the back. Currently I use this code (list is a pointer to a list of unsigned ints of size n): for (i = 0; i < n; ++i) { if (list[i]) continue; int j; for (j = i + 1; j < n && !list[j]; ++j); int z; for (z = j + 1; z < n && list[z]; ++z); if (j == n) break; memmove(&(list[i]), &(list[j]), sizeof(unsigned int) * (z - j)); int s = z - j + i; for(j = s; j < z; ++j) list[j] = 0; i = s - 1; } Can you think of a more efficient way to perform this task? The snippet is purely theoretical; in the production code, each element of list is a 64-byte struct. EDIT: I'll post my solution. Many thanks to Jonathan Leffler. void RemoveDeadParticles(int * list, int * n) { int i, j = *n - 1; for (; j >= 0 && list[j] == 0; --j); for (i = 0; i < j; ++i) { if (list[i]) continue; memcpy(&(list[i]), &(list[j]), sizeof(int)); list[j] = 0; for (; j >= 0 && list[j] == 0; --j); if (i == j) break; } *n = i + 1; }
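
    To make the algorithm in the EDIT above explicit, here is a hedged Python sketch of the same swap-from-the-end compaction (O(n), order not preserved, zeros pushed to the back):

        # Fill each zero slot with a live element pulled from the tail,
        # shrinking the live region as we go. Returns the new logical length.
        def compact(lst):
            j = len(lst) - 1
            while j >= 0 and lst[j] == 0:          # find last non-zero element
                j -= 1
            i = 0
            while i < j:
                if lst[i] == 0:
                    lst[i] = lst[j]                # pull a live element from the tail
                    lst[j] = 0
                    while j >= 0 and lst[j] == 0:
                        j -= 1
                i += 1
            return j + 1

        data = [5, 0, 3, 0, 0, 7, 2, 0]
        n = compact(data)
        print(data[:n])   # [5, 2, 3, 7] -- zeros pushed to the back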

    Read the article

  • Is there a way to efficiently yield every file in a directory containing millions of files?

    - by Josh Smeaton
    I'm aware of os.listdir, but as far as I can gather, that gets all the filenames in a directory into memory, and then returns the list. What I want is a way to yield a filename, work on it, and then yield the next one, without reading them all into memory. Is there any way to do this? I worry about the case where filenames change, new files are added, and files are deleted using such a method. Some iterators prevent you from modifying the collection during iteration, essentially by taking a snapshot of the state of the collection at the beginning, and comparing that state on each move operation. If there is an iterator capable of yielding filenames from a path, does it raise an error if there are filesystem changes (add, remove, rename files within the iterated directory) which modify the collection? There could potentially be a few cases that could cause the iterator to fail, and it all depends on how the iterator maintains state. Using S.Lott's example: filea.txt fileb.txt filec.txt Iterator yields filea.txt. During processing, filea.txt is renamed to filey.txt and fileb.txt is renamed to filez.txt. When the iterator attempts to get the next file, if it were to use the filename filea.txt to find its current position in order to find the next file and filea.txt is not there, what would happen? It may not be able to recover its position in the collection. Similarly, if the iterator were to fetch fileb.txt when yielding filea.txt, it could look up the position of fileb.txt, fail, and produce an error. If the iterator instead were able to somehow maintain an index dir.get_file(0), then maintaining positional state would not be affected, but some files could be missed, as their indexes could be moved to an index 'behind' the iterator. This is all theoretical of course, since there appears to be no built-in (python) way of iterating over the files in a directory. There are some great answers below, however, that solve the problem by using queues and notifications. Edit: The OS of concern is Redhat. My use case is this: Process A is continuously writing files to a storage location. Process B (the one I'm writing) will be iterating over these files, doing some processing based on the filename, and moving the files to another location. Edit: Definition of valid: Adjective 1. Well grounded or justifiable, pertinent. (Sorry S.Lott, I couldn't resist). I've edited the paragraph in question above.
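
    On current Python versions there is a built-in lazy option: os.scandir yields entries as the directory is read instead of building the whole list up front. A hedged sketch follows (Python 3.6+ for the context-manager form used here); the path is hypothetical, and the snapshot/rename caveats discussed above still depend on the underlying filesystem's readdir behaviour:

        # Lazy iteration with os.scandir; entries stream out as the directory is read.
        import os

        def iter_files(path):
            with os.scandir(path) as entries:
                for entry in entries:
                    if entry.is_file(follow_symlinks=False):
                        yield entry.name

        for name in iter_files("/var/incoming"):   # hypothetical storage location
            print(name)                            # real code would process/move the file here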

    Read the article

  • Another Marketing Conference, part two – the afternoon

    - by Roger Hart
    In my previous post, I’ve covered the morning sessions at AMC2012. Here’s the rest of the write-up. I’ve skipped Charles Nixon’s session which was a blend of funky futurism and professional development advice, but you can see his slides here. I’ve also skipped the Google presentation, as it was a little thin on insight. 6 – Brand ambassadors: Getting universal buy in across the organisation, Vanessa Northam Slides are here This was the strongest enforcement of the idea that brand and campaign values need to be delivered throughout the organization if they’re going to work. Vanessa runs internal communications at e-on, and shared her experience of using internal comms to align an organization and thereby get the most out of a campaign. She views the purpose of internal comms as: “…to help leaders, to communicate the purpose and future of an organization, and support change.” This (and culture) primes front line staff, which creates customer experience and spreads brand. You ensure a whole organization knows what’s going on with both internal and external comms. If everybody is aligned and informed, if everybody can clearly articulate your brand and campaign goals, then you can turn everybody into an advocate. Alignment is a powerful tool for delivering a consistent experience and message. The pathological counter example is the one in which a marketing message goes out, which creates inbound customer contacts that front line contact staff haven’t been briefed to handle. The NatWest campaign was again mentioned in this context. The good example was e-on’s cheaper tariff campaign. Building a groundswell of internal excitement, and even running an internal launch meant everyone could contribute to a good customer experience. They found that meter readers were excited – not a group they’d considered as obvious in providing customer experience. But they were a group that has a lot of face-to-face contact with customers, and often were asked questions they may not have been briefed to answer. Being able to communicate a simple new message made it easier for them, and also let them become a sales and marketing asset to the organization. 7 – Goodbye Internet, Hello Outernet: the rise and rise of augmented reality, Matt Mills I wasn’t going to write this up, because it was essentially a sales demo for Aurasma. But the technology does merit some discussion. Basically, it replaces QR codes with visual recognition, and provides a simple-looking back end for attaching content. It’s quite sexy. But here’s my beef with it: QR codes had a clear visual language – when you saw one you knew what it was and what to do with it. They were clunky, but they had the “getting started” problem solved out of the box once you knew what you were looking at. However, they fail because QR code reading isn’t native to the platform. You needed an app, which meant you needed to know to download one. Consequentially, you can’t use QR codes with and ubiquity, or depend on them. This means marketers, content providers, etc, never pushed them, and they remained and awkward oddity, a minority sport. Aurasma half solves problem two, and re-introduces problem one, making it potentially half as useful as a QR code. It’s free, and you can apparently build it into your own apps. Add to that the likelihood of it becoming native to the platform if it takes off, and it may have legs. I guess we’ll see. 8 – We all need to code, Helen Mayor Great title – good point. 
If there was anybody in the room who didn’t at least know basic HTML, and if Helen’s presentation inspired them to learn, that’s fantastic. However, this was a half hour sales pitch for a basic coding training course. Beyond advocating coding skills it contained no useful content. Marketers may also like to consider some of these resources if they’re looking to learn code: Code Academy – free interactive tutorials Treehouse – learn web design, web dev, or app dev WebPlatform.org – tutorials and documentation for web tech  11 – Understanding our inner creativity, Margaret Boden This session was the most theoretical and probably least actionable of the day. It also held my attention utterly. Margaret spoke fluently, fascinatingly, without slides, on the subject of types of creativity and how they work. It was splendid. Yes, it raised a wry smile whenever she spoke of “the content of advertisements” and gave an example from 1970s TV ads, but even without the attempt to meet the conference’s theme this would have been thoroughly engaging. There are, Margaret suggested, three types of creativity: Combinatorial creativity The most common form, and consisting of synthesising ideas from existing and familiar concepts and tropes. Exploratory creativity Less common, this involves exploring the limits and quirks of a particular constraint or style. Transformational creativity This is uncommon, and arises from finding a way to do something that the existing rules would hold to be impossible. In essence, this involves breaking one of the constraints that exploratory creativity is composed from. Combinatorial creativity, she suggested, is particularly important for attaching favourable ideas to existing things. As such is it probably worth developing for marketing. Exploratory creativity may then come into play in something like developing and optimising an idea or campaign that now has momentum. Transformational creativity exists at the edges of this exploration. She suggested that products may often be transformational, but that marketing seemed unlikely to in her experience. This made me wonder about Listerine. Crucially, transformational creativity is characterised by there being some element of continuity with the strictures of previous thinking. Once it has happened, there may be  move from a revolutionary instance into an explored style. Again, from a marketing perspective, this seems to chime well with the thinking in Youngme Moon’s book: Different Talking about the birth of Modernism is visual art, Margaret pointed out that transformational creativity has historically risked a backlash, demanding what is essentially an education of the market. This is best accomplished by referring back to the continuities with the past in order to make the new familiar. Thoughts The afternoon is harder to sum up than the morning. It felt less concrete, and was troubled by a short run of poor presentations in the middle. Mainly, I found myself wrestling with the internal comms issue. It’s one of those things that seems astonishingly obvious in hindsight, but any campaign – particularly any large one – is doomed if the people involved can’t believe in it. We’ve run things here that haven’t gone so well, of course we have; who hasn’t? I’m not going to air any laundry, but people not being informed (much less aligned) feels like a common factor. It’s tough though. Managing and anticipating information needs across an organization of any size can’t be easy. 
Even the simple things like ensuring sales and support departments know what’s in a product release, and what messages go with it are easy to botch. The thing I like about framing this as a brand and campaign advocacy problem is that it makes it likely to get addressed. Better is always sexier than less-worse. Any technical communicator who’s ever felt crowded out by a content strategist or marketing copywriter  knows this – increasing revenue gets a seat at the table far more readily than reducing support costs, even if the financial impact is identical. So that’s it from AMC. The big thought-provokers were social buying behaviour and eliciting behaviour change, and the value of internal communications in ensuring successful campaigns and continuity of customer experience. I’ll be chewing over that for a while, and I’d definitely return next year.      

    Read the article

  • Big Data's Killer App…

    - by jean-pierre.dijcks
    Recently Keith spent  some time talking about the cloud on this blog and I will spare you my thoughts on the whole thing. What I do want to write down is something about the Big Data movement and what I think is the killer app for Big Data... Where is this coming from, ok, I confess... I spent 3 days in cloud land at the Cloud Connect conference in Santa Clara and it was quite a lot of fun. One of the nice things at Cloud Connect was that there was a track dedicated to Big Data, which prompted me to some extend to write this post. What is Big Data anyways? The most valuable point made in the Big Data track was that Big Data in itself is not very cool. Doing something with Big Data is what makes all of this cool and interesting to a business user! The other good insight I got was that a lot of people think Big Data means a single gigantic monolithic system holding gazillions of bytes or documents or log files. Well turns out that most people in the Big Data track are talking about a lot of collections of smaller data sets. So rather than thinking "big = monolithic" you should be thinking "big = many data sets". This is more than just theoretical, it is actually relevant when thinking about big data and how to process it. It is important because it means that the platform that stores data will most likely consist out of multiple solutions. You may be storing logs on something like HDFS, you may store your customer information in Oracle and you may store distilled clickstream information in some distilled form in MySQL. The big question you will need to solve is not what lives where, but how to get it all together and get some value out of all that data. NoSQL and MapReduce Nope, sorry, this is not the killer app... and no I'm not saying this because my business card says Oracle and I'm therefore biased. I think language is important, but as with storage I think pragmatic is better. In other words, some questions can be answered with SQL very efficiently, others can be answered with PERL or TCL others with MR. History should teach us that anyone trying to solve a problem will use any and all tools around. For example, most data warehouses (Big Data 1.0?) get a lot of data in flat files. Everyone then runs a bunch of shell scripts to massage or verify those files and then shoves those files into the database. We've even built shell script support into external tables to allow for this. I think the Big Data projects will do the same. Some people will use MapReduce, although I would argue that things like Cascading are more interesting, some people will use Java. Some data is stored on HDFS making Cascading the way to go, some data is stored in Oracle and SQL does do a good job there. As with storage and with history, be pragmatic and use what fits and neither NoSQL nor MR will be the one and only. Also, a language, while important, does in itself not deliver business value. So while cool it is not a killer app... Vertical Behavioral Analytics This is the killer app! And you are now thinking: "what does that mean?" Let's decompose that heading. First of all, analytics. I would think you had guessed by now that this is really what I'm after, and of course you are right. But not just analytics, which has a very large scope and means many things to many people. I'm not just after Business Intelligence (analytics 1.0?) or data mining (analytics 2.0?) but I'm after something more interesting that you can only do after collecting large volumes of specific data. 
That all important data is about behavior. What do my customers do? More importantly why do they behave like that? If you can figure that out, you can tailor web sites, stores, products etc. to that behavior and figure out how to be successful. Today's behavior that is somewhat easily tracked is web site clicks, search patterns and all of those things that a web site or web server tracks. that is where the Big Data lives and where these patters are now emerging. Other examples however are emerging, and one of the examples used at the conference was about prediction churn for a telco based on the social network its members are a part of. That social network is not about LinkedIn or Facebook, but about who calls whom. I call you a lot, you switch provider, and I might/will switch too. And that just naturally brings me to the next word, vertical. Vertical in this context means per industry, e.g. communications or retail or government or any other vertical. The reason for being more specific than just behavioral analytics is that each industry has its own data sources, has its own quirky logic and has its own demands and priorities. Of course, the methods and some of the software will be common and some will have both retail and service industry analytics in place (your corner coffee store for example). But the gist of it all is that analytics that can predict customer behavior for a specific focused group of people in a specific industry is what makes Big Data interesting. Building a Vertical Behavioral Analysis System Well, that is going to be interesting. I have not seen much going on in that space and if I had to have some criticism on the cloud connect conference it would be the lack of concrete user cases on big data. The telco example, while a step into the vertical behavioral part is not really on big data. It used a sample of data from the customers' data warehouse. One thing I do think, and this is where I think parts of the NoSQL stuff come from, is that we will be doing this analysis where the data is. Over the past 10 years we at Oracle have called this in-database analytics. I guess we were (too) early? Now the entire market is going there including companies like SAS. In-place btw does not mean "no data movement at all", what it means that you will do this on data's permanent home. For SAS that is kind of the current problem. Most of the inputs live in a data warehouse. So why move it into SAS and back? That all worked with 1 TB data warehouses, but when we are looking at 100TB to 500 TB of distilled data... Comments? As it is still early days with these systems, I'm very interested in seeing reactions and thoughts to some of these thoughts...

    Read the article

  • What's up with LDoms: Part 2 - Creating a first, simple guest

    - by Stefan Hinker
    Welcome back! In the first part, we discussed the basic concepts of LDoms and how to configure a simple control domain.  We saw how resources were put aside for guest systems and what infrastructure we need for them.  With that, we are now ready to create a first, very simple guest domain.  In this first example, we'll keep things very simple.  Later on, we'll have a detailed look at things like sizing, IO redundancy, other types of IO as well as security. For now,let's start with this very simple guest.  It'll have one core's worth of CPU, one crypto unit, 8GB of RAM, a single boot disk and one network port.  CPU and RAM are easy.  The network port we'll create by attaching a virtual network port to the vswitch we created in the primary domain.  This is very much like plugging a cable into a computer system on one end and a network switch on the other.  For the boot disk, we'll need two things: A physical piece of storage to hold the data - this is called the backend device in LDoms speak.  And then a mapping between that storage and the guest domain, giving it access to that virtual disk.  For this example, we'll use a ZFS volume for the backend.  We'll discuss what other options there are for this and how to chose the right one in a later article.  Here we go: root@sun # ldm create mars root@sun # ldm set-vcpu 8 mars root@sun # ldm set-mau 1 mars root@sun # ldm set-memory 8g mars root@sun # zfs create rpool/guests root@sun # zfs create -V 32g rpool/guests/mars.bootdisk root@sun # ldm add-vdsdev /dev/zvol/dsk/rpool/guests/mars.bootdisk \ mars.root@primary-vds root@sun # ldm add-vdisk root mars.root@primary-vds mars root@sun # ldm add-vnet net0 switch-primary mars That's all, mars is now ready to power on.  There are just three commands between us and the OK prompt of mars:  We have to "bind" the domain, start it and connect to its console.  Binding is the process where the hypervisor actually puts all the pieces that we've configured together.  If we made a mistake, binding is where we'll be told (starting in version 2.1, a lot of sanity checking has been put into the config commands themselves, but binding will catch everything else).  Once bound, we can start (and of course later stop) the domain, which will trigger the boot process of OBP.  By default, the domain will then try to boot right away.  If we don't want that, we can set "auto-boot?" to false.  Finally, we'll use telnet to connect to the console of our newly created guest.  The output of "ldm list" shows us what port has been assigned to mars.  By default, the console service only listens on the loopback interface, so using telnet is not a large security concern here. root@sun # ldm set-variable auto-boot\?=false mars root@sun # ldm bind mars root@sun # ldm start mars root@sun # ldm list NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME primary active -n-cv- UART 8 7680M 0.5% 1d 4h 30m mars active -t---- 5000 8 8G 12% 1s root@sun # telnet localhost 5000 Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. ~Connecting to console "mars" in group "mars" .... Press ~? for control options .. {0} ok banner SPARC T3-4, No Keyboard Copyright (c) 1998, 2011, Oracle and/or its affiliates. All rights reserved. OpenBoot 4.33.1, 8192 MB memory available, Serial # 87203131. Ethernet address 0:21:28:24:1b:50, Host ID: 85241b50. 
{0} ok We're done, mars is ready to install Solaris, preferably using AI, of course ;-)  But before we do that, let's have a little look at the OBP environment to see how our virtual devices show up here: {0} ok printenv auto-boot? auto-boot? = false {0} ok printenv boot-device boot-device = disk net {0} ok devalias root /virtual-devices@100/channel-devices@200/disk@0 net0 /virtual-devices@100/channel-devices@200/network@0 net /virtual-devices@100/channel-devices@200/network@0 disk /virtual-devices@100/channel-devices@200/disk@0 virtual-console /virtual-devices/console@1 name aliases We can see that setting the OBP variable "auto-boot?" to false with the ldm command worked.  Of course, we'd normally set this to "true" to allow Solaris to boot right away once the LDom guest is started.  The setting for "boot-device" is the default "disk net", which means OBP would try to boot off the devices pointed to by the aliases "disk" and "net" in that order, which usually means "disk" once Solaris is installed on the disk image.  The actual devices these aliases point to are shown with the command "devalias".  Here, we have one line for both "disk" and "net".  The device paths speak for themselves.  Note that each of these devices has a second alias: "net0" for the network device and "root" for the disk device.  These are the very same names we've given these devices in the control domain with the commands "ldm add-vnet" and "ldm add-vdisk".  Remember this, as it is very useful once you have several dozen disk devices... To wrap this up, in this part we've created a simple guest domain, complete with CPU, memory, boot disk and network connectivity.  This should be enough to get you going.  I will cover all the more advanced features and a little more theoretical background in several follow-on articles.  For some background reading, I'd recommend the following links: LDoms 2.2 Admin Guide: Setting up Guest Domains Virtual Console Server: vntsd manpage - This includes the control sequences and commands available to control the console session. OpenBoot 4.x command reference - All the things you can do at the ok prompt

    Read the article

  • Creating New Scripts Dynamically in Lua

    - by bazola
    Right now this is just a crazy idea that I had, but I was able to implement the code and get it working properly. I am not entirely sure of what the use cases would be just yet. What this code does is create a new Lua script file in the project directory. The ScriptWriter takes as arguments the file name, a table containing any arguments that the script should take when created, and a table containing any instance variables to create by default. My plan is to extend this code to create new functions based on inputs sent in during its creation as well. What makes this cool is that the new file is both generated and loaded dynamically on the fly. Theoretically you could get this code to generate and load any script imaginable. One use case I can think of is an AI that creates scripts to map out it's functions, and creates new scripts for new situations or environments. At this point, this is all theoretical, though. Here is the test code that is creating the new script and then immediately loading it and calling functions from it: function Card:doScriptWriterThing() local scriptName = "ScriptIAmMaking" local scripter = scriptWriter:new(scriptName, {"argumentName"}, {name = "'test'", one = 1}) scripter:makeFileForLoadedSettings() local loadedScript = require (scriptName) local scriptInstance = loadedScript:new("sayThis") print(scriptInstance:get_name()) --will print test print(scriptInstance:get_one()) -- will print 1 scriptInstance:set_one(10000) print(scriptInstance:get_one()) -- will print 10000 print(scriptInstance:get_argumentName()) -- will print sayThis scriptInstance:set_argumentName("saySomethingElse") print(scriptInstance:get_argumentName()) --will print saySomethingElse end Here is ScriptWriter.lua local ScriptWriter = {} local twoSpaceIndent = " " local equalsWithSpaces = " = " local newLine = "\n" --scriptNameToCreate must be a string --argumentsForNew and instanceVariablesToCreate must be tables and not nil function ScriptWriter:new(scriptNameToCreate, argumentsForNew, instanceVariablesToCreate) local instance = setmetatable({}, { __index = self }) instance.name = scriptNameToCreate instance.newArguments = argumentsForNew instance.instanceVariables = instanceVariablesToCreate instance.stringList = {} return instance end function ScriptWriter:makeFileForLoadedSettings() self:buildInstanceMetatable() self:buildInstanceCreationMethod() self:buildSettersAndGetters() self:buildReturn() self:writeStringsToFile() end --very first line of any script that will have instances function ScriptWriter:buildInstanceMetatable() table.insert(self.stringList, "local " .. self.name .. " = {}" .. newLine) table.insert(self.stringList, newLine) end --every script made this way needs a new method to create its instances function ScriptWriter:buildInstanceCreationMethod() --new() function declaration table.insert(self.stringList, ("function " .. self.name .. ":new(")) self:buildNewArguments() table.insert(self.stringList, ")" .. newLine) --first line inside :new() function table.insert(self.stringList, twoSpaceIndent .. "local instance = setmetatable({}, { __index = self })" .. newLine) --add designated arguments inside :new() self:buildNewArgumentVariables() --create the instance variables with the loaded values for key,value in pairs(self.instanceVariables) do table.insert(self.stringList, twoSpaceIndent .. "instance." .. key .. equalsWithSpaces .. value .. newLine) end --close the :new() function table.insert(self.stringList, twoSpaceIndent .. "return instance" .. 
newLine) table.insert(self.stringList, "end" .. newLine) table.insert(self.stringList, newLine) end function ScriptWriter:buildNewArguments() --if there are arguments for :new(), add them for key,value in ipairs(self.newArguments) do table.insert(self.stringList, value) table.insert(self.stringList, ", ") end if next(self.newArguments) ~= nil then --makes sure the table is not empty first table.remove(self.stringList) --remove the very last element, which will be the extra ", " end end function ScriptWriter:buildNewArgumentVariables() --add the designated arguments to :new() for key, value in ipairs(self.newArguments) do table.insert(self.stringList, twoSpaceIndent .. "instance." .. value .. equalsWithSpaces .. value .. newLine) end end --the instance variables need separate code because their names have to be the key and not the argument name function ScriptWriter:buildSettersAndGetters() for key,value in ipairs(self.newArguments) do self:buildArgumentSetter(value) self:buildArgumentGetter(value) table.insert(self.stringList, newLine) end for key,value in pairs(self.instanceVariables) do self:buildInstanceVariableSetter(key, value) self:buildInstanceVariableGetter(key, value) table.insert(self.stringList, newLine) end end --code for arguments passed in function ScriptWriter:buildArgumentSetter(variable) table.insert(self.stringList, "function " .. self.name .. ":set_" .. variable .. "(newValue)" .. newLine) table.insert(self.stringList, twoSpaceIndent .. "self." .. variable .. equalsWithSpaces .. "newValue" .. newLine) table.insert(self.stringList, "end" .. newLine) end function ScriptWriter:buildArgumentGetter(variable) table.insert(self.stringList, "function " .. self.name .. ":get_" .. variable .. "()" .. newLine) table.insert(self.stringList, twoSpaceIndent .. "return " .. "self." .. variable .. newLine) table.insert(self.stringList, "end" .. newLine) end --code for instance variable values passed in function ScriptWriter:buildInstanceVariableSetter(key, variable) table.insert(self.stringList, "function " .. self.name .. ":set_" .. key .. "(newValue)" .. newLine) table.insert(self.stringList, twoSpaceIndent .. "self." .. key .. equalsWithSpaces .. "newValue" .. newLine) table.insert(self.stringList, "end" .. newLine) end function ScriptWriter:buildInstanceVariableGetter(key, variable) table.insert(self.stringList, "function " .. self.name .. ":get_" .. key .. "()" .. newLine) table.insert(self.stringList, twoSpaceIndent .. "return " .. "self." .. key .. newLine) table.insert(self.stringList, "end" .. newLine) end --last line of any script that will have instances function ScriptWriter:buildReturn() table.insert(self.stringList, "return " .. self.name) end function ScriptWriter:writeStringsToFile() local fileName = (self.name .. 
".lua") file = io.open(fileName, 'w') for key,value in ipairs(self.stringList) do file:write(value) end file:close() end return ScriptWriter And here is what the code provided will generate: local ScriptIAmMaking = {} function ScriptIAmMaking:new(argumentName) local instance = setmetatable({}, { __index = self }) instance.argumentName = argumentName instance.name = 'test' instance.one = 1 return instance end function ScriptIAmMaking:set_argumentName(newValue) self.argumentName = newValue end function ScriptIAmMaking:get_argumentName() return self.argumentName end function ScriptIAmMaking:set_name(newValue) self.name = newValue end function ScriptIAmMaking:get_name() return self.name end function ScriptIAmMaking:set_one(newValue) self.one = newValue end function ScriptIAmMaking:get_one() return self.one end return ScriptIAmMaking All of this is generated with these calls: local scripter = scriptWriter:new(scriptName, {"argumentName"}, {name = "'test'", one = 1}) scripter:makeFileForLoadedSettings() I am not sure if I am correct that this could be useful in certain situations. What I am looking for is feedback on the readability of the code, and following Lua best practices. I would also love to hear whether this approach is a valid one, and whether the way that I have done things will be extensible.

    Read the article

  • Building an OpenStack Cloud for Solaris Engineering, Part 1

    - by Dave Miner
    One of the signature features of the recently-released Solaris 11.2 is the OpenStack cloud computing platform.  Over on the Solaris OpenStack blog the development team is publishing lots of details about our version of OpenStack Havana as well as some tips on specific features, and I highly recommend reading those to get a feel for how we've leveraged Solaris's features to build a top-notch cloud platform.  In this and some subsequent posts I'm going to look at it from a different perspective, which is that of the enterprise administrator deploying an OpenStack cloud.  But this won't be just a theoretical perspective: I've spent the past several months putting together a deployment of OpenStack for use by the Solaris engineering organization, and now that it's in production we'll share how we built it and what we've learned so far.In the Solaris engineering organization we've long had dedicated lab systems dispersed among our various sites and a home-grown reservation tool for developers to reserve those systems; various teams also have private systems for specific testing purposes.  But as a developer, it can still be difficult to find systems you need, especially since most Solaris changes require testing on both SPARC and x86 systems before they can be integrated.  We've added virtual resources over the years as well in the form of LDOMs and zones (both traditional non-global zones and the new kernel zones).  Fundamentally, though, these were all still deployed in the same model: our overworked lab administrators set up pre-configured resources and we then reserve them.  Sounds like pretty much every traditional IT shop, right?  Which means that there's a lot of opportunity for efficiencies from greater use of virtualization and the self-service style of cloud computing.  As we were well into development of OpenStack on Solaris, I was recruited to figure out how we could deploy it to both provide more (and more efficient) development and test resources for the organization as well as a test environment for Solaris OpenStack.At this point, let's acknowledge one fact: deploying OpenStack is hard.  It's a very complex piece of software that makes use of sophisticated networking features and runs as a ton of service daemons with myriad configuration files.  The web UI, Horizon, doesn't often do a good job of providing detailed errors.  Even the command-line clients are not as transparent as you'd like, though at least you can turn on verbose and debug messaging and often get some clues as to what to look for, though it helps if you're good at reading JSON structure dumps.  I'd already learned all of this in doing a single-system Grizzly-on-Linux deployment for the development team to reference when they were getting started so I at least came to this job with some appreciation for what I was taking on.  The good news is that both we and the community have done a lot to make deployment much easier in the last year; probably the easiest approach is to download the OpenStack Unified Archive from OTN to get your hands on a single-system demonstration environment.  I highly recommend getting started with something like it to get some understanding of OpenStack before you embark on a more complex deployment.  For some situations, it may in fact be all you ever need.  If so, you don't need to read the rest of this series of posts!In the Solaris engineering case, we need a lot more horsepower than a single-system cloud can provide.  
We need to support both SPARC and x86 VM's, and we have hundreds of developers so we want to be able to scale to support thousands of VM's, though we're going to build to that scale over time, not immediately.  We also want to be able to test both Solaris 11 updates and a release such as Solaris 12 that's under development so that we can work out any upgrade issues before release.  One thing we don't have is a requirement for extremely high availability, at least at this point.  We surely don't want a lot of down time, but we can tolerate scheduled outages and brief (as in an hour or so) unscheduled ones.  Thus I didn't need to spend effort on trying to get high availability everywhere.The diagram below shows our initial deployment design.  We're using six systems, most of which are x86 because we had more of those immediately available.  All of those systems reside on a management VLAN and are connected with a two-way link aggregation of 1 Gb links (we don't yet have 10 Gb switching infrastructure in place, but we'll get there).  A separate VLAN provides "public" (as in connected to the rest of Oracle's internal network) addresses, while we use VxLANs for the tenant networks. One system is more or less the control node, providing the MySQL database, RabbitMQ, Keystone, and the Nova API and scheduler as well as the Horizon console.  We're curious how this will perform and I anticipate eventually splitting at least the database off to another node to help simplify upgrades, but at our present scale this works.I had a couple of systems with lots of disk space, one of which was already configured as the Automated Installation server for the lab, so it's just providing the Glance image repository for OpenStack.  The other node with lots of disks provides Cinder block storage service; we also have a ZFS Storage Appliance that will help back-end Cinder in the near future, I just haven't had time to get it configured in yet.There's a separate system for Neutron, which is our Elastic Virtual Switch controller and handles the routing and NAT for the guests.  We don't have any need for firewalling in this deployment so we're not doing so.  We presently have only two tenants defined, one for the Solaris organization that's funding this cloud, and a separate tenant for other Oracle organizations that would like to try out OpenStack on Solaris.  Each tenant has one VxLAN defined initially, but we can of course add more.  Right now we have just a single /24 network for the floating IP's, once we get demand up to where we need more then we'll add them.Finally, we have started with just two compute nodes; one is an x86 system, the other is an LDOM on a SPARC T5-2.  We'll be adding more when demand reaches the level where we need them, but as we're still ramping up the user base it's less work to manage fewer nodes until then.My next post will delve into the details of building this OpenStack cloud's infrastructure, including how we're using various Solaris features such as Automated Installation, IPS packaging, SMF, and Puppet to deploy and manage the nodes.  After that we'll get into the specifics of configuring and running OpenStack itself.
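As a concrete illustration of the tenant and network layout described above, here is a minimal sketch using the Havana-era python-keystoneclient and python-neutronclient libraries. It is not taken from the actual deployment: the endpoint URL, credentials, names and the VxLAN segmentation ID are placeholder assumptions, and the Solaris packaging of these clients may differ slightly from upstream.

    # Hedged sketch: one tenant plus one VxLAN network, roughly mirroring the
    # layout described in the post. All names and credentials are placeholders.
    from keystoneclient.v2_0 import client as keystone_client
    from neutronclient.v2_0 import client as neutron_client

    AUTH_URL = 'http://controller:5000/v2.0'      # assumption: control node endpoint

    keystone = keystone_client.Client(username='admin', password='secret',
                                      tenant_name='admin', auth_url=AUTH_URL)

    # A tenant for the Solaris engineering organization
    tenant = keystone.tenants.create(tenant_name='solaris-eng',
                                     description='Solaris engineering cloud users')

    neutron = neutron_client.Client(username='admin', password='secret',
                                    tenant_name='admin', auth_url=AUTH_URL)

    # One VxLAN network for that tenant; provider attributes are admin-only and
    # the segmentation ID here is an arbitrary example value.
    neutron.create_network({'network': {
        'name': 'solaris-eng-net',
        'tenant_id': tenant.id,
        'provider:network_type': 'vxlan',
        'provider:segmentation_id': 100}})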

    Read the article

  • How to open a server port outside of an OpenVPN tunnel with a pf firewall on OSX (BSD)

    - by Timbo
    I have a Mac mini that I use as a media server running XBMC and serves media from my NAS to my stereo and TV (which has been color calibrated with a Spyder3Express, happy). The Mac runs OSX 10.8.2 and the internet connection is tunneled for general privacy over OpenVPN through Tunnelblick. I believe my anonymous VPN provider pushes "redirect_gateway" to OpenVPN/Tunnelblick because when on it effectively tunnels all non-LAN traffic in- and outbound. As an unwanted side effect that also opens the boxes server ports unprotected to the outside world and bypasses my firewall-router (Netgear SRX5308). I have run nmap from outside the LAN on the VPN IP and the server ports on the mini are clearly visible and connectable. The mini has the following ports open: ssh/22, ARD/5900 and 8080+9090 for the XBMC iOS client Constellation. I also have Synology NAS which apart from LAN file serving over AFP and WebDAV only serves up an OpenVPN/1194 and a PPTP/1732 server. When outside of the LAN I connect to this from my laptop over OpenVPN and over PPTP from my iPhone. I only want to connect through AFP/548 from the mini to the NAS. The border firewall (SRX5308) just works excellently, stable and with a very high throughput when streaming from various VOD services. My connection is a 100/10 with a close to theoretical max throughput. The ruleset is as follows Inbound: PPTP/1723 Allow always to 10.0.0.40 (NAS/VPN server) from a restricted IP range >corresponding to possible cell provider range OpenVPN/1194 Allow always to 10.0.0.40 (NAS/VPN server) from any Outbound: Default outbound policy: Allow Always OpenVPN/1194 TCP Allow always from 10.0.0.40 (NAS) to a.b.8.1-a.b.8.254 (VPN provider) OpenVPN/1194 UDP Allow always to 10.0.0.40 (NAS) to a.b.8.1-a.b.8.254 (VPN provider) Block always from NAS to any On the Mini I have disabled the OSX Application Level Firewall because it throws popups which don't remember my choices from one time to another and that's annoying on a media server. Instead I run Little Snitch which controls outgoing connections nicely on an application level. I have configured the excellent OSX builtin firewall pf (from BSD) as follows pf.conf (Apple App firewall tie-ins removed) (# replaced with % to avoid formatting errors) ### macro name for external interface. eth_if = "en0" vpn_if = "tap0" ### wifi_if = "en1" ### %usb_if = "en3" ext_if = $eth_if LAN="{10.0.0.0/24}" ### General housekeeping rules ### ### Drop all blocked packets silently set block-policy drop ### all incoming traffic on external interface is normalized and fragmented ### packets are reassembled. scrub in on $ext_if all fragment reassemble scrub in on $vpn_if all fragment reassemble scrub out all ### exercise antispoofing on the external interface, but add the local ### loopback interface as an exception, to prevent services utilizing the ### local loop from being blocked accidentally. ### set skip on lo0 antispoof for $ext_if inet antispoof for $vpn_if inet ### spoofing protection for all interfaces block in quick from urpf-failed ############################# block all ### Access to the mini server over ssh/22 and remote desktop/5900 from LAN/en0 only pass in on $eth_if proto tcp from $LAN to any port {22, 5900, 8080, 9090} ### Allow all udp and icmp also, necessary for Constellation. Could be tightened. 
pass on $eth_if proto {udp, icmp} from $LAN to any ### Allow AFP to 10.0.0.40 (NAS) pass out on $eth_if proto tcp from any to 10.0.0.40 port 548 ### Allow OpenVPN tunnel setup over unprotected link (en0) only to VPN provider IPs ### and port ranges pass on $eth_if proto tcp from any to a.b.8.0/24 port 1194:1201 ### OpenVPN Tunnel rules. All traffic allowed out, only in to ports 4100-4110 ### Outgoing pings ok pass in on $vpn_if proto {tcp, udp} from any to any port 4100:4110 pass out on $vpn_if proto {tcp, udp, icmp} from any to any So what are my goals and what does the above setup achieve? (until you tell me otherwise :) 1) Full LAN access to the above ports on the mini/media server (including through my own VPN server) 2) All internet traffic from the mini/media server is anonymized and tunneled over VPN 3) If OpenVPN/Tunnelblick on the mini drops the connection, nothing is leaked both because of pf and the router outgoing ruleset. It can't even do a DNS lookup through the router. So what do I have to hide with all this? Nothing much really, I just got carried away trying to stop port scans through the VPN tunnel :) In any case this setup works perfectly and it is very stable. The Problem at last! I want to run a minecraft server and I installed that on a separate user account on the mini server (user=mc) to keep things partitioned. I don't want this server accessible through the anonymized VPN tunnel because there are lots more port scans and hacking attempts through that than over my regular IP and I don't trust java in general. So I added the following pf rule on the mini: ### Allow Minecraft public through user mc pass in on $eth_if proto {tcp,udp} from any to any port 24983 user mc pass out on $eth_if proto {tcp, udp} from any to any user mc And these additions on the border firewall: Inbound: Allow always TCP/UDP from any to 10.0.0.40 (NAS) Outbound: Allow always TCP port 80 from 10.0.0.40 to any (needed for online account checkups) This works fine but only when the OpenVPN/Tunnelblick tunnel is down. When it's up, no connection is possible to the minecraft server from outside of the LAN; inside the LAN it's always OK. Everything else functions as intended. I believe the redirect_gateway push is close to the root of the problem, but I want to keep that specific VPN provider because of the fantastic throughput, price and service. The Solution? How can I open up the minecraft server port outside of the tunnel so it's only available over en0 and not the VPN tunnel? Should I add a static route? But I don't know which IPs will be connecting... stumbles How secure would you estimate this setup to be, and do you have other improvements to share? I've searched extensively in the last few days to no avail... If you've read this far I bet you know the answer :)
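Not a verified fix, but one technique pf offers for exactly this asymmetric-routing situation is the reply-to option: it pins the reply traffic of a stateful rule to a given interface and gateway, so connections arriving on en0 are answered out en0 even while the VPN owns the default route. A sketch, reusing the poster's macros; 10.0.0.1 is an assumed LAN gateway and must be replaced with the real one, and the rule would take the place of the existing Minecraft pass in rule:

    ### Sketch only: answer Minecraft connections back out en0, bypassing the
    ### VPN default route. 10.0.0.1 is an assumed LAN gateway address.
    pass in on $eth_if reply-to ($eth_if 10.0.0.1) proto {tcp, udp} \
        from any to any port 24983 user mc

Because reply-to acts on the state entry rather than on the routing table, no static routes to unknown client addresses are needed.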

    Read the article

  • extensible database design: automatic ALTER TABLE or serialize() field BLOB ?

    - by mario
    I want an adaptable database scheme. But still use a simple table data gateway in my application, where I just pass an $data[] array for storing. The basic columns are settled in the initial table scheme. There will however arise a couple of meta fields later (ca 10-20). I want some flexibility there and not adapt the database manually each time, or -worse- change the application logic just because of new fields. So now there are two options which seem workable yet not overkill. But I'm not sure about the scalability or database drawbacks. (1) Automatic ALTER TABLE. Whenever the $data array is to be saved, the keys are compared against the current database columns. New columns get defined before the $data is INSERTed into the table. Actually seems simple enough in test code: function save($data, $table="forum") { // columns if ($new_fields = array_diff(array_keys($data), known_fields($table))) { extend_schema($table, $new_fields, $data); } // save $columns = implode("`, `", array_keys($data)); $qm = str_repeat(",?", count(array_keys($data)) - 1); echo ("INSERT INTO `$table` (`$columns`) VALUES (?$qm);"); function known_fields($table) { return unserialize(@file_get_contents("db:$table")) ?: array("id"); function extend_schema($table, $new_fields, $data) { foreach ($new_fields as $field) { echo("ALTER TABLE `$table` ADD COLUMN `$field` VARCHAR;"); Since it is mostly meta information fields, adding them just as VARCHAR seems sufficient. Nobody will query by them anyway. So the database is really just used as storage here. However, while I might want to add a lot of new $data fields on the go, they will not always be populated. (2) serialize() fields into BLOB. Any new/extraneous meta fields could be opaque to the database. Simply sorting out the virtual fields from the real database columns is simple. And the meta fields can just be serialize()d into a blob/text field then: function ext_save($data, $table="forum") { $db_fields = array("id", "content", "flags", "ext"); // disjoin foreach (array_diff(array_keys($data),$db_fields) as $key) { $data["ext"][$key] = $data[$key]; unset($data[$key]); } $data["ext"] = serialize($data["ext"]); Unserializing and unpacking this 'ext' column on read queries is a minor overhead. The advantage is that there won't be any sparsely filled columns in the database, so I guess it's compacter and faster than the AUTO ALTER TABLE approach. Of course, this method prevents ever using one of the new fields in a WHERE or GROUP BY clause. But I think none of the possible meta fields (user_agent, author_ip, author_img, votes, hits, last_modified, ..) would/should ever be used there anyway. So I currently prefer the 'ext' blob approach, even if it's a one-way ticket. How are such columns called usually? (looking for examples/doc) Would you use XML serialization for (very theoretical) in-database queries? The adapting table scheme seems a "cleaner" interface, even if most columns might remain empty then. How does that impact speed? How many such sparse VARCHAR fields can MySQL/innodb stomach? But most importantly: Is there any standard implementation for this? A pseudo ORM with automatic ALTER TABLE tricks? Storing a simple column list seems workable, but something like pdo::getColumnMeta would be more robust.
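For what it's worth, the 'ext' blob pattern in the question is often described as a serialized LOB. Below is a small, runnable sketch of the same disjoin-and-serialize idea, written in Python with sqlite3 and JSON rather than PHP and serialize(), purely to illustrate the shape of the approach; the table and column names mirror the question and the code is a toy under those assumptions, not a drop-in gateway.

    # Toy sketch of the "ext" blob approach: known columns go into real fields,
    # everything else is packed into a JSON text column and unpacked on read.
    import json
    import sqlite3

    DB_FIELDS = {"id", "content", "flags"}        # the fixed, real columns

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE forum (id INTEGER PRIMARY KEY, content TEXT, flags TEXT, ext TEXT)")

    def ext_save(data, table="forum"):
        ext = {k: v for k, v in data.items() if k not in DB_FIELDS}   # disjoin
        row = {k: v for k, v in data.items() if k in DB_FIELDS}
        row["ext"] = json.dumps(ext)
        cols = ", ".join(row)
        marks = ", ".join("?" * len(row))
        conn.execute("INSERT INTO %s (%s) VALUES (%s)" % (table, cols, marks),
                     list(row.values()))

    def ext_load(row_id, table="forum"):
        cur = conn.execute("SELECT id, content, flags, ext FROM %s WHERE id = ?" % table,
                           (row_id,))
        rec = dict(zip(("id", "content", "flags", "ext"), cur.fetchone()))
        rec.update(json.loads(rec.pop("ext")))    # meta fields reappear as plain keys
        return rec

    ext_save({"id": 1, "content": "hi", "flags": "new", "user_agent": "curl", "votes": 3})
    print(ext_load(1))  # {'id': 1, 'content': 'hi', 'flags': 'new', 'user_agent': 'curl', 'votes': 3}

The trade-off stays the same as in the question: the serialized keys are invisible to WHERE and GROUP BY, so this only works as long as the meta fields really are write-and-display data.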

    Read the article

  • Calculate the number of ways to roll a certain number

    - by helloworld
    I'm a high school Computer Science student, and today I was given a problem to: Program Description: There is a belief among dice players that in throwing three dice a ten is easier to get than a nine. Can you write a program that proves or disproves this belief? Have the computer compute all the possible ways three dice can be thrown: 1 + 1 + 1, 1 + 1 + 2, 1 + 1 + 3, etc. Add up each of these possibilities and see how many give nine as the result and how many give ten. If more give ten, then the belief is proven. I quickly worked out a brute force solution, as such int sum,tens,nines; tens=nines=0; for(int i=1;i<=6;i++){ for(int j=1;j<=6;j++){ for(int k=1;k<=6;k++){ sum=i+j+k; //Ternary operators are fun! tens+=((sum==10)?1:0); nines+=((sum==9)?1:0); } } } System.out.println("There are "+tens+" ways to roll a 10"); System.out.println("There are "+nines+" ways to roll a 9"); Which works just fine, and a brute force solution is what the teacher wanted us to do. However, it doesn't scale, and I am trying to find a way to make an algorithm that can calculate the number of ways to roll n dice to get a specific number. Therefore, I started generating the number of ways to get each sum with n dice. With 1 die, there is obviously 1 solution for each. I then calculated, through brute force, the combinations with 2 and 3 dice. These are for two: There are 1 ways to roll a 2 There are 2 ways to roll a 3 There are 3 ways to roll a 4 There are 4 ways to roll a 5 There are 5 ways to roll a 6 There are 6 ways to roll a 7 There are 5 ways to roll a 8 There are 4 ways to roll a 9 There are 3 ways to roll a 10 There are 2 ways to roll a 11 There are 1 ways to roll a 12 Which looks straightforward enough; it can be calculated with a simple linear absolute value function. But then things start getting trickier. With 3: There are 1 ways to roll a 3 There are 3 ways to roll a 4 There are 6 ways to roll a 5 There are 10 ways to roll a 6 There are 15 ways to roll a 7 There are 21 ways to roll a 8 There are 25 ways to roll a 9 There are 27 ways to roll a 10 There are 27 ways to roll a 11 There are 25 ways to roll a 12 There are 21 ways to roll a 13 There are 15 ways to roll a 14 There are 10 ways to roll a 15 There are 6 ways to roll a 16 There are 3 ways to roll a 17 There are 1 ways to roll a 18 So I look at that, and I think: Cool, Triangular numbers! However, then I notice those pesky 25s and 27s. So it's obviously not triangular numbers, but still some polynomial expansion, since it's symmetric. So I take to Google, and I come across this page that goes into some detail about how to do this with math. It is fairly easy(albeit long) to find this using repeated derivatives or expansion, but it would be much harder to program that for me. I didn't quite understand the second and third answers, since I have never encountered that notation or those concepts in my math studies before. Could someone please explain how I could write a program to do this, or explain the solutions given on that page, for my own understanding of combinatorics? EDIT: I'm looking for a mathematical way to solve this, that gives an exact theoretical number, not by simulating dice
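Since the question explicitly asks for an exact, non-simulated count, here is a short sketch of the two standard routes: building the distribution for n dice by repeated convolution (simple dynamic programming), and the inclusion-exclusion closed form N(s, n) = sum over k of (-1)^k * C(n, k) * C(s - 6k - 1, n - 1). The code is a Python illustration of the math rather than a rewrite of the poster's Java.

    # Count ordered rolls of n six-sided dice summing to s, without brute force.
    from math import comb   # Python 3.8+

    def roll_counts(n_dice, sides=6):
        counts = {0: 1}                        # counts[s] = ways to reach sum s so far
        for _ in range(n_dice):
            nxt = {}
            for s, c in counts.items():
                for face in range(1, sides + 1):
                    nxt[s + face] = nxt.get(s + face, 0) + c
            counts = nxt
        return counts

    def closed_form(s, n, sides=6):
        return sum((-1) ** k * comb(n, k) * comb(s - sides * k - 1, n - 1)
                   for k in range((s - n) // sides + 1))

    three = roll_counts(3)
    print(three[10], three[9])                     # 27 25, matching the brute force
    print(closed_form(10, 3), closed_form(9, 3))   # 27 25

Both agree with the brute-force figures above, and either scales comfortably to far more dice than three nested loops would.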

    Read the article

  • 8 Reasons Why Even Microsoft Agrees the Windows Desktop is a Nightmare

    - by Chris Hoffman
    Let’s be honest: The Windows desktop is a mess. Sure, it’s extremely powerful and has a huge software library, but it’s not a good experience for average people. It’s not even a good experience for geeks, although we tolerate it. Even Microsoft agrees about this. Microsoft’s Surface tablets with Windows RT don’t support any third-party desktop apps. They consider this a feature — users can’t install malware and other desktop junk, so the system will always be speedy and secure. Malware is Still Common Malware may not affect geeks, but it certainly continues to affect average people. Securing Windows, keeping it secure, and avoiding unsafe programs is a complex process. There are over 50 different file extensions that can contain harmful code to keep track of. It’s easy to have theoretical discussions about how malware could infect Mac computers, Android devices, and other systems. But Mac malware is extremely rare, and has  generally been caused by problem with the terrible Java plug-in. Macs are configured to only run executables from identified developers by default, whereas Windows will run everything. Android malware is talked about a lot, but Android malware is rare in the real world and is generally confined to users who disable security protections and install pirated apps. Google has also taken action, rolling out built-in antivirus-like app checking to all Android devices, even old ones running Android 2.3, via Play Services. Whatever the reason, Windows malware is still common while malware for other systems isn’t. We all know it — anyone who does tech support for average users has dealt with infected Windows computers. Even users who can avoid malware are stuck dealing with complex and nagging antivirus programs, especially since it’s now so difficult to trust Microsoft’s antivirus products. Manufacturer-Installed Bloatware is Terrible Sit down with a new Mac, Chromebook, iPad, Android tablet, Linux laptop, or even a Surface running Windows RT and you can enjoy using your new device. The system is a clean slate for you to start exploring and installing your new software. Sit down with a new Windows PC and the system is a mess. Rather than be delighted, you’re stuck reinstalling Windows and then installing the necessary drivers or you’re forced to start uninstalling useless bloatware programs one-by-one, trying to figure out which ones are actually useful. After uninstalling the useless programs, you may end up with a system tray full of icons for ten different hardware utilities anyway. The first experience of using a new Windows PC is frustration, not delight. Yes, bloatware is still a problem on Windows 8 PCs. Manufacturers can customize the Refresh image, preventing bloatware rom easily being removed. Finding a Desktop Program is Dangerous Want to install a Windows desktop program? Well, you’ll have to head to your web browser and start searching. It’s up to you, the user, to know which programs are safe and which are dangerous. Even if you find a website for a reputable program, the advertisements on that page will often try to trick you into downloading fake installers full of adware. While it’s great to have the ability to leave the app store and get software that the platform’s owner hasn’t approved — as on Android — this is no excuse for not providing a good, secure software installation experience for typical users installing typical programs. 
Even Reputable Desktop Programs Try to Install Junk Even if you do find an entirely reputable program, you’ll have to keep your eyes open while installing it. It will likely try to install adware, add browser toolbars, change your default search engine, or change your web browser’s home page. Even Microsoft’s own programs do this — when you install Skype for Windows desktop, it will attempt to modify your browser settings to use Bing, even if you’ve specifically chosen another search engine and home page. With Microsoft setting such an example, it’s no surprise so many other software developers have followed suit. Geeks know how to avoid this stuff, but there’s a reason program installers continue to do this. It works and tricks many users, who end up with junk installed and settings changed. The Update Process is Confusing On iOS, Android, and Windows RT, software updates come from a single place — the app store. On Linux, software updates come from the package manager. On Mac OS X, typical users’ software updates likely come from the Mac App Store. On the Windows desktop, software updates come from… well, every program has to create its own update mechanism. Users have to keep track of all these updaters and make sure their software is up-to-date. Most programs now have their act together and automatically update by default, but users who have old versions of Flash and Adobe Reader installed are vulnerable until they realize their software isn’t automatically updating. Even if every program updates properly, the sheer mess of updaters is clunky, slow, and confusing in comparison to a centralized update process. Browser Plugins Open Security Holes It’s no surprise that other modern platforms like iOS, Android, Chrome OS, Windows RT, and Windows Phone don’t allow traditional browser plugins, or only allow Flash and build it into the system. Browser plugins provide a wealth of different ways for malicious web pages to exploit the browser and open the system to attack. Browser plugins are one of the most popular attack vectors because of how many users have out-of-date plugins and how many plugins, especially Java, seem to be designed without taking security seriously. Oracle’s Java plugin even tries to install the terrible Ask toolbar when installing security updates. That’s right — the security update process is also used to cram additional adware into users’ machines so unscrupulous companies like Oracle can make a quick buck. It’s no wonder that most Windows PCs have an out-of-date, vulnerable version of Java installed. Battery Life is Terrible Windows PCs have bad battery life compared to Macs, iOS devices, and Android tablets, all of which Windows now competes with. Even Microsoft’s own Surface Pro 2 has bad battery life. Apple’s 11-inch MacBook Air, which has very similar hardware to the Surface Pro 2, offers double its battery life when web browsing. Microsoft has been fond of blaming third-party hardware manufacturers for their poorly optimized drivers in the past, but there’s no longer any room to hide. The problem is clearly Windows. Why is this? No one really knows for sure. Perhaps Microsoft has kept on piling Windows component on top of Windows component and many older Windows components were never properly optimized. Windows Users Become Stuck on Old Windows Versions Apple’s new OS X 10.9 Mavericks upgrade is completely free to all Mac users and supports Macs going back to 2007. Apple has also announced their intention that all new releases of Mac OS X will be free. 
In 2007, Microsoft had just shipped Windows Vista. Macs from the Windows Vista era are being upgraded to the latest version of the Mac operating system for free, while Windows PCs from the same era are probably still using Windows Vista. There’s no easy upgrade path for these people. They’re stuck using Windows Vista and maybe even the outdated Internet Explorer 9 if they haven’t installed a third-party web browser. Microsoft’s upgrade path is for these people to pay $120 for a full copy of Windows 8.1 and go through a complicated process that’s actually a clean install. Even users of Windows 8 devices will probably have to pay money to upgrade to Windows 9, while updates for other operating systems are completely free. If you’re a PC geek, a PC gamer, or someone who just requires specialized software that only runs on Windows, you probably use the Windows desktop and don’t want to switch. That’s fine, but it doesn’t mean the Windows desktop is actually a good experience. Much of the burden falls on average users, who have to struggle with malware, bloatware, adware bundled in installers, complex software installation processes, and out-of-date software. In return, all they get is the ability to use a web browser and some basic Office apps that they could use on almost any other platform without all the hassle. Microsoft would agree with this, touting Windows RT and their new “Windows 8-style” app platform as the solution. Why else would Microsoft, a “devices and services” company, position the Surface — a device without traditional Windows desktop programs — as their mass-market device recommended for average people? This isn’t necessarily an endorsement of Windows RT. If you’re tech support for your family members and it comes time for them to upgrade, you may want to get them off the Windows desktop and tell them to get a Mac or something else that’s simple. Better yet, if they get a Mac, you can tell them to visit the Apple Store for help instead of calling you. That’s another thing Windows PCs don’t offer — good manufacturer support. Image Credit: Blanca Stella Mejia on Flickr, Collin Andserson on Flickr, Luca Conti on Flickr

    Read the article

  • Another Marketing Conference, part one – the best morning sessions.

    - by Roger Hart
    Yesterday I went to Another Marketing Conference. I honestly can’t tell if the title is just tipping over into smug, but in the balance of things that doesn’t matter, because it was a good conference. There was an enjoyable blend of theoretical and practical, and enough inter-disciplinary spread to keep my inner dilettante grinning from ear to ear. Sure, there was a bumpy bit in the middle, with two back-to-back sales pitches and a rather thin overview of the state of the web. But the signal:noise ratio at AMC2012 was impressively high. Here’s the first part of my write-up of the sessions. It’s a bit of a mammoth. It’s also a bit of a mash-up of what was said and what I thought about it. I’ll add links to the videos and slides from the sessions as they become available. Although it was in the morning session, I’ve not included Vanessa Northam’s session on the power of internal comms to build brand ambassadors. It’ll be in the next roundup, as this is already pushing 2.5k words. First, the important stuff. I was keeping a tally, and nobody said “synergy” or “leverage”. I did, however, hear the term “marketeers” six times. Shame on you – you know who you are. 1 – Branding in a post-digital world, Graham Hales This initially looked like being a sales presentation for Interbrand, but Graham pulled it out of the bag a few minutes in. He introduced a model for brand management that was essentially Plan >> Do >> Check >> Act, with Do and Check rolled up together, and went on to stress that this looks like on overall business management model for a reason. Brand has to be part of your overall business strategy and metrics if you’re going to care about it at all. This was the first iteration of what proved to be one of the event’s emergent themes: do it throughout the stack or don’t bother. Graham went on to remind us that brands, in so far as they are owned at all, are owned by and co-created with our customers. Advertising can offer a message to customers, but they provide the expression of a brand. This was a preface to talking about an increasingly chaotic marketplace, with increasingly hard-to-manage purchase processes. Services like Amazon reviews and TripAdvisor (four presenters would make this point) saturate customers with information, and give them a kind of vigilante power to comment on and define brands. Consequentially, they experience a number of “moments of deflection” in our sales funnels. Our control is lessened, and failure to engage can negatively-impact buying decisions increasingly poorly. The clearest example given was the failure of NatWest’s “caring bank” campaign, where staff in branches, customer support, and online presences didn’t align. A discontinuity of experience basically made the campaign worthless, and disgruntled customers talked about it loudly on social media. This in turn presented an opportunity to engage and show caring, but that wasn’t taken. What I took away was that brand (co)creation is ongoing and needs monitoring and metrics. But reciprocally, given you get what you measure, strategy and metrics must include brand if any kind of branding is to work at all. Campaigns and messages must permeate product and service design. What that doesn’t mean (and Graham didn’t say it did) is putting Marketing at the top of the pyramid, and having them bawl demands at Product Management, Support, and Development like an entitled toddler. It’s going to have to be collaborative, and session 6 on internal comms handled this really well. 
The main thing missing here was substantiating data, and the main question I found myself chewing on was: if we’re building brands collaboratively and in the open, what about the cultural politics of trolling? 2 – Challenging our core beliefs about human behaviour, Mark Earls This was definitely the best show of the day. It was also some of the best content. Mark talked us through nudging, behavioural economics, and some key misconceptions around decision making. Basically, people aren’t rational, they’re petty, reactive, emotional sacks of meat, and they’ll go where they’re led. Comforting stuff. Examples given were the spread of the London Riots and the “discovery” of the mountains of Kong, and the popularity of Susan Boyle, which, in turn made me think about Per Mollerup’s concept of “social wayshowing”. Mark boiled his thoughts down into four key points which I completely failed to write down word for word: People do, then think – Changing minds to change behaviour doesn’t work. Post-rationalization rules the day. See also: mere exposure effects. Spock < Kirk - Emotional/intuitive comes first, then we rationalize impulses. The non-thinking, emotive, reactive processes run much faster than the deliberative ones. People are not really rational decision makers, so  intervening with information may not be appropriate. Maximisers or satisficers? – Related to the last point. People do not consistently, rationally, maximise. When faced with an abundance of choice, they prefer to satisfice than evaluate, and will often follow social leads rather than think. Things tend to converge – Behaviour trends to a consensus normal. When faced with choices people overwhelmingly just do what they see others doing. Humans are extraordinarily good at mirroring behaviours and receiving influence. People “outsource the cognitive load” of choices to the crowd. Mark’s headline quote was probably “the real influence happens at the table next to you”. Reference examples, word of mouth, and social influence are tremendously important, and so talking about product experiences may be more important than talking about products. This reminded me of Kathy Sierra’s “creating bad-ass users” concept of designing to make people more awesome rather than products they like. If we can expose user-awesome, and make sharing easy, we can normalise the behaviours we want. If we normalize the behaviours we want, people should make and post-rationalize the buying decisions we want.  Where we need to be: “A bigger boy made me do it” Where we are: “a wizard did it and ran away” However, it’s worth bearing in mind that some purchasing decisions are personal and informed rather than social and reactive. There’s a quadrant diagram, in fact. What was really interesting, though, towards the end of the talk, was some advice for working out how social your products might be. The standard technology adoption lifecycle graph is essentially about social product diffusion. So this idea isn’t really new. Geoffrey Moore’s “chasm” idea may not strictly apply. However, his concepts of beachheads and reference segments are exactly what is required to normalize and thus enable purchase decisions (behaviour change). The final thing is that in only very few categories does a better product actually affect purchase decision. Where the choice is personal and informed, this is true. But where it’s personal and impulsive, or in any way social, “better” is trumped by popularity, endorsement, or “point of sale salience”. 
UX, UCD, and e-commerce know this to be true. A better (and easier) experience will always beat “more features”. Easy to use, and easy to observe being used will beat “what the user says they want”. This made me think about the astounding stickiness of rational fallacies, “common sense” and the pathological willful simplifications of the media. Rational fallacies seem like they’re basically the heuristics we use for post-rationalization. If I were profoundly grimy and cynical, I’d suggest deploying a boat-load in our messaging, to see if they’re really as sticky and appealing as they look. 4 – Changing behaviour through communication, Stephen Donajgrodzki This was a fantastic follow up to Mark’s session. Stephen basically talked us through some tactics used in public information/health comms that implement the kind of behavioural theory Mark introduced. The session was largely about how to get people to do (good) things they’re predisposed not to do, and how communication can (and can’t) make positive interventions. A couple of things stood out, in particular “implementation intentions” and how they can be linked to goals. For example, in order to get people to check and test their smoke alarms (a goal intention, rarely actualized  an information campaign will attempt to link this activity to the clocks going back or forward (a strong implementation intention, well-actualized). The talk reinforced the idea that making behaviour changes easy and visible normalizes them and makes them more likely to succeed. To do this, they have to be embodied throughout a product and service cycle. Experiential disconnects undermine the normalization. So campaigns, products, and customer interactions must be aligned. This is underscored by the second section of the presentation, which talked about interventions and pre-conditions for change. Taking the examples of drug addiction and stopping smoking, Stephen showed us a framework for attempting (and succeeding or failing in) behaviour change. He noted that when the change is something people fundamentally want to do, and that is easy, this gets a to simpler. Coordinated, easily-observed environmental pressures create preconditions for change and build motivation. (price, pub smoking ban, ad campaigns, friend quitting, declining social acceptability) A triggering even leads to a change attempt. (getting a cold and panicking about how bad the cough is) Interventions can be made to enable an attempt (NHS services, public information, nicotine patches) If it succeeds – yay. If it fails, there’s strong negative enforcement. Triggering events seem largely personal, but messaging can intervene in the creation of preconditions and in supporting decisions. Stephen talked more about systems of thinking and “bounded rationality”. The idea being that to enable change you need to break through “automatic” thinking into “reflective” thinking. Disruption and emotion are great tools for this, but that is only the start of the process. It occurs to me that a great deal of market research is focused on determining triggers rather than analysing necessary preconditions. Although they are presumably related. The final section talked about setting goals. Marketing goals are often seen as deriving directly from business goals. However, marketing may be unable to deliver on these directly where decision and behaviour-change processes are involved. In those cases, marketing and communication goals should be to create preconditions. They should also consider priming and norms. 
Content marketing and brand awareness are good first steps here, as brands can be heuristics in decision making for choice-saturated consumers, or those seeking education. 5 – The power of engaged communities and how to build them, Harriet Minter (the Guardian) The meat of this was that you need to let communities define and establish themselves, and be quick to react to their needs. Harriet had been in charge of building the Guardian’s community sites, and learned a lot about how they come together, stabilize  grow, and react. Crucially, they can’t be about sales or push messaging. A community is not just an audience. It’s essential to start with what this particular segment or tribe are interested in, then what they want to hear. Eventually you can consider – in light of this – what they might want to buy, but you can’t start with the product. A community won’t cohere around one you’re pushing. Her tips for community building were (again, sorry, not verbatim): Set goals Have some targets. Community building sounds vague and fluffy, but you can have (and adjust) concrete goals. Think like a start-up This is the “lean” stuff. Try things, fail quickly, respond. Don’t restrict platforms Let the audience choose them, and be aware of their differences. For example, LinkedIn is very different to Twitter. Track your stats Related to the first point. Keeping an eye on the numbers lets you respond. They should be qualified, however. If you want a community of enterprise decision makers, headcount alone may be a bad metric – have you got CIOs, or just people who want to get jobs by mingling with CIOs? Build brand advocates Do things to involve people and make them awesome, and they’ll cheer-lead for you. The last part really got my attention. Little bits of drive-by kindness go a long way. But more than that, genuinely helping people turns them into powerful advocates. Harriet gave an example of the Guardian engaging with an aspiring journalist on its Q&A forums. Through a series of serendipitous encounters he became a BBC producer, and now enthusiastically speaks up for the Guardian community sites. Cultivating many small, authentic, influential voices may have a better pay-off than schmoozing the big guys. This could be particularly important in the context of Mark and Stephen’s models of social, endorsement-led, and example-led decision making. There’s a lot here I haven’t covered, and it may be worth some follow-up on community building. Thoughts I was quite sceptical of nudge theory and behavioural economics. First off it sounds too good to be true, and second it sounds too sinister to permit. But I haven’t done the background reading. So I’m going to, and if it seems to hold real water, and if it’s possible to do it ethically (Stephen’s presentations suggests it may be) then it’s probably worth exploring. The message seemed to be: change what people do, and they’ll work out why afterwards. Moreover, the people around them will do it too. Make the things you want them to do extraordinarily easy and very, very visible. Normalize and support the decisions you want them to make, and they’ll make them. In practice this means not talking about the thing, but showing the user-awesome. Glib? Perhaps. But it feels worth considering. Also, if I ever run a marketing conference, I’m going to ban speakers from using examples from Apple. Quite apart from not being consistently generalizable, it’s becoming an irritating cliché.

    Read the article

  • Pain Comes Instantly

    - by user701213
    When I look back at recent blog entries – many of which are not all that current (more on where my available writing time is going later) – I am struck by how many of them focus on public policy or legislative issues instead of, say, the latest nefarious cyberattack or exploit (or everyone’s favorite new pastime: coining terms for the Coming Cyberpocalypse: “digital Pearl Harbor” is so 1941). Speaking of which, I personally hope evil hackers from Malefactoria will someday hack into my bathroom scale – which in a future time will be connected to the Internet because, gosh, wouldn’t it be great to have absolutely everything in your life Internet-enabled? – and recalibrate it so I’m 10 pounds thinner. The horror. In part, my focus on public policy is due to an admitted limitation of my skill set. I enjoy reading technical articles about exploits and cybersecurity trends, but writing a blog entry on those topics would take more research than I have time for and, quite honestly, doesn’t play to my strengths. The first rule of writing is “write what you know.” The bigger contributing factor to my recent paucity of blog entries is that more and more of my waking hours are spent engaging in “thrust and parry” activity involving emerging regulations of some sort or other. I’ve opined in earlier blogs about what constitutes good and reasonable public policy so nobody can accuse me of being reflexively anti-regulation. That said, you have so many cycles in the day, and most of us would rather spend it slaying actual dragons than participating in focus groups on whether dragons are really a problem, whether lassoing them (with organic, sustainable and recyclable lassos) is preferable to slaying them – after all, dragons are people, too - and whether we need lasso compliance auditors to make sure lassos are being used correctly and humanely. (A point that seems to evade many rule makers: slaying dragons actually accomplishes something, whereas talking about “approved dragon slaying procedures and requirements” wastes the time of those who are competent to dispatch actual dragons and who were doing so very well without the input of “dragon-slaying theorists.”) Unfortunately for so many of us who would just get on with doing our day jobs, cybersecurity is rapidly devolving into the “focus groups on dragon dispatching” realm, which actual dragons slayers have little choice but to participate in. The general trend in cybersecurity is that powers-that-be – which encompasses groups other than just legislators – are often increasingly concerned and therefore feel they need to Do Something About Cybersecurity. Many seem to believe that if only we had the right amount of regulation and oversight, there would be no data breaches: a breach simply must mean Someone Is At Fault and Needs Supervision. (Leaving aside the fact that we have lots of home invasions despite a) guard dogs b) liberal carry permits c) alarm systems d) etc.) Also note that many well-managed and security-aware organizations, like the US Department of Defense, still get hacked. More specifically, many powers-that-be feel they must direct industry in a multiplicity of ways, up to and including how we actually build and deploy information technology systems. The more prescriptive the requirement, the more regulators or overseers a) can be seen to be doing something b) feel as if they are doing something regardless of whether they are actually doing something useful or cost effective. 
Note: an unfortunate concomitant of Doing Something is that often the cure is worse than the ailment. That is, doing what overseers want creates unfortunate byproducts that they either didn’t foresee or worse, don’t care about. After all, the logic goes, we Did Something. Prescriptive practice in the IT industry is problematic for a number of reasons. For a start, prescriptive guidance is really only appropriate if: • It is cost effective• It is “current” (meaning, the guidance doesn’t require the use of the technical equivalent of buggy whips long after horse-drawn transportation has become passé)*• It is practical (that is, pragmatic, proven and effective in the real world, not theoretical and unproven)• It solves the right problem With the above in mind, heading up the list of “you must be joking” regulations are recent disturbing developments in the Payment Card Industry (PCI) world. I’d like to give PCI kahunas the benefit of the doubt about their intentions, except that efforts by Oracle among others to make them aware of “unfortunate side effects of your requirements” – which is as tactful I can be for reasons that I believe will become obvious below - have gone, to-date, unanswered and more importantly, unchanged. A little background on PCI before I get too wound up. In 2008, the Payment Card Industry (PCI) Security Standards Council (SSC) introduced the Payment Application Data Security Standard (PA-DSS). That standard requires vendors of payment applications to ensure that their products implement specific requirements and undergo security assessment procedures. In order to have an application listed as a Validated Payment Application (VPA) and available for use by merchants, software vendors are required to execute the PCI Payment Application Vendor Release Agreement (VRA). (Are you still with me through all the acronyms?) Beginning in August 2010, the VRA imposed new obligations on vendors that are extraordinary and extraordinarily bad, short-sighted and unworkable. Specifically, PCI requires vendors to disclose (dare we say “tell all?”) to PCI any known security vulnerabilities and associated security breaches involving VPAs. ASAP. Think about the impact of that. PCI is asking a vendor to disclose to them: • Specific details of security vulnerabilities • Including exploit information or technical details of the vulnerability • Whether or not there is any mitigation available (as in a patch) PCI, in turn, has the right to blab about any and all of the above – specifically, to distribute all the gory details of what is disclosed - to the PCI SSC, qualified security assessors (QSAs), and any affiliate or agent or adviser of those entities, who are in turn permitted to share it with their respective affiliates, agents, employees, contractors, merchants, processors, service providers and other business partners. This assorted crew can’t be more than, oh, hundreds of thousands of entities. Does anybody believe that several hundred thousand people can keep a secret? Or that several hundred thousand people are all equally trustworthy? Or that not one of the people getting all that information would blab vulnerability details to a bad guy, even by accident? Or be a bad guy who uses the information to break into systems? (Wait, was that the Easter Bunny that just hopped by? Bringing world peace, no doubt.) Sarcasm aside, common sense tells us that telling lots of people a secret is guaranteed to “unsecret” the secret. 
Notably, being provided details of a vulnerability (without a patch) is of little or no use to companies running the affected application. Few users have the technological sophistication to create a workaround, and even if they do, most workarounds break some other functionality in the application or surrounding environment. Also, given the differences among corporate implementations of any application, it is highly unlikely that a single workaround is going to work for all corporate users. So until a patch is developed by the vendor, users remain at risk of exploit: even more so if the details of vulnerability have been widely shared. Sharing that information widely before a patch is available therefore does not help users, and instead helps only those wanting to exploit known security bugs. There’s a shocker for you. Furthermore, we already know that insider information about security vulnerabilities inevitably leaks, which is why most vendors closely hold such information and limit dissemination until a patch is available (and frequently limit dissemination of technical details even with the release of a patch). That’s the industry norm, not that PCI seems to realize or acknowledge that. Why would anybody release a bunch of highly technical exploit information to a cast of thousands, whose only “vetting” is that they are members of a PCI consortium? Oracle has had personal experience with this problem, which is one reason why information on security vulnerabilities at Oracle is “need to know” (we use our own row level access control to limit access to security bugs in our bug database, and thus less than 1% of development has access to this information), and we don’t provide some customers with more information than others or with vulnerability information and/or patches earlier than others. Failure to remember “insider information always leaks” creates problems in the general case, and has created problems for us specifically. A number of years ago, one of the UK intelligence agencies had information about a non-public security vulnerability in an Oracle product that they circulated among other UK and Commonwealth defense and intelligence entities. Nobody, it should be pointed out, bothered to report the problem to Oracle, even though only Oracle could produce a patch. The vulnerability was finally reported to Oracle by (drum roll) a US-based commercial company, to whom the information had leaked. (Note: every time I tell this story, the MI-whatever agency that created the problem gets a bit shirty with us. I know they meant well and have improved their vulnerability handling/sharing processes but, dudes, next time you find an Oracle vulnerability, try reporting it to us first before blabbing to lots of people who can’t actually fix the problem. Thank you!) Getting back to PCI: clearly, these new disclosure obligations increase the risk of exploitation of a vulnerability in a VPA and thus, of misappropriation of payment card data and customer information that a VPA processes, stores or transmits. It stands to reason that VRA’s current requirement for the widespread distribution of security vulnerability exploit details -- at any time, but particularly before a vendor can issue a patch or a workaround -- is very poor public policy. It effectively publicizes information of great value to potential attackers while not providing compensating benefits - actually, any benefits - to payment card merchants or consumers. In fact, it magnifies the risk to payment card merchants and consumers. 
The risk is most prominent in the time before a patch has been released, since customers often have little option but to continue using an application or system despite the risks. However, the risk is not limited to the time before a patch is issued: customers often need days, or weeks, to apply patches to systems, based upon the complexity of the issue and dependence on surrounding programs. Rather than decreasing the available window of exploit, this requirement increases the available window of exploit, both as to time available to exploit a vulnerability and the ease with which it can be exploited. Also, why would hackers focus on finding new vulnerabilities to exploit if they can get “EZHack” handed to them in such a manner: a) a vulnerability b) in a payment application c) with exploit code: the “Hacking Trifecta!“ It’s fair to say that this is probably the exact opposite of what PCI – or any of us – would want. Established industry practice concerning vulnerability handling avoids the risks created by the VRA’s vulnerability disclosure requirements. Specifically, the norm is not to release information about a security bug until the associated patch (or a pretty darn good workaround) has been issued. Once a patch is available, the notice to the user community is a high-level communication discussing the product at issue, the level of risk associated with the vulnerability, and how to apply the patch. The notices do not include either the specific customers affected by the vulnerability or forensic reports with maps of the exploit (both of which are required by the current VRA). In this way, customers have the tools they need to prioritize patching and to help prevent an attack, and the information released does not increase the risk of exploit. Furthermore, many vendors already use industry standards for vulnerability description: Common Vulnerability Enumeration (CVE) and Common Vulnerability Scoring System (CVSS). CVE helps ensure that customers know which particular issues a patch addresses and CVSS helps customers determine how severe a vulnerability is on a relative scale. Industry already provides the tools customers need to know what the patch contains and how bad the problem is that the patch remediates. So, what’s a poor vendor to do? Oracle is reaching out to other vendors subject to PCI and attempting to enlist then in a broad effort to engage PCI in rethinking (that is, eradicating) these requirements. I would therefore urge all who care about this issue, but especially those in the vendor community whose applications are subject to PCI and who may not have know they were being asked to tell-all to PCI and put their customers at risk, to do one of the following: • Contact PCI with your concerns• Contact Oracle (we are looking for vendors to sign our statement of concern)• And make sure you tell your customers that you have to rat them out to PCI if there is a breach involving the payment application I like to be charitable and say “PCI meant well” but in as important a public policy issue as what you disclose about vulnerabilities, to whom and when, meaning well isn’t enough. We need to do well. PCI, as regards this particular issue, has not done well, and has compounded the error by thus far being nonresponsive to those of us who have labored mightily to try to explain why they might want to rethink telling the entire planet about security problems with no solutions. 
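For readers who have not run into them, a quick illustration of the identifiers mentioned above; the values are shown only to illustrate the format and do not refer to any particular advisory:

    CVE ID:        CVE-2012-1234                     (format: CVE-YYYY-NNNN)
    CVSS v2 base:  7.5   AV:N/AC:L/Au:N/C:P/I:P/A:P  (network vector, low complexity,
                                                      no authentication, partial C/I/A impact)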
By Way of Explanation… Non-related to PCI whatsoever, and the explanation for why I have not been blogging a lot recently, I have been working on Other Writing Venues with my sister Diane (who has also worked in the tech sector, inflicting upgrades on unsuspecting and largely ungrateful end users). I am pleased to note that we have recently (self-)published the first in the Miss Information Technology Murder Mystery series, Outsourcing Murder. The genre might best be described as “chick lit meets geek scene.” Our sisterly nom de plume is Maddi Davidson and (shameless plug follows): you can order the paper version of the book on Amazon, or the Kindle or Nook versions on www.amazon.com or www.bn.com, respectively. From our book jacket: Emma Jones, a 20-something IT consultant, is working on an outsourcing project at Tahiti Tacos, a restaurant chain offering Polynexican cuisine: refried poi, anyone? Emma despises her boss Padmanabh, a brilliant but arrogant partner in GD Consulting. When Emma discovers His-Royal-Padness’s body (verdict: death by cricket bat), she becomes a suspect.With her overprotective family and her best friend Stacey providing endless support and advice, Emma stumbles her way through an investigation of Padmanabh’s murder, bolstered by fusion food feeding frenzies, endless cups of frou-frou coffee and serious surfing sessions. While Stacey knows a PI who owes her a favor, landlady Magda urges Emma to tart up her underwear drawer before the next cute cop with a search warrant arrives. Emma’s mother offers to fix her up with a PhD student at Berkeley and showers her with self-defense gizmos while her old lover Keoni beckons from Hawai’i. And everyone, even Shaun the barista, knows a good lawyer. Book 2, Denial of Service, is coming out this summer. * Given the rate of change in technology, today’s “thou shalts” are easily next year’s “buggy whip guidance.”

    Read the article

  • problems in trying ieee 802.15.4 working from msk

    - by asel
    Hi, i took a msk code from dsplog.com and tried to modify it to test the ieee 802.15.4. There are several links on that site for ieee 802.15.4. Currently I am getting simulated ber results all approximately same for all the cases of Eb_No values. Can you help me to find why? thanks in advance! clear PN = [ 1 1 0 1 1 0 0 1 1 1 0 0 0 0 1 1 0 1 0 1 0 0 1 0 0 0 1 0 1 1 1 0; 1 1 1 0 1 1 0 1 1 0 0 1 1 1 0 0 0 0 1 1 0 1 0 1 0 0 1 0 0 0 1 0; 0 0 1 0 1 1 1 0 1 1 0 1 1 0 0 1 1 1 0 0 0 0 1 1 0 1 0 1 0 0 1 0; 0 0 1 0 0 0 1 0 1 1 1 0 1 1 0 1 1 0 0 1 1 1 0 0 0 0 1 1 0 1 0 1; 0 1 0 1 0 0 1 0 0 0 1 0 1 1 1 0 1 1 0 1 1 0 0 1 1 1 0 0 0 0 1 1; 0 0 1 1 0 1 0 1 0 0 1 0 0 0 1 0 1 1 1 0 1 1 0 1 1 0 0 1 1 1 0 0; 1 1 0 0 0 0 1 1 0 1 0 1 0 0 1 0 0 0 1 0 1 1 1 0 1 1 0 1 1 0 0 1; 1 0 0 1 1 1 0 0 0 0 1 1 0 1 0 1 0 0 1 0 0 0 1 0 1 1 1 0 1 1 0 1; 1 0 0 0 1 1 0 0 1 0 0 1 0 1 1 0 0 0 0 0 0 1 1 1 0 1 1 1 1 0 1 1; 1 0 1 1 1 0 0 0 1 1 0 0 1 0 0 1 0 1 1 0 0 0 0 0 0 1 1 1 0 1 1 1; 0 1 1 1 1 0 1 1 1 0 0 0 1 1 0 0 1 0 0 1 0 1 1 0 0 0 0 0 0 1 1 1; 0 1 1 1 0 1 1 1 1 0 1 1 1 0 0 0 1 1 0 0 1 0 0 1 0 1 1 0 0 0 0 0; 0 0 0 0 0 1 1 1 0 1 1 1 1 0 1 1 1 0 0 0 1 1 0 0 1 0 0 1 0 1 1 0; 0 1 1 0 0 0 0 0 0 1 1 1 0 1 1 1 1 0 1 1 1 0 0 0 1 1 0 0 1 0 0 1; 1 0 0 1 0 1 1 0 0 0 0 0 0 1 1 1 0 1 1 1 1 0 1 1 1 0 0 0 1 1 0 0; 1 1 0 0 1 0 0 1 0 1 1 0 0 0 0 0 0 1 1 1 0 1 1 1 1 0 1 1 1 0 0 0; ]; N = 5*10^5; % number of bits or symbols fsHz = 1; % sampling period T = 4; % symbol duration Eb_N0_dB = [0:10]; % multiple Eb/N0 values ct = cos(pi*[-T:N*T-1]/(2*T)); st = sin(pi*[-T:N*T-1]/(2*T)); for ii = 1:length(Eb_N0_dB) tx = []; % MSK Transmitter ipBit = round(rand(1,N/32)*15); for k=1:length(ipBit) sym = ipBit(k); tx = [tx PN((sym+1),1:end)]; end ipMod = 2*tx - 1; % BPSK modulation 0 -> -1, 1 -> 1 ai = kron(ipMod(1:2:end),ones(1,2*T)); % even bits aq = kron(ipMod(2:2:end),ones(1,2*T)); % odd bits ai = [ai zeros(1,T) ]; % padding with zero to make the matrix dimension match aq = [zeros(1,T) aq ]; % adding delay of T for Q-arm % MSK transmit waveform xt = 1/sqrt(T)*[ai.*ct + j*aq.*st]; % Additive White Gaussian Noise nt = 1/sqrt(2)*[randn(1,N*T+T) + j*randn(1,N*T+T)]; % white gaussian noise, 0dB variance % Noise addition yt = xt + 10^(-Eb_N0_dB(ii)/20)*nt; % additive white gaussian noise % MSK receiver % multiplying with cosine and sine waveforms xE = conv(real(yt).*ct,ones(1,2*T)); xO = conv(imag(yt).*st,ones(1,2*T)); bHat = zeros(1,N); bHat(1:2:end) = xE(2*T+1:2*T:end-2*T); % even bits bHat(2:2:end) = xO(3*T+1:2*T:end-T); % odd bits result=zeros(16,1); chiplen=32; seqstart=1; recovered = []; while(seqstart<length(bHat)) A = bHat(seqstart:seqstart+(chiplen-1)); for j=1:16 B = PN(j,1:end); result(j)=sum(A.*B); end [value,index] = max(result); recovered = [recovered (index-1)]; seqstart = seqstart+chiplen; end; %# create binary string - the 4 forces at least 4 bits bstr1 = dec2bin(ipBit,4); bstr2 = dec2bin(recovered,4); %# convert back to numbers (reshape so that zeros are preserved) out1 = str2num(reshape(bstr1',[],1))'; out2 = str2num(reshape(bstr2',[],1))'; % counting the errors nErr(ii) = size(find([out1 - out2]),2); end nErr/(length(ipBit)*4) % simulated ber theoryBer = 0.5*erfc(sqrt(10.^(Eb_N0_dB/10))) % theoretical ber
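One suggestion, offered as a diagnostic rather than a fix: measure the raw chip error rate before despreading, so you can tell whether the flat BER comes from the MSK modulator/demodulator chain or from the PN correlator and decision logic. The lines below reuse the poster's variable names and would sit inside the Eb/N0 loop, right after bHat is formed:

    % Diagnostic only: compare hard chip decisions against the transmitted chips.
    chipHat = (bHat > 0);                 % hard decisions on the soft chip values
    nChipErr(ii) = sum(chipHat ~= tx);    % tx holds the transmitted 0/1 chips
    % ... and after the loop: nChipErr/N  % should fall as Eb/N0 increases

If the chip error rate tracks the theoretical MSK curve while the symbol BER stays flat, look at the despreading and indexing; if the chip errors are already flat, look at the modulator, noise scaling or timing alignment first.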

    Read the article

  • Bacula & Multiple Tape Devices, and so on

    - by Tom O'Connor
    Bacula won't make use of 2 tape devices simultaneously. (Search for #-#-# for the TL;DR)

    A little background, perhaps. In the process of trying to get a decent working backup solution (backing up 20TB ain't cheap, or easy) at $dayjob, we bought a bunch of things to make it work. Firstly, there's a Spectra Logic T50e autochanger, 40 slots of LTO5 goodness, and that robot's got a pair of IBM HH5 Ultrium LTO5 drives, connected via FibreChannel Arbitrated Loop to our backup server. Then there's the backup server: a Dell R715 with 2x 16-core AMD 62xx CPUs, and 32GB of RAM. Yummy. That server's got 2 Emulex FCe-12000E cards, and an Intel X520-SR dual port 10GE NIC. We were also sold Commvault Backup (non-NDMP).

    Here's where it gets really complicated. Spectra Logic and Commvault both sent respective engineers, who set up the library and the software. Commvault was running fine, insofar as the controller was working fine. The Dell server has Ubuntu 12.04 server, runs the MediaAgent for CommVault, and mounts our BlueArc NAS as NFS to a few mountpoints, like /home, and some stuff in /mnt. When backing up from the NFS mountpoints, we were seeing ~= 290GB/hr throughput. That's CRAP, considering we've got 20-odd TB to get through, in a <48 hour backup window. The rated maximum on the BlueArc is 700MB/s (2460GB/hr), and the rated maximum write speed on the tape devices is 140MB/s per drive, so that's 492GB/hr (or double it, for the total throughput).

    So, the next step was to benchmark NFS performance with IOzone, and it turns out that we get epic write performance (across 20 threads), something like 1.5-2.5TB/hr write, but read performance is fecking hopeless. I couldn't ever get higher than 343GB/hr maximum. So let's assume that 343GB/hr is a theoretical maximum for read performance on the NAS; then we should in theory be able to get that performance out of a) CommVault, and b) any other backup agent. Not the case. Commvault only ever gives me 200-250GB/hr throughput, so out of experimentation I installed Bacula to see what the state of play there is. If, for example, Bacula gave consistently better performance and speeds than Commvault, then we'd be able to say "**$.$ Refunds Plz $.$**"

    #-#-#

    Alas, I found a different problem with Bacula. Commvault seems pretty happy to read from one part of the mountpoint with one thread and stream that to a tape device, whilst reading from some other directory with the other thread and writing to the 2nd drive in the autochanger. I can't for the life of me get Bacula to mount and write to two tape drives simultaneously. Things I've tried:

    - Setting Maximum Concurrent Jobs = 20 in the Director, File and Storage Daemons
    - Setting Prefer Mounted Volumes = no in the Job Definition
    - Setting multiple devices in the Autochanger resource

    The documentation seems to be very single-drive centric, and we feel a little like we've strapped a rocket to a hamster with this one. The majority of example Bacula configurations are for DDS4 drives, manual tape swapping, and FreeBSD or IRIX systems. I should probably add that I'm not too bothered if this isn't possible, but I'd be surprised. I basically want to use Bacula as proof to stick it to the software vendors that they're overpriced ;) I read somewhere that @KyleBrandt has done something similar with a modern tape solution.
Configuration Files:

*bacula-dir.conf*

#
# Default Bacula Director Configuration file
Director {                              # define myself
  Name = backuphost-1-dir
  DIRport = 9101                        # where we listen for UA connections
  QueryFile = "/etc/bacula/scripts/query.sql"
  WorkingDirectory = "/var/lib/bacula"
  PidDirectory = "/var/run/bacula"
  Maximum Concurrent Jobs = 20
  Password = "yourekiddingright"        # Console password
  Messages = Daemon
  DirAddress = 0.0.0.0
  #DirAddress = 127.0.0.1
}

JobDefs {
  Name = "DefaultFileJob"
  Type = Backup
  Level = Incremental
  Client = backuphost-1-fd
  FileSet = "Full Set"
  Schedule = "WeeklyCycle"
  Storage = File
  Messages = Standard
  Pool = File
  Priority = 10
  Write Bootstrap = "/var/lib/bacula/%c.bsr"
}

JobDefs {
  Name = "DefaultTapeJob"
  Type = Backup
  Level = Incremental
  Client = backuphost-1-fd
  FileSet = "Full Set"
  Schedule = "WeeklyCycle"
  Storage = "SpectraLogic"
  Messages = Standard
  Pool = AllTapes
  Priority = 10
  Write Bootstrap = "/var/lib/bacula/%c.bsr"
  Prefer Mounted Volumes = no
}

#
# Define the main nightly save backup job
# By default, this job will back up to disk in /nonexistant/path/to/file/archive/dir
Job {
  Name = "BackupClient1"
  JobDefs = "DefaultFileJob"
}

Job {
  Name = "BackupThisVolume"
  JobDefs = "DefaultTapeJob"
  FileSet = "SpecialVolume"
}

#Job {
#  Name = "BackupClient2"
#  Client = backuphost-12-fd
#  JobDefs = "DefaultJob"
#}

# Backup the catalog database (after the nightly save)
Job {
  Name = "BackupCatalog"
  JobDefs = "DefaultFileJob"
  Level = Full
  FileSet = "Catalog"
  Schedule = "WeeklyCycleAfterBackup"
  # This creates an ASCII copy of the catalog
  # Arguments to make_catalog_backup.pl are:
  #  make_catalog_backup.pl <catalog-name>
  RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog"
  # This deletes the copy of the catalog
  RunAfterJob = "/etc/bacula/scripts/delete_catalog_backup"
  Write Bootstrap = "/var/lib/bacula/%n.bsr"
  Priority = 11                         # run after main backup
}

#
# Standard Restore template, to be changed by Console program
# Only one such job is needed for all Jobs/Clients/Storage ...
#
Job {
  Name = "RestoreFiles"
  Type = Restore
  Client = backuphost-1-fd
  FileSet = "Full Set"
  Storage = File
  Pool = Default
  Messages = Standard
  Where = /srv/bacula/restore
}

FileSet {
  Name = "SpecialVolume"
  Include {
    Options {
      signature = MD5
    }
    File = /mnt/SpecialVolume
  }
  Exclude {
    File = /var/lib/bacula
    File = /nonexistant/path/to/file/archive/dir
    File = /proc
    File = /tmp
    File = /.journal
    File = /.fsck
  }
}

# List of files to be backed up
FileSet {
  Name = "Full Set"
  Include {
    Options {
      signature = MD5
    }
    File = /usr/sbin
  }
  Exclude {
    File = /var/lib/bacula
    File = /nonexistant/path/to/file/archive/dir
    File = /proc
    File = /tmp
    File = /.journal
    File = /.fsck
  }
}

Schedule {
  Name = "WeeklyCycle"
  Run = Full 1st sun at 23:05
  Run = Differential 2nd-5th sun at 23:05
  Run = Incremental mon-sat at 23:05
}

# This schedule does the catalog. It starts after the WeeklyCycle
Schedule {
  Name = "WeeklyCycleAfterBackup"
  Run = Full sun-sat at 23:10
}

# This is the backup of the catalog
FileSet {
  Name = "Catalog"
  Include {
    Options {
      signature = MD5
    }
    File = "/var/lib/bacula/bacula.sql"
  }
}

# Client (File Services) to backup
Client {
  Name = backuphost-1-fd
  Address = localhost
  FDPort = 9102
  Catalog = MyCatalog
  Password = "surelyyourejoking"        # password for FileDaemon
  File Retention = 30 days              # 30 days
  Job Retention = 6 months              # six months
  AutoPrune = yes                       # Prune expired Jobs/Files
}

#
# Second Client (File Services) to backup
# You should change Name, Address, and Password before using
#
#Client {
#  Name = backuphost-12-fd
#  Address = localhost2
#  FDPort = 9102
#  Catalog = MyCatalog
#  Password = "i'mnotjokinganddontcallmeshirley"   # password for FileDaemon 2
#  File Retention = 30 days             # 30 days
#  Job Retention = 6 months             # six months
#  AutoPrune = yes                      # Prune expired Jobs/Files
#}

# Definition of file storage device
Storage {
  Name = File
  # Do not use "localhost" here
  Address = localhost                   # N.B. Use a fully qualified name here
  SDPort = 9103
  Password = "lalalalala"
  Device = FileStorage
  Media Type = File
}

Storage {
  Name = "SpectraLogic"
  Address = localhost
  SDPort = 9103
  Password = "linkedinmakethebestpasswords"
  Device = Drive-1
  Device = Drive-2
  Media Type = LTO5
  Autochanger = yes
}

# Generic catalog service
Catalog {
  Name = MyCatalog
  # Uncomment the following line if you want the dbi driver
  # dbdriver = "dbi:sqlite3"; dbaddress = 127.0.0.1; dbport =
  dbname = "bacula"; DB Address = ""; dbuser = "bacula"; dbpassword = "bbmaster63"
}

# Reasonable message delivery -- send most everything to email address
# and to the console
Messages {
  Name = Standard
  mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: %t %e of %c %l\" %r"
  operatorcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula: Intervention needed for %j\" %r"
  mail = root@localhost = all, !skipped
  operator = root@localhost = mount
  console = all, !skipped, !saved
  #
  # WARNING! the following will create a file that you must cycle from
  # time to time as it will grow indefinitely. However, it will
  # also keep all your messages if they scroll off the console.
  #
  append = "/var/lib/bacula/log" = all, !skipped
  catalog = all
}

#
# Message delivery for daemon messages (no job).
Messages {
  Name = Daemon
  mailcommand = "/usr/lib/bacula/bsmtp -h localhost -f \"\(Bacula\) \<%r\>\" -s \"Bacula daemon message\" %r"
  mail = root@localhost = all, !skipped
  console = all, !skipped, !saved
  append = "/var/lib/bacula/log" = all, !skipped
}

# Default pool definition
Pool {
  Name = Default
  Pool Type = Backup
  Recycle = yes                         # Bacula can automatically recycle Volumes
  AutoPrune = yes                       # Prune expired volumes
  Volume Retention = 365 days           # one year
}

# File Pool definition
Pool {
  Name = File
  Pool Type = Backup
  Recycle = yes                         # Bacula can automatically recycle Volumes
  AutoPrune = yes                       # Prune expired volumes
  Volume Retention = 365 days           # one year
  Maximum Volume Bytes = 50G            # Limit Volume size to something reasonable
  Maximum Volumes = 100                 # Limit number of Volumes in Pool
}

Pool {
  Name = AllTapes
  Pool Type = Backup
  Recycle = yes
  AutoPrune = yes                       # Prune expired volumes
  Volume Retention = 31 days            # one month
}

# Scratch pool definition
Pool {
  Name = Scratch
  Pool Type = Backup
}

#
# Restricted console used by tray-monitor to get the status of the director
#
Console {
  Name = backuphost-1-mon
  Password = "LastFMalsostorePasswordsLikeThis"
  CommandACL = status, .status
}

*bacula-sd.conf*

#
# Default Bacula Storage Daemon Configuration file
#
Storage {                               # definition of myself
  Name = backuphost-1-sd
  SDPort = 9103                         # Director's port
  WorkingDirectory = "/var/lib/bacula"
  Pid Directory = "/var/run/bacula"
  Maximum Concurrent Jobs = 20
  SDAddress = 0.0.0.0
  # SDAddress = 127.0.0.1
}

#
# List Directors who are permitted to contact Storage daemon
#
Director {
  Name = backuphost-1-dir
  Password = "passwordslinplaintext"
}

#
# Restricted Director, used by tray-monitor to get the
# status of the storage daemon
#
Director {
  Name = backuphost-1-mon
  Password = "totalinsecurityabound"
  Monitor = yes
}

Device {
  Name = FileStorage
  Media Type = File
  Archive Device = /srv/bacula/archive
  LabelMedia = yes;                     # lets Bacula label unlabeled media
  Random Access = Yes;
  AutomaticMount = yes;                 # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
}

Autochanger {
  Name = SpectraLogic
  Device = Drive-1
  Device = Drive-2
  Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d"
  Changer Device = /dev/sg4
}

Device {
  Name = Drive-1
  Drive Index = 0
  Archive Device = /dev/nst0
  Changer Device = /dev/sg4
  Media Type = LTO5
  AutoChanger = yes
  RemovableMedia = yes;
  AutomaticMount = yes;
  AlwaysOpen = yes;
  RandomAccess = no;
  LabelMedia = yes
}

Device {
  Name = Drive-2
  Drive Index = 1
  Archive Device = /dev/nst1
  Changer Device = /dev/sg4
  Media Type = LTO5
  AutoChanger = yes
  RemovableMedia = yes;
  AutomaticMount = yes;
  AlwaysOpen = yes;
  RandomAccess = no;
  LabelMedia = yes
}

#
# Send all messages to the Director,
# mount messages also are sent to the email address
#
Messages {
  Name = Standard
  director = backuphost-1-dir = all
}

*bacula-fd.conf*

#
# Default Bacula File Daemon Configuration file
#

#
# List Directors who are permitted to contact this File daemon
#
Director {
  Name = backuphost-1-dir
  Password = "hahahahahaha"
}

#
# Restricted Director, used by tray-monitor to get the
# status of the file daemon
#
Director {
  Name = backuphost-1-mon
  Password = "hohohohohho"
  Monitor = yes
}

#
# "Global" File daemon configuration specifications
#
FileDaemon {                            # this is me
  Name = backuphost-1-fd
  FDport = 9102                         # where we listen for the director
  WorkingDirectory = /var/lib/bacula
  Pid Directory = /var/run/bacula
  Maximum Concurrent Jobs = 20
  #FDAddress = 127.0.0.1
  FDAddress = 0.0.0.0
}

# Send all messages except skipped files back to Director
Messages {
  Name = Standard
  director = backuphost-1-dir = all, !skipped, !restored
}
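As far as I understand it, a single Bacula job only ever writes to one device at a time, so spreading the load across both drives means running at least two jobs concurrently, each with an appendable volume available to it. A minimal sketch of the kind of change that is often suggested, not a verified fix: the resource below is the Director-side SpectraLogic Storage from the config above with only the concurrency line added, and the assumption is that your Bacula release honours Maximum Concurrent Jobs at the Storage level (its default is commonly 1, which would serialise all tape jobs on this resource).

# bacula-dir.conf -- allow the Director to run one job per physical drive on this Storage
Storage {
  Name = "SpectraLogic"
  Address = localhost
  SDPort = 9103
  Password = "linkedinmakethebestpasswords"
  Device = Drive-1
  Device = Drive-2
  Media Type = LTO5
  Autochanger = yes
  Maximum Concurrent Jobs = 2           # assumption: 2 = one job per HH5 drive
}

Even with that, two jobs (for example, one FileSet per mountpoint or per top-level directory) have to be eligible to run at the same time, and the pool needs enough appendable volumes for both drives; a single monolithic job still would not be split across the two drives.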

    Read the article

< Previous Page | 10 11 12 13 14 15  | Next Page >