Search Results

Search found 225 results on 9 pages for 'generators'.


  • Python for a hobbyist programmer (a few questions)

    - by Matt
    I'm a hobbyist programmer (only in TI-BASIC before now), and after much debating with myself, I've decided to learn Python. I don't have a ton of free time to teach myself a hundred languages, and all the programming I do will be for personal use or for distribution to people who need it, so I decided I needed one good, strong language to be good at. My questions:
    1. Is Python powerful enough to handle most things a typical programmer might do in his off-time? I have in mind things like complex stat generators based on user input for tabletop games, making small games, automating install processes, and building interactive websites, but probably a hundred things along those lines.
    2. Does Python handle networking tasks fairly well?
    3. Can Python source be obfuscated, or is it going to be open source by nature? I ask because if I make something cool and distribute it, I don't want some script kiddie to edit his own name in and claim he wrote it.
    4. How popular is Python compared to other languages? Ideally, my language would be good and useful, with help available online without extreme difficulty, but not so common that every idiot with a computer knows it. I like the idea of knowing a slightly obscure language.
    Thanks a ton for any help you can provide.


  • plugin from github not successfully installing

    - by JohnMerlino
    Hey all, I tried to install the highcharts-rails plugin from GitHub as specified in the instructions:

    Get the plugin: script/plugin install git://github.com/loudpixel/highcharts-rails.git
    Run the rake setup: rake highcharts_rails:install

    But when I run script/plugin install ..., it installs only a couple of files, not all the required files, I presume, because when I run rake highcharts_rails:install I get the following:

        rake aborted!
        Don't know how to build task 'highcharts_rails:install'

    All it installed for me was jquery.js, jrails.js, and jquery-ui.js. I noticed that the repository at http://github.com/loudpixel/highcharts-rails contains the files MIT-LICENSE, README.md, Rakefile, init.rb, and uninstall.rb, plus the directories generators/, javascripts/, lib/, tasks/, and test/. So I'm not sure what I'm doing wrong that these files don't get installed properly. Thanks for any response.


  • How can I ignore a mapped property with a setter in NHibernate

    - by Emilio Montes
    I need NHibernate to leave a property with a setter unmapped, because the relationship between the entities is required. This is my simple model:

        public class Person
        {
            public virtual Guid PersonId { get; set; }
            public virtual string FirstName { get; set; }
            public virtual string SecondName { get; set; }

            // this is the property that I do not want to map
            public Credential Credential { get; set; }
        }

        public class Credential
        {
            public string CodeAccess { get; set; }
            public bool EsPremium { get; set; }
        }

        public sealed class PersonMap : ClassMapping<Person>
        {
            public PersonMap()
            {
                Table("Person");
                Cache(x => x.Usage(CacheUsage.ReadWrite));
                Id(x => x.PersonId, m =>
                {
                    m.Generator(Generators.GuidComb);
                    m.Column("PersonId");
                });
                Property(x => x.FirstName, map => { map.NotNullable(true); map.Length(255); });
                Property(x => x.SecondName, map => { map.NotNullable(true); map.Length(255); });
            }
        }

    I know that if I left the property as Credential { get; } NHibernate would not try to map it, but I need to set the value. Thanks in advance.


  • Documentation and Build system for Mono/C#

    - by dcolish
    I'm starting out on a new project and a team member has decided to use C# as the implementation language. I don't have a lot of experience in C#, but a brief reading shows that it's very capable as a complete cross-platform VM. Beyond the language, I've been having trouble selecting tools and workflows for managing the code as the project grows. It should be fairly small (<10K lines), but I would like the ability to generate documentation as the project grows, manage any external dependencies we decide to use, and automate builds and testing. I am wondering what tools are commonly used or considered best practice for this language. I am mainly concerned with how a build system would work on *nix as well as Windows. Are there C#-specific tools, or is Make more common? In addition, I'd like to use a DVCS, but it doesn't look like Visual Studio and MonoDevelop support the same ones. What's the common VCS of choice for C#? What sort of unit testing is available for C#/Mono? Finally, I know there are good doc generators, but I would really like documentation generation to be a single step in the build, just as testing is a step. Normally I'd automate with Hudson, but I am wondering if there is something more specific to the platform. Overall, I'd love a solution that provides a decent workflow on both Windows and *nix without a heavy admin burden. I'm pretty sure this is the holy grail of project management, so anything that puts me on that path is awesome.


  • Free tools/libraries to compare tables with filtering for SQL Server 2008 and visualize/sync diffs t

    - by MicMit
    I am building a GUI in C# for a content manager and am looking for tools, code snippets, or open libraries (ideally C#) that allow me the following:
    1. For table A in database X (test) and table A in database Y (production), and for a simple filter (e.g. listname = "XYZ"), I need to show additions/deletions/updates in some way, either side by side or as an HTML report, e.g. "2 records added" followed by an HTML table with some fields, "2 records deleted" followed by an HTML table with some fields. Considering how common this task is, I would guess such components exist: either returning collections from the given parameters for further visualization, or just producing reports like those above.
    2. I need to push the changes for the filter mentioned in 1 and update the table in the production database for this filter only (i.e. for the particular list approved by the content person). Again, there are probably SQL code generators for this, either alongside the diff components or standalone.
    3. The key requirement: the tools/libraries should be suitable for integration with an existing C# application.
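    Since the question is about the logic as much as the tooling, here is a minimal sketch of the compare step in Python (the C# version would be structurally identical); the table name A, its columns, and the database files are made-up assumptions:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("ATTACH DATABASE 'test.db' AS test")
        conn.execute("ATTACH DATABASE 'prod.db' AS prod")

        def rows_for_filter(db):
            # index the filtered rows of table A by primary key
            cur = conn.execute(
                "SELECT id, name, value FROM %s.A WHERE listname = ?" % db, ("XYZ",))
            return {row[0]: row for row in cur.fetchall()}

        test_rows, prod_rows = rows_for_filter("test"), rows_for_filter("prod")

        added = test_rows.keys() - prod_rows.keys()      # new in test
        deleted = prod_rows.keys() - test_rows.keys()    # removed in test
        updated = [k for k in test_rows.keys() & prod_rows.keys()
                   if test_rows[k] != prod_rows[k]]      # same key, changed fields
        print(len(added), "added,", len(deleted), "deleted,", len(updated), "updated")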


  • uninitialized constant Test::Unit::TestResult::TestResultFailureSupport

    - by Vitaly Kushner
    I get the error in the subject when I try to run specs or generators in a fresh Rails project. This happens when I add shoulda to the mix. I added the following in config/environment.rb:

        config.gem 'rspec', :version => '1.2.6', :lib => false
        config.gem 'rspec-rails', :version => '1.2.6', :lib => false
        config.gem "thoughtbot-shoulda", :version => "2.10.2", :lib => 'shoulda', :source => "http://gems.github.com"

    I'm on OS X: ruby 1.8.6 (2008-08-11 patchlevel 287), gems 1.3.5, rails 2.3.4, rspec 1.2.6, shoulda 2.10.2, test-unit 2.0.3. I'm aware of this, and adding

        config.gem 'test-unit', :lib => 'test/unit'

    indeed solves the generator problem in that it no longer throws an exception, but it prints

        0 tests, 0 assertions, 0 failures, 0 errors, 0 pendings, 0 omissions, 0 notifications

    at the end of the run, so I suppose it tries to run tests, which is unexpected and undesired. Also, the specs stop running entirely; it seems rspec is not running at all. When running rake spec I get the test-unit output again (with 0 tests, since only specs are defined, no tests).


  • Python histogram one-liner

    - by mykhal
    There are many ways to code a histogram in Python. By histogram I mean a function counting objects in an iterable, resulting in a count table (i.e. a dict), e.g.:

        >>> L = 'abracadabra'
        >>> histogram(L)
        {'a': 5, 'b': 2, 'c': 1, 'd': 1, 'r': 2}

    It can be written like this:

        def histogram(L):
            d = {}
            for x in L:
                if x in d:
                    d[x] += 1
                else:
                    d[x] = 1
            return d

    However, there are far fewer ways to do this in a single expression. If we had "dict comprehensions" in Python, we would write:

        >>> { x: L.count(x) for x in set(L) }

    but we don't have them, so we have to write:

        >>> dict([(x, L.count(x)) for x in set(L)])

    This approach may be readable, but it is not efficient: L is walked through multiple times, so it won't work for single-use generators. The function should also iterate well through gen(), where:

        def gen():
            for x in L:
                yield x

    We can go with reduce (R.I.P.):

        >>> reduce(lambda d,x: dict(d, x=d.get(x,0)+1), L, {})  # wrong!

    Oops, that does not work: the key name is 'x', not x :( I ended up with:

        >>> reduce(lambda d,x: dict(d.items() + [(x, d.get(x, 0)+1)]), L, {})

    (In py3k we would have to write list(d.items()) instead of d.items(), and import reduce from functools, since it is no longer a builtin there.) Please beat me with a better, more readable one-liner! ;)
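    For what it's worth, the standard library has since absorbed this exact one-liner: collections.Counter (Python 2.7/3.1 and later) builds the count table in a single pass, so it also works on single-use generators:

        from collections import Counter

        L = 'abracadabra'
        print(Counter(L))                   # Counter({'a': 5, 'b': 2, 'r': 2, ...})
        print(dict(Counter(c for c in L)))  # one pass, generator-friendly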


  • What does O(log n) mean exactly?

    - by Andreas Grech
    I am currently learning about Big O notation running times and amortized times. I understand the notion of O(n) linear time, meaning that the size of the input affects the growth of the algorithm proportionally... and the same goes for, for example, quadratic time O(n^2), and even for algorithms, such as permutation generators, with O(n!) times, that grow by factorials. For example, the following function is O(n) because the algorithm grows in proportion to its input n:

        void f(int n) {
            int i;
            for (i = 0; i < n; ++i)
                printf("%d", i);
        }

    Similarly, if there were a nested loop, the time would be O(n^2). But what exactly is O(log n)? For example, what does it mean to say that the height of a complete binary tree is O(log n)? I do know (maybe not in great detail) what a logarithm is, in the sense that log10 100 = 2, but I cannot understand how to identify a function with a logarithmic time.
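    To make that concrete, here is a small illustration: an O(log n) algorithm halves the remaining work at every step, so doubling the input adds roughly one step rather than doubling the running time (a sketch in Python):

        def binary_search(sorted_list, target):
            # Each pass halves the remaining search range, hence O(log n).
            lo, hi = 0, len(sorted_list)
            steps = 0
            while lo < hi:
                steps += 1
                mid = (lo + hi) // 2
                if sorted_list[mid] == target:
                    return mid, steps
                if sorted_list[mid] < target:
                    lo = mid + 1
                else:
                    hi = mid
            return -1, steps

        # Doubling n adds roughly one step instead of doubling the work:
        for n in (1000, 2000, 4000, 8000):
            print(n, binary_search(list(range(n)), n - 1)[1])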


  • Jekyll - How to approach asset processing (minification, spriting...)

    - by Gromix
    I recently switched to Jekyll and I find the conversion pipeline works really well. However, I'm stuck on which approach to take when the process is many inputs to one output (e.g. concatenating CSS files, creating image sprites...). I know several tools that can do it, callable either from the command line or directly from Ruby code, for example Jammit CSS sprites and Compass sprites. My current solution is a few Jekyll plugins that call these tools. However, it has the following problems:
    1. SASS files should be processed, then concatenated/minified. SASS-to-CSS is a Converter, and the concatenation is a Generator run on the output. Unfortunately generators run first, which means the concatenation is always a step behind (I have to run the build twice).
    2. Jekyll does not know about the source/output relationship. With converters, when I run Jekyll in server mode, changing a SASS file automatically reruns the conversion to CSS. When dealing with concatenation/spriting, I haven't found a way to do the same; I end up having to run a "normal" Jekyll build (not server auto) to update the concatenated files and sprites.
    Thanks for any ideas!


  • Can parser combinators be made efficient?

    - by Jon Harrop
    Around six years ago, I benchmarked my own parser combinators in OCaml and found that they were ~5× slower than the parser generators on offer at the time. I recently revisited this subject and benchmarked Haskell's Parsec vs. a simple hand-rolled precedence-climbing parser written in F#, and was surprised to find the F# to be 25× faster than the Haskell. Here's the Haskell code I used to read a large mathematical expression from a file, parse it and evaluate it:

        import Control.Applicative
        import Text.Parsec hiding ((<|>))

        expr = chainl1 term ((+) <$ char '+' <|> (-) <$ char '-')
        term = chainl1 fact ((*) <$ char '*' <|> div <$ char '/')
        fact = read <$> many1 digit <|> char '(' *> expr <* char ')'

        eval :: String -> Int
        eval = either (error . show) id . parse expr "" . filter (/= ' ')

        main :: IO ()
        main = do
          file <- readFile "expr"
          putStr $ show $ eval file
          putStr "\n"

    and here's my self-contained precedence-climbing parser in F#:

        let rec (|Expr|) (P(f, xs)) = Expr(loop (' ', f, xs))
        and loop = function
          | ' ' as oop, f, ('+' | '-' as op)::P(g, xs)
          | (' ' | '+' | '-' as oop), f, ('*' | '/' as op)::P(g, xs) ->
              let h, xs = loop (op, g, xs)
              let op = match op with
                       | '+' -> (+)
                       | '-' -> (-)
                       | '*' -> (*)
                       | '/' -> (/)
              loop (oop, op f h, xs)
          | _, f, xs -> f, xs
        and (|P|) = function
          | '('::Expr(f, ')'::xs) -> P(f, xs)
          | c::xs when '0' <= c && c <= '9' -> P(int(string c), xs)

    My impression is that even state-of-the-art parser combinators waste a lot of time backtracking. Is that correct? If so, is it possible to write parser combinators that generate state machines to obtain competitive performance, or is it necessary to use code generation?
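    For readers unfamiliar with the technique, here is a minimal precedence-climbing evaluator sketched in Python. It mirrors the structure of the F# parser above (modulo multi-digit number tokens) and is meant as an illustration of the algorithm, not a benchmark candidate:

        import re

        TOKENS = re.compile(r"\d+|[-+*/()]")
        PREC = {"+": 1, "-": 1, "*": 2, "/": 2}
        APPLY = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
                 "*": lambda a, b: a * b, "/": lambda a, b: a // b}  # integer division

        def parse_primary(toks, i):
            # a number, or a parenthesised subexpression
            if toks[i] == "(":
                value, i = parse_expr(toks, i + 1, 1)
                return value, i + 1          # skip the closing ')'
            return int(toks[i]), i + 1

        def parse_expr(toks, i, min_prec):
            # precedence climbing: only consume operators at or above min_prec
            lhs, i = parse_primary(toks, i)
            while i < len(toks) and toks[i] in PREC and PREC[toks[i]] >= min_prec:
                op = toks[i]
                rhs, i = parse_expr(toks, i + 1, PREC[op] + 1)
                lhs = APPLY[op](lhs, rhs)
            return lhs, i

        def evaluate(expression):
            value, _ = parse_expr(TOKENS.findall(expression), 0, 1)
            return value

        assert evaluate("1+2*3") == 7
        assert evaluate("(1+2)*3") == 9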


  • How to convert source code to an XML-based representation of the AST?

    - by autobiographer
    I want to get an XML representation of the AST of Java and C code. I asked this question three months ago, but the solutions weren't comfortable for me. srcML seems to be a good solution for this problem, but it does not support line numbers and columns, and I need that feature. About Elsa, the site says: "There is ongoing effort to export the Elsa AST as an XML document; we expect to be able to advertise this in the next public release." DMS I didn't understand. Especially for Java, there is JavaML, which supports line numbers, but its SourceForge page doesn't list any files. Question: is there software available that converts an AST into XML and supports line numbers (and columns), especially for Java and C/C++? Is there an alternative to JavaML and srcML? PS: I don't want parser generators. I hope to find a tool that can be used on the console by typing ./my-xml-generator Test.java (or something like that); a Java implementation would be great too.
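    As a point of comparison only (it does not answer the Java/C part), Python's standard library shows what such output looks like when the parser keeps positions: every statement and expression node carries lineno and col_offset, which map directly onto XML attributes:

        import ast
        from xml.etree.ElementTree import Element, tostring

        def to_xml(node):
            elem = Element(type(node).__name__)
            if hasattr(node, "lineno"):           # not all nodes carry positions
                elem.set("line", str(node.lineno))
                elem.set("col", str(node.col_offset))
            for child in ast.iter_child_nodes(node):
                elem.append(to_xml(child))
            return elem

        tree = ast.parse("x = 1 + 2\n")
        print(tostring(to_xml(tree)).decode())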


  • Unit testing a Rails 2.3.5 plugin

    - by brad
    I'm writing a new plugin for a Rails 2.3.5 app. I've included an app directory (which makes it an engine) so I can easily load some extra routes; not sure if that affects anything. Anyway, in the test directory I have two files, test_helper.rb and my_plugin_test.rb, generated automatically using script/generate plugin my_plugin. When I go to the vendor/plugins/my_plugin directory and run rake test, the tests don't seem to run. I get the following console output:

        (in /Users/me/Repos/my_app/source/trunk/vendor/plugins/my_plugin)
        /Users/me/.rvm/rubies/jruby-1.4.0/bin/jruby -I"lib:lib:test" "/Users/me/.rvm/gems/jruby-1.4.0/gems/rake-0.8.7/lib/rake/rake_test_loader.rb" "test/my_plugin_test.rb"

    So it obviously sees my test file, but none of the tests inside get run; I just get back to my console prompt. What am I missing here? I figured the generated code would work out of the box. Here are the two files:

    test_helper.rb:

        require 'rubygems'
        require 'active_support'
        require 'active_support/test_case'

    my_plugin_test.rb:

        require 'test_helper'

        class MyPluginTest < ActiveSupport::TestCase
          # Replace this with your real tests.
          test "the truth" do
            assert true
          end

          test "Factories are supported" do
            assert_not_nil Factory
          end
        end

    File structure:

        vendor
          - plugins
            - my_plugin
              - app
              - config
                - routes.rb
              - generators
                - my_plugin
                  - some generator files.rb
              - lib
                - my_plugin.rb
                - my_plugin
                  - my_plugin_lib_file.rb
              - rails
                - init.rb
              - Rakefile
              - tasks
                - my_plugin_tasks.rake
              - test
                - test_helper.rb
                - my_plugin_test.rb


  • What would be the best way to install (distribute) dynamic libraries in Mac OS X using CMake/CPack?

    - by YuppieNetworking
    Hello all, I have a project whose artifacts are two dynamic libraries, say libX.dylib and libY.dylib (or .so for Linux distributions). There are no executables. Now I would like to distribute these libraries. Since I already use CMake to compile the project, I looked at CPack and successfully generated .tgz and .deb packages for Linux. However, for Mac OS X I have no idea, and the CPack wiki about its generators did not help me much. I managed to generate a PackageMaker package, but as clearly stated in this PackageMaker howto, there is no uninstall option when using this utility. I then read a bit about bundles, but I feel lost, especially since I have no executable. Question: what is the correct way to generate a package for Mac OS X using CPack? My ideal scenario would be something that installs as easily as a bundle, or as a deb file does on Debian/Ubuntu. Thanks for your help.
    Edit: One more detail: the code of one of these libraries is not open, so I can't expect users to do cmake; make; make install. That's why I want a .deb, .tar.gz, bundle or whatever.


  • Dynamic/Generic ViewModelBase?

    - by Shimmy
    I am learning MVVM now, and there are a few things I still don't understand (more than these, but here are a few):
    1. Does every model potentially exposed (through a VM) to the view have a VM? For example, if I have Contact and Address entities and each Contact has an Addresses (many) property, does that mean I have to create a ContactViewModel and an AddressViewModel?
    2. Do I have to redeclare all the properties of the model again in the ViewModel (i.e. FirstName, LastName, and so on)? Why not have a ViewModelBase, with ContactViewModel a subclass of ViewModelBase that accesses the entity's properties itself? And if that is a bad idea because the view then has access to the entity (please explain why), then why not have the ViewModelBase be a DynamicObject (see the Dictionary example on the linked page), so I don't have to redeclare all the properties and validation over and over in the two tiers (M & VM)? Really, the view is accessing the ViewModel's fields via reflection anyway.
    I think MVVM is the hardest technology I've ever learned. It doesn't have out-of-the-box support, there are too many frameworks and methods to achieve it, and on the other hand there is no organized way to learn it (as there is for MVC, for instance); learning MVVM means browsing and surfing around trying to figure out what's better. Bottom line, what I mean by this section is: please go and vote for MSFT to add MVVM support to the BCL, and generators for VMs and Vs according to the Ms. Thanks.


  • NHibernate mapping: delete collection, insert new collection with old IDs

    - by npeBeg
    My issue looks similar to this one (link), but I have a one-to-many association:

        <set name="Fields" cascade="all-delete-orphan" lazy="false" inverse="true">
          <key column="[TEMPLATE_ID]"></key>
          <one-to-many class="MyNamespace.Field, MyLibrary"/>
        </set>

    (I also tried to use ) This mapping is for the Template object. Both it and the Field object have their ID generators set to identity. So when I call session.Update for the Template object it works fine, well, almost: if a Field object has a non-zero Id, an UPDATE SQL statement is issued; if the Id is 0, an INSERT is performed. But if I delete a Field object from the collection, it has no effect on the database. I found that if I also call session.Delete for this Field object everything works, but due to the client-server architecture I don't know what to delete. So I decided to delete all the collection elements from the DB and call session.Update with a new collection, and I ran into an issue: NHibernate performs the UPDATE operation for Field objects that have non-zero Ids, but those rows have been removed from the DB! Maybe I should use some other Id generator or something. What is the best way to make NHibernate perform a "delete all"/"insert all" routine for the collection?


  • do the Python libraries have a natural dependence on the global namespace?

    - by msw
    I first ran into this when trying to determine the relative performance of two generators:

        t = timeit.repeat('g.get()', setup='g = my_generator()')

    So I dug into the timeit module and found that the setup and statement are evaluated with their own private, initially empty namespaces, so naturally the binding of g never becomes accessible to the g.get() statement. The obvious solution is to wrap them into a class, thus adding to the global namespace. I bumped into this again when attempting, in another project, to use the multiprocessing module to divide a task among workers. I even bundled everything nicely into a class, but unfortunately the call pool.apply_async(runmc, arg) fails with a PicklingError, because buried inside the work object that runmc instantiates is (effectively) an assignment:

        self.predicate = lambda x, y: x > y

    so the whole object (understandably) can't be pickled. And whereas

        def foo(x, y):
            return x > y
        pickle.dumps(foo)

    is fine, the sequence

        bar = lambda x, y: x > y

    yields True from callable(bar) and the expected result from type(bar), but it can't be pickled: Can't pickle <function <lambda> at 0xb759b764>: it's not found as __main__.<lambda>. I've given only code fragments because I can easily fix these cases by merely pulling them out into module-level or object-level defs. The bug here appears to be in my understanding of the semantics of namespace use in general. If the nature of the language requires that I create more def statements, I'll happily do so; I fear that I'm missing an essential concept, though. Why is there such a strong reliance on the global namespace? Or, what am I failing to understand? Namespaces are one honking great idea -- let's do more of those!
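    For the timeit half of this, the conventional workaround is to import the needed names from __main__ inside the setup string (newer Pythons also accept a globals= argument). A sketch, assuming this runs as a script and using next() in place of the hypothetical g.get():

        import timeit

        def my_generator():
            return iter(range(1000))

        # setup runs in timeit's private namespace, so pull my_generator in explicitly
        t = timeit.repeat('next(g, None)',
                          setup='from __main__ import my_generator; g = my_generator()',
                          repeat=3, number=1000)
        print(min(t))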


  • application specific seed data population

    - by user339108
    Env: JBoss, (H2, MySQL, Postgres), JPA, Hibernate 3.3.x.

        @Id
        @GeneratedValue(strategy = IDENTITY)
        private Integer key;

    Currently our primary keys are created using the above annotation. We expect to support a large number of users (~a million), so what key type should be used? Should it be Integer or Long, or should I use the unsigned versions of the above declarations? We have a J2EE application which needs to be populated with some seed data on installation. On purchase, the customer creates his own data on top of the application. We just want to make sure that there is enough room to ship, modify or add data for future releases. What would be the best mechanism to support this? We had looked at starting all table identifiers from a certain id (say 1000), but this mandates modifying primary key generation to use table-based or sequence-based generators, and we have around ~100 tables; we are not sure this is the right strategy. If we use a signed integer approach for the key, would it make sense to have the seed data start from 0 and below (i.e. negative numbers), so that all customer-specific data lives at 0 and above (i.e. positive numbers)?
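    On the Integer-vs-Long part, quick arithmetic (plain Python, not JPA code) suggests a signed 32-bit key already has plenty of room for about a million users, so reserving a low range (say, ids below 1000) for seed data costs essentially nothing:

        int_max = 2**31 - 1       # largest signed 32-bit Integer key
        long_max = 2**63 - 1      # largest signed 64-bit Long key
        users = 1000000
        print(int_max // users)   # ~2147 rows per user before Integer runs out
        print(long_max)           # 9223372036854775807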


  • What is the difference between these two linq implementations?

    - by Mahesh Velaga
    I was going through Jon Skeet's reimplementing LINQ to Objects series. In the implementation-of-Where article, I found the following snippets, but I don't get what advantage we gain by splitting the original method into two.

    Original method:

        // Naive validation - broken!
        public static IEnumerable<TSource> Where<TSource>(
            this IEnumerable<TSource> source,
            Func<TSource, bool> predicate)
        {
            if (source == null)
            {
                throw new ArgumentNullException("source");
            }
            if (predicate == null)
            {
                throw new ArgumentNullException("predicate");
            }
            foreach (TSource item in source)
            {
                if (predicate(item))
                {
                    yield return item;
                }
            }
        }

    Refactored method:

        public static IEnumerable<TSource> Where<TSource>(
            this IEnumerable<TSource> source,
            Func<TSource, bool> predicate)
        {
            if (source == null)
            {
                throw new ArgumentNullException("source");
            }
            if (predicate == null)
            {
                throw new ArgumentNullException("predicate");
            }
            return WhereImpl(source, predicate);
        }

        private static IEnumerable<TSource> WhereImpl<TSource>(
            this IEnumerable<TSource> source,
            Func<TSource, bool> predicate)
        {
            foreach (TSource item in source)
            {
                if (predicate(item))
                {
                    yield return item;
                }
            }
        }

    Jon says it's for eager validation, with execution deferred for the rest. But I don't get it. Could someone please explain in a little more detail what the difference between these two functions is, and why the validation is performed eagerly in one and not in the other?

    Conclusion/Solution: I got confused due to my lack of understanding of how a method is determined to be an iterator generator. I assumed it was based on the method's signature, like IEnumerable<T>. But based on the answers, now I get it: a method is an iterator generator if it uses yield statements.
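    The same split exists in Python, where it can be easier to see: a function whose body contains yield defers that entire body until iteration begins, so argument validation has to live in an eager, non-yielding wrapper. A sketch:

        def where(source, predicate):
            # ordinary function: validation runs as soon as it is called
            if source is None:
                raise TypeError("source is null")
            if predicate is None:
                raise TypeError("predicate is null")
            return _where_impl(source, predicate)

        def _where_impl(source, predicate):
            # generator function: nothing in the body runs until iteration starts
            for item in source:
                if predicate(item):
                    yield item

        try:
            where(None, None)              # eager: raises here, at call time
        except TypeError as e:
            print("eager:", e)

        lazy = _where_impl(None, None)     # no error yet: the body hasn't started
        try:
            next(lazy)                     # the bug surfaces only on first use
        except TypeError as e:
            print("lazy:", e)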


  • Prototyping Qt/C++ in Python

    - by tstenner
    I want to write a C++ application with Qt, but build a prototype first using Python and then gradually replace the Python code with C++. Is this the right approach, and what tools (bindings, binding generators, IDE) should I use? Ideally, everything should be available in the Ubuntu repositories so I wouldn't have to worry about incompatible or old versions and could have everything set up with a simple aptitude install. Is there any comprehensive documentation about this process, or do I have to learn every single component, and if so, which ones? Right now I have multiple choices to make:
    - Qt Creator, because of the nice autocompletion and Qt integration.
    - Eclipse, as it offers support for both C++ and Python.
    - Eric (haven't used it yet).
    - Vim.
    - PySide, as it works with CMake and Boost.Python, so theoretically it will make replacing Python code easier.
    - PyQt, as it's more widely used (more support) and is available as a Debian package.
    Edit: As I will have to deploy the program to various computers, the C++ solution would require 1-5 files (the program plus some library files if I'm linking statically); using Python, I'd have to build PyQt/PySide/SIP/whatever on every platform and explain how to install Python and everything else.


  • Is there any good reason for private methods' existence in C# (and OOP in general)?

    - by Piotr Lopusiewicz
    I don't mean to troll, but I really don't get it. Why would language designers allow private methods instead of some naming convention (see __ in Python)? I searched for the answer, and the usual arguments are:
    a) to make the implementation cleaner and avoid a long vertical list of methods in IDE autocompletion;
    b) to announce to the world which methods are the public interface and which may change and are just for implementation purposes;
    c) readability.
    OK, so now: all of those could be achieved by naming all private methods with a __ prefix, or by a "private" keyword with no implications other than being information for the IDE (don't put those in autocompletion) and for other programmers (don't use it unless you really must). Hell, one could even require an unsafe-like keyword to access private methods, to really discourage this. I am asking because I work with some C# code and I keep changing private methods to public for test purposes, since many in-between private methods (like string generators for XML serialization) are very useful for debugging (like writing part of a string to a log file, etc.). So my question is: is there anything achieved by access restriction that couldn't be achieved by naming conventions without restricting access?
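    Since the question points at Python's convention, here is what the __ prefix actually buys there: the name is mangled, not protected, so it is advisory in exactly the way the question proposes. A quick demonstration:

        class Account:
            def __audit_token(self):            # mangled to _Account__audit_token
                return "internal"

            def audit(self):
                return self.__audit_token()     # fine from inside the class

        a = Account()
        print(a.audit())                        # internal
        print(a._Account__audit_token())        # still reachable if you insist
        try:
            a.__audit_token()                   # the convenient spelling is gone
        except AttributeError as e:
            print(e)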


  • UPS vs Solar Power in case of power failure for a server [on hold]

    - by Zen 8000k
    I am looking for a low-power, low-end PC able to run 24/7 without overheating, and a way to support it in case of power failure. Power failures can last up to 72 hours. The PC doesn't need a monitor or keyboard. A modem must also be protected in case of power failure. When I say low end, I don't mean crap: the CPU needs to be x86 and score at least 1k in this chart: http://www.cpubenchmark.net/index.php. What's the best way to do this?
    Edit: more info. I need to run a home server that will perform mainly light tasks. An x86 CPU sadly is the only route for my use. I want to be able to run the server and the router/modem in case of power failure. Now, regarding how long the power will fail:
    1) 1 hour is OK for most situations (say 90%).
    2) 3 hours is OK (say 98%).
    3) 6 hours is more than OK (say 99.5%).
    4) In extreme cases the power might fail for days. I believe this is very unlikely; really, how often will power fail for more than 3 hours? Once a year at best, which is too rare to care about.
    Given the above, I am looking for a cost-effective way to achieve 1-3 hours of power, or 6 hours if possible.
    Solutions: you guys gave me great ideas.
    1) Power generator: no good, as power will fail for 10 seconds before it takes over. Also, I read online that "clean" power generators cost $1.5k+, so they're out of budget. A non-clean generator might damage electronics, right?
    2) Solar power: I don't know for sure about this. It sounds like a great idea, honestly too good to be true. For only $200 I get 100+ W? What are the drawbacks?
    3) UPS: this seems to be the best option. The only problem is the cost: under $200 would be great; $400 is the budget limit.
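    For sizing the UPS option, rough back-of-the-envelope arithmetic helps (every figure below is an assumption, not a measurement):

        battery_wh = 2 * 12 * 9     # two 12 V, 9 Ah batteries: ~216 Wh stored
        load_w = 35 + 10            # assumed draw: low-power PC ~35 W, modem ~10 W
        inverter = 0.85             # typical inverter efficiency
        usable = 0.80               # avoid deep-discharging lead-acid below ~20%

        hours = battery_wh * inverter * usable / load_w
        print(round(hours, 1), "hours")   # ~3.3 hours on these assumptions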


  • ORACLE RIGHTNOW DYNAMIC AGENT DESKTOP CLOUD SERVICE - Putting the Dynamite into Dynamic Agent Desktop

    - by Andreea Vaduva
    There's a mountain of evidence to prove that a great contact centre experience results in happy, profitable and loyal customers. The very best contact centres are those with high first-contact resolution, customer satisfaction and agent productivity. But how many companies really believe they are the best? And how many believe that they can be? We know that with the right tools, companies can aspire to greatness, and achieve it. Core to this is ensuring their agents have the best tools that give them the right information at the right time, so they can focus on the customer and provide a personalised, professional and efficient service.

    Today there are multiple channels through which customers can communicate with you: phone, web, chat and social, to name a few. Regardless of how they communicate, customers expect a seamless, quality experience. Most contact centre agents need to switch between lots of different systems to locate the right information. This hampers their productivity, frustrates both the agent and the customer, and increases call handling times. With this in mind, Oracle RightNow has designed and refined a suite of add-ins to optimize the Agent Desktop. Each is designed to simplify and adapt the agent experience for any given situation and unify the customer experience across your media channels. Let's take a brief look at some of the most useful tools available and see how they make a difference.

    Contextual Workspaces: the screen where agents do their job. Agents don't want to be slowed down by busy screens, scrolling through endless tabs or links to find what they're looking for. They want quick, accurate and easy. Contextual Workspaces are fully configurable and, through workspace rules, apply if/then/else logic to display only the information the agent needs for the issue at hand. Assigned at the profile level, different levels of agent, from novice to the most experienced, get a screen that is relevant to their role and responsibilities and ensures the job is done quickly and efficiently the first time round.

    Agent Scripting: sometimes agents need to deliver difficult or sensitive messages while maximising the opportunity to cross-sell and up-sell. After all, contact centres are now increasingly viewed as revenue generators. Containing sophisticated branching logic, scripting helps agents capture the right level of information and guides them step by step, ensuring no mistakes, inconsistencies or missed opportunities.

    Guided Assistance: this is typically used to solve common troubleshooting issues, displaying a series of question-and-answer sets in a decision-tree structure. This means agents avoid having to bookmark favourites or rely on written notes. Agents find particular value in these guides for quickly crafting chat and email responses. What's more, by publishing guides in answers on support pages, customers can resolve issues themselves without needing to contact your agents. And because it can also accelerate agent ramp-up time, it ensures that even novice agents can solve customer problems like an expert.

    Desktop Workflow: take a step back and look at the full customer interaction of your agents. It probably spans multiple systems and multiple tasks. With Desktop Workflows you design workflows that span the full customer interaction from start to finish. As sequences of decisions and actions, workflows are unique in that they can create or modify different records and provide automation behind the scenes. This means your agents can save time and provide better quality of service by having the tools they need and the relevant information as required. And doing this boosts satisfaction among your customers, your agents and you. So win, win, win!

    I have highlighted above some of the tools which can be used to optimise the desktop; however, this is by no means an exhaustive list. In approaching your design, it's important to understand why and how your customers contact you in the first place. Once you have this list of "whys" and "hows", you can design effective policies and procedures to handle each category of problem, and then implement the right agent desktop user interface to support them. This will avoid duplication and wasted effort.

    Five top tips to take away:
    1. Start by working out "why" and "how" customers are contacting you.
    2. Implement a clean and relevant agent desktop to support your agents.
    3. If your workspaces are getting complicated, consider using Desktop Workflow to streamline the interaction.
    4. Enhance your knowledge base with guides. Agents can access them proactively, and they can be published on your web pages for customers to help themselves.
    5. Script any complex, critical or sensitive interactions to ensure consistency and accuracy.
    Desktop optimization is an ongoing process, so continue to monitor and incorporate feedback from your agents and your customers to keep your contact centre successful.

    Want to learn more? Having attended the 3-day Oracle RightNow Customer Service Administration class, your next step is to attend the Oracle RightNow Customer Portal Design and 2-day Dynamic Agent Desktop Administration class. Here you'll learn not only how to leverage the Agent Desktop tools but also how to optimise your self-service pages to enhance your customers' web experience.

    Useful resources: review the Best Practice Guide; review the tune-up guide.

    About the author: Angela Chandler joined Oracle University as a Senior Instructor through the RightNow Customer Experience acquisition. Her other areas of expertise include Business Intelligence and Knowledge Management. She currently delivers the following Oracle RightNow courses in the classroom and as a Live Virtual Class:
    - RightNow Customer Service Administration (3 days)
    - RightNow Customer Portal Design and Dynamic Agent Desktop Administration (2 days)
    - RightNow Analytics (2 days)
    - RightNow Chat Cloud Service Administration (2 days)


  • Top ten things that don't make sense in The Walking Dead

    - by iamjames
    For those of you that don't know, The Walking Dead is a popular American TV show on AMC about a group of people trying to survive in a zombie-filled world. Here's the top ten (okay, eleven) things that don't make sense on the show (and have never been explained):

    1) They never visit stores. No Walmarts, Kmarts, Targets, shopping malls, pawn shops, gas stations, etc. You'd think that would be the first place you'd visit for supplies, but they never have. Not once. There was a tiny corner store they visited in a small town, and while many products were already gone, they did find several useful items.

    2) They never raid houses. Why not? One would imagine that they would want to search houses for useful items, but they don't.

    3) They don't use two-way radios. Modern two-way radios have a 36-mile range. That's probably best-possible range, but even if the real range is only 10% of that, 3.6 miles, that's still more than enough for most situations: the occasional "hey, zombies attacking, can you give me a hand?" or "there's zombies walking by, stay inside until they leave" or "remember to pick up milk at the store, love mom". And yes, they would need batteries or recharging, but they have been using gas-powered generators on the show, and I'm sure a car charger would work.

    4) They use gas-guzzling vehicles. Every vehicle they have is from the 80s or 90s, except for the new Kia SUV that's there for product placement. Why? They should all be driving new small SUVs or hybrids. Visit a dealership and steal more fuel-efficient vehicles, because while the Walmarts might be empty from people raiding them for supplies, I'm sure most people weren't thinking "Gee, I should go car shopping" when the infection hit.

    5) They drive a motorcycle. Seriously? Let's find the least protective vehicle and drive that. And while motorcycles get reasonable gas mileage, 5 people in an SUV get better gas mileage per person than 5 people all driving motorcycles, so it doesn't make economic sense either.

    6) They drive loud vehicles. The motorcycle used is commonly referred to as a chopper and is about as loud as a motorcycle can get. The zombies are attracted to loud noise, so wouldn't it make more sense to drive vehicles that make less sound? Because as soon as you stop the bike and get off, you're surrounded by zombies that heard you coming. And it's not just the bike; the ~1980s Chevy SUV in the show is also very loud.

    7) They never run out of food. Seems like that would be an almost daily struggle, keeping enough food available for about a dozen people, yet I've never seen them visit a grocery store or local convenience store to stock up.

    8) They don't carry swords, machetes, clubs, etc. Let's face it, biting is not a very effective means of attack. It's good for animals because they have fangs and little else, but humans have been finding better ways of killing each other since forever. So why doesn't everyone on the show carry a sword or machete or at least a baseball bat? Anything is better than wasting valuable bullets all the time. Sure, a dozen zombies approaching? Shoot them. One zombie approaching? Save the bullet; cut off its head.

    9) They do not wear protective clothing. Human teeth are not exactly the sharpest teeth in the animal kingdom. The leather shoes your dog ripped to shreds within minutes would probably take you days to bite through. So why do they walk around half-naked? Yes, I know it's hot in Atlanta, but you'd think they'd at least have some tough leather coats or something for protection. Maybe put a few small vent holes in the fabric if it's really hot. Or better: make your own chainmail. Chainmail was used for thousands of years for protection from swords and is still used by scuba divers for protection from sharks. If swords and sharks can't puncture it, human teeth don't stand a chance.

    10) They don't build barricades or dig trenches around properties. In season 2 they stayed at a farm in the middle of nowhere. While being far away from people is a great way to stay far away from zombies, it would still make sense to build some sort of defenses. Hordes of zombies would knock down almost any fence, but what about a trench or moat? Maybe something not too wide, so it can be jumped over easily but a zombie would fall into it, because I haven't seen too many jumping zombies on the show.

    11) They don't live in a mall or tall office building. A mall would be perfect. Malls have large security gates designed to keep even hundreds of people from breaking in, and they offer lots of supplies and food. They're usually hundreds of thousands of square feet and fully enclosed; one could probably live an entire life happily in a mall. A tall office building with an on-site cafeteria would be another good choice. They also usually offer good security, office furniture could be pushed out of the windows to crush approaching zombies, and the cafeteria is usually stocked to feed hundreds or thousands of office workers, so food wouldn't be a problem for a long time.

    So there you have it: eleven things that don't make sense in The Walking Dead. Have any of your own you'd like to add, or were any of these things covered in the show? Let me know in the comments.


  • Computer Networks UNISA - Chap 14 – Insuring Integrity & Availability

    - by MarkPearl
    After reading this section you should be able to:
    - Identify the characteristics of a network that keep data safe from loss or damage
    - Protect an enterprise-wide network from viruses
    - Explain network-level and system-level fault tolerance techniques
    - Discuss issues related to network backup and recovery strategies
    - Describe the components of a useful disaster recovery plan and the options for disaster contingencies

    What are integrity and availability? Integrity is the soundness of a network's programs, data, services, devices, and connections. Availability is how consistently and reliably a file or system can be accessed by authorized personnel. A number of phenomena can compromise both integrity and availability, including security breaches, natural disasters, malicious intruders, power flaws, and human error. Although you cannot predict every type of vulnerability, you can take measures to guard against the most damaging events. Some guidelines:
    - Allow only network administrators to create or modify NOS and application system users
    - Monitor the network for unauthorized access or changes
    - Record authorized system changes in a change management system
    - Install redundant components
    - Perform regular health checks on the network
    - Check system performance, error logs, and the system log book regularly
    - Keep backups
    - Implement and enforce security and disaster recovery policies
    These are just some of the basics.

    Malware. Malware refers to any program or piece of code designed to intrude upon or harm a system or its resources. Types of malware include boot sector viruses, macro viruses, file infector viruses, worms, Trojan horses, network viruses, and bots. Common characteristics of malware include encryption, stealth, polymorphism, and time dependence.

    Malware protection. Various tools, called anti-malware software, protect you from malware by monitoring your system for indications that a program is performing potential malware operations. Detection techniques include signature scanning, integrity checking, and monitoring for unexpected file changes or virus-like behaviours. It is important to decide where anti-malware tools will be installed and to find a balance between performance and protection. Several general-purpose malware policies can be implemented to protect your network:
    - Every computer in the organization should be equipped with malware detection and cleaning software that runs regularly
    - Users should not be allowed to alter or disable the anti-malware software
    - Users should know what to do if the anti-malware program detects malware
    - Users should be prohibited from installing any unauthorized software on their systems
    - System-wide alerts should be issued to network users if a serious malware virus has been detected

    Fault tolerance. Besides guarding against malware, another key factor in maintaining the availability and integrity of data is fault tolerance: the ability of a system to continue performing despite an unexpected hardware or software malfunction. Fault tolerance can be realized in varying degrees; the optimal level for a system depends on how critical its services and files are to productivity. Generally, the more fault tolerant the system, the more expensive it is. Areas to consider for fault tolerance include environment (temperature and humidity), power, topology and connectivity, servers, and storage.

    Power. Typical power flaws include:
    - Surges: a brief increase in voltage due to lightning strikes, solar flares, or some idiot at City Power
    - Noise: fluctuation in voltage levels caused by other devices on the network or electromagnetic interference
    - Brownout: a momentary sag in voltage
    - Blackout: a complete power loss
    There are various alternate power sources to consider, including UPSs and generators. UPSs fall into two categories:
    - Standby UPS: provides continuous power when the mains goes down (with a brief switch-over period)
    - Online UPS: the device receives power through the UPS all the time (and the UPS is charged continuously)

    Servers. There are various fault tolerance techniques for servers. Server mirroring is an option where one device or component duplicates the activities of another; it is generally an expensive process. Clustering is a fault tolerance technique that links multiple servers together to appear as a single server. They share processing and storage responsibilities, and if one unit in the cluster goes down, another unit can be brought in to replace it.

    Storage. There are various techniques available, including RAID arrays, NAS (Network Attached Storage), and SANs (Storage Area Networks).

    Data backup. A backup is a copy of data or program files created for archiving or safekeeping. Many different backup options exist, varying in cost and speed, including optical media, tape backup, external disk drives, and network backups.

    Backup strategy. After selecting the appropriate tool for performing your servers' backup, devise a backup strategy to guide you through performing reliable backups that provide maximum data protection. Questions that should be answered include:
    - What data must be backed up?
    - At what time of day or night will the backups occur?
    - How will you verify the accuracy of the backups?
    - Where and for how long will backup media be stored?
    - Who will take responsibility for ensuring that backups occurred?
    - How long will you save backups?
    - Where will backup and recovery documentation be stored?
    Different backup methods provide varying levels of certainty and corresponding labour cost. There are also different ways to determine which files should be backed up:
    - Full backup: all data on all servers is copied to storage media
    - Incremental backup: only data that has changed since the last full or incremental backup is copied to a storage medium
    - Differential backup: only data that has changed since the last full backup is copied to a storage medium

    Disaster recovery. Disaster recovery is the process of restoring your critical functionality and data after an enterprise-wide outage has occurred. A disaster recovery plan is for extreme scenarios (i.e. fire, line fault, etc.). A cold site is a place where the computers, devices, and connectivity necessary to rebuild a network exist, but they are not appropriately configured. A warm site is a place where the computers, devices, and connectivity necessary to rebuild a network exist, with some devices appropriately configured. A hot site is a place where the computers, devices, and connectivity necessary to rebuild a network exist and all are appropriately configured.
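    To make the full/incremental/differential distinction concrete, here is a small sketch that selects files by modification time (illustrative only: real backup software typically tracks archive bits or snapshots rather than mtimes):

        import os

        def files_to_back_up(root, last_full, last_any, mode):
            # mode: 'full'         -> everything
            #       'incremental'  -> changed since the last backup of any kind
            #       'differential' -> changed since the last *full* backup
            # last_full / last_any are epoch timestamps of those previous runs
            cutoff = {"full": 0.0, "incremental": last_any, "differential": last_full}[mode]
            selected = []
            for dirpath, _dirs, names in os.walk(root):
                for name in names:
                    path = os.path.join(dirpath, name)
                    if os.path.getmtime(path) > cutoff:
                        selected.append(path)
            return selected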

