Search Results

Search found 936 results on 38 pages for 'noob lurker'.

Page 17/38 | < Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >

  • Text box input target iframe

    - by alex
    I'm an HTML noob, and I just wanted to know if it's possible to make a text box in which you could type a website address and, when you click submit, it will load that website in the iframe of your choice.

    Read the article

  • Capistrano asks for SSH password when deploying from local machine to server

    - by GhostRider
    When I try to ssh to a server, I'm able to do it as my id_rsa.pub key is added to the authorized keys in the server. Now when I try to deploy my code via Capistrano to the server from my local project folder, the server asks for a password. I'm unable to understand what could be the issue if I'm able to ssh and unable to deploy to the same server.

        $ cap deploy:setup
        "no seed data"
          triggering start callbacks for `deploy:setup'
        * 13:42:18 == Currently executing `multistage:ensure'
        *** Defaulting to `development'
        * 13:42:18 == Currently executing `development'
        * 13:42:18 == Currently executing `deploy:setup'
          triggering before callbacks for `deploy:setup'
        * 13:42:18 == Currently executing `db:configure_mongoid'
        * executing "mkdir -p /home/deploy/apps/development/flyingbird/shared/config"
          servers: ["dev1.noob.com", "176.9.24.217"]
        Password:

    Cap script:

        # gem install capistrano capistrano-ext capistrano_colors
        begin; require 'capistrano_colors'; rescue LoadError; end
        require "bundler/capistrano"

        # RVM bootstrap
        # $:.unshift(File.expand_path('./lib', ENV['rvm_path']))
        require 'rvm/capistrano'
        set :rvm_ruby_string, 'ruby-1.9.2-p290'
        set :rvm_type, :user # or :user

        # Application setup
        default_run_options[:pty] = true   # allow pseudo-terminals
        ssh_options[:forward_agent] = true # forward SSH keys (this will use your SSH key to get the code from git repository)
        ssh_options[:port] = 22

        set :ip, "dev1.noob.com"
        set :application, "flyingbird"
        set :repository, "repo-path"
        set :scm, :git
        set :branch, fetch(:branch, "master")
        set :deploy_via, :remote_cache
        set :rails_env, "production"
        set :use_sudo, false
        set :scm_username, "user"
        set :user, "user1"

        set(:database_username) { application }
        set(:production_database) { application + "_production" }
        set(:staging_database) { application + "_staging" }
        set(:development_database) { application + "_development" }

        role :web, ip                  # Your HTTP server, Apache/etc
        role :app, ip                  # This may be the same as your `Web` server
        role :db, ip, :primary => true # This is where Rails migrations will run

        # Use multi-staging
        require "capistrano/ext/multistage"
        set :stages, ["development", "staging", "production"]
        set :default_stage, rails_env

        before "deploy:setup", "db:configure_mongoid"

        # Uncomment if you use any of these databases
        after "deploy:update_code", "db:symlink_mongoid"
        after "deploy:update_code", "uploads:configure_shared"
        after "uploads:configure_shared", "uploads:symlink"
        after 'deploy:update_code', 'bundler:symlink_bundled_gems'
        after 'deploy:update_code', 'bundler:install'
        after "deploy:update_code", "rvm:trust_rvmrc"

        # Use this to update crontab if you use 'whenever' gem
        # after "deploy:symlink", "deploy:update_crontab"

        if ARGV.include?("seed_data")
          after "deploy", "db:seed"
        else
          p "no seed data"
        end

        # Custom tasks to handle resque and redis restart
        before "deploy", "deploy:stop_workers"
        after "deploy", "deploy:restart_redis"
        after "deploy", "deploy:start_workers"
        after "deploy", "deploy:cleanup"

        'Create symlink for public uploads'
        namespace :uploads do
          task :symlink do
            run <<-CMD
              rm -rf #{release_path}/public/uploads &&
              mkdir -p #{release_path}/public &&
              ln -nfs #{shared_path}/public/uploads #{release_path}/public/uploads
            CMD
          end

          task :configure_shared do
            run "mkdir -p #{shared_path}/public"
            run "mkdir -p #{shared_path}/public/uploads"
          end
        end

        namespace :rvm do
          desc 'Trust rvmrc file'
          task :trust_rvmrc do
            run "rvm rvmrc trust #{current_release}"
          end
        end

        namespace :db do
          desc "Create mongoid.yml in shared path"
          task :configure_mongoid do
            db_config = <<-EOF
        defaults: &defaults
          host: localhost

        production:
          <<: *defaults
          database: #{production_database}

        staging:
          <<: *defaults
          database: #{staging_database}
        EOF

            run "mkdir -p #{shared_path}/config"
            put db_config, "#{shared_path}/config/mongoid.yml"
          end

          desc "Make symlink for mongoid.yml"
          task :symlink_mongoid do
            run "ln -nfs #{shared_path}/config/mongoid.yml #{release_path}/config/mongoid.yml"
          end

          desc "Fill the database with seed data"
          task :seed do
            run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake db:seed"
          end
        end

        namespace :bundler do
          desc "Symlink bundled gems on each release"
          task :symlink_bundled_gems, :roles => :app do
            run "mkdir -p #{shared_path}/bundled_gems"
            run "ln -nfs #{shared_path}/bundled_gems #{release_path}/vendor/bundle"
          end

          desc "Install bundled gems"
          task :install, :roles => :app do
            run "cd #{release_path} && bundle install --deployment"
          end
        end

        namespace :deploy do
          task :start, :roles => :app do
            run "touch #{current_path}/tmp/restart.txt"
          end

          desc "Restart the app"
          task :restart, :roles => :app do
            run "touch #{current_path}/tmp/restart.txt"
          end

          desc "Start the workers"
          task :stop_workers do
            run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake resque:stop_workers"
          end

          desc "Restart Redis server"
          task :restart_redis do
            "/etc/init.d/redis-server restart"
          end

          desc "Start the workers"
          task :start_workers do
            run "cd #{current_path}; RAILS_ENV=#{default_stage} bundle exec rake resque:start_workers"
          end
        end

    Read the article

  • Prevent auto mounting Android sdcard under Linux Mint

    - by BullShark
    I recently obtained an older Android phone so that I could test Android apps on it. I've needed it because I have a Nexus 7 but no older Android versions, hardware, etc. to test on. I'm having a problem with it under Linux Mint with Cinnamon. When I plug the phone in, or remove the sdcard from the phone and plug it back in while the phone is connected, Linux automatically mounts the sdcard. This is a problem because once it is mounted under Linux, it dismounts from the phone (running Android 2.3.5), and I can no longer test the Android apps I write that require the sdcard to be present and writable.

    I went to Menu > System Tools > System Settings > System Details > Removable Media, and it brings up this window. I have changed the settings to always "Ask what to do" under "Select how media should be handled". However, the sdcard still gets mounted, and then I am asked how I want to open these files (media players, photo importers, file browser, etc.). If I click the checkbox for "Never prompt or start programs on media insertion", then the sdcard is mounted and I am not asked how to open these files. Eject is just a noob word for Ubuntu users that means umount (unmount), like "Administrator" is another Ubuntu noob word for the root user. And if I unmount the sdcard, the phone doesn't recognize it again until I take the sdcard out and plug it back in. The phone sees it for a brief moment until Linux Mint takes it over.

    There are 2 possible solutions, and maybe more:

    1) Prevent Linux from automounting sdcards somehow.
    2) Tell Android not to allow the computer it is plugged into to take over the sdcard - HOW?

    Edit: I found out how to prevent the sdcard from being automatically mounted. Now it gets recognized by Linux:

        bullshark@beastlinux ~ $ dmesg | tail -n 25
        [597212.218323] sd 21:0:0:0: [sde] Attached SCSI removable disk
        [597212.218639] sr 21:0:0:1: Attached scsi CD-ROM sr2
        [597212.218910] sr 21:0:0:1: Attached scsi generic sg7 type 5
        [597217.139373] sd 21:0:0:0: [sde] 3862528 512-byte logical blocks: (1.97 GB/1.84 GiB)
        [597217.140726] sd 21:0:0:0: [sde] No Caching mode page present
        [597217.140735] sd 21:0:0:0: [sde] Assuming drive cache: write through
        [597217.143595] sd 21:0:0:0: [sde] No Caching mode page present
        [597217.143602] sd 21:0:0:0: [sde] Assuming drive cache: write through
        [597217.152240] sde: sde1
        [597389.751008] 4:2:1: cannot get freq at ep 0x84
        [597390.238742] 4:2:1: cannot get freq at ep 0x84
        [597624.903132] sde: detected capacity change from 1977614336 to 0
        [597637.677763] sd 21:0:0:0: [sde] 3862528 512-byte logical blocks: (1.97 GB/1.84 GiB)
        [597637.679616] sd 21:0:0:0: [sde] No Caching mode page present
        [597637.679626] sd 21:0:0:0: [sde] Assuming drive cache: write through
        [597637.682508] sd 21:0:0:0: [sde] No Caching mode page present
        [597637.682515] sd 21:0:0:0: [sde] Assuming drive cache: write through
        [597637.692758] sde: sde1
        [597661.857979] sde: detected capacity change from 1977614336 to 0
        [597688.775455] sd 21:0:0:0: [sde] 3862528 512-byte logical blocks: (1.97 GB/1.84 GiB)
        [597688.776814] sd 21:0:0:0: [sde] No Caching mode page present
        [597688.776823] sd 21:0:0:0: [sde] Assuming drive cache: write through
        [597688.780055] sd 21:0:0:0: [sde] No Caching mode page present
        [597688.780062] sd 21:0:0:0: [sde] Assuming drive cache: write through
        [597688.788639] sde: sde1
        bullshark@beastlinux ~ $

    However, the phone still unmounts the sdcard upon being detected by Linux. Linux detects but does not mount, and a few seconds later:

    Edit #2 (Solution): I solved this one by changing the USB connection type (it was USB mass storage):

    Read the article

  • 2D Rendering with OpenGL ES 2.0 on Android (matrices not working)

    - by TranquilMarmot
    So I'm trying to render two moving quads, each at a different location. My shaders are as simple as possible (vertices are only transformed by the modelview-projection matrix; there's only one color). Whenever I try to render something, I only end up with slivers of color! I've only done 3D rendering in OpenGL before, so I'm having issues with 2D stuff.

    Here's my basic rendering loop, simplified a bit (I'm using the matrix manipulation methods provided by android.opengl.Matrix, and program is a custom class I created that just calls GLES20.glUniformMatrix4fv()):

        Matrix.orthoM(projection, 0, 0, windowWidth, 0, windowHeight, -1, 1);
        program.setUniformMatrix4f("Projection", projection);

    At this point, I render the quads (this is repeated for each quad):

        Matrix.setIdentityM(modelview, 0);
        Matrix.translateM(modelview, 0, quadX, quadY, 0);
        program.setUniformMatrix4f("ModelView", modelview);
        quad.render(); // calls glDrawArrays

    and all I see is a sliver of the color each quad is! I'm at my wits' end here; I've tried everything I can think of, and I'm at the point where I'm screaming at my computer and tossing phones across the room. Anybody got any pointers? Am I using ortho wrong? I'm 100% sure I'm rendering everything at a Z value of 0. I tried using frustumM instead of orthoM, which made it so that I could see the quads, but they would get totally skewed whenever they moved, which makes sense if I correctly understand the way a frustum works (it's more for 3D rendering, anyway). If it makes any difference, I defined my viewport with

        GLES20.glViewport(0, 0, windowWidth, windowHeight);

    where windowWidth and windowHeight are the same values that are passed to orthoM. It might be worth noting that the android.opengl.Matrix methods take an offset as the second parameter so that multiple matrices can be shoved into one array; that's what the first 0 is for. For reference, here's my vertex shader code:

        uniform mat4 ModelView;
        uniform mat4 Projection;

        attribute vec4 vPosition;

        void main() {
            mat4 mvp = Projection * ModelView;
            gl_Position = vPosition * mvp;
        }

    I tried swapping Projection * ModelView with ModelView * Projection, but now I just get some really funky looking shapes...

    EDIT: Okay, I finally figured it out! (Note: since I'm new here (longtime lurker!) I can't answer my own question for a few hours, so as soon as I can I'll move this into an actual answer.) I changed

        Matrix.orthoM(projection, 0, 0, windowWidth, 0, windowHeight, -1, 1);

    to

        float ratio = windowWidth / windowHeight;
        Matrix.orthoM(projection, 0, 0, ratio, 0, 1, -1, 1);

    I then had to scale my projection matrix to make it a lot smaller with Matrix.scaleM(projection, 0, 0.05f, 0.05f, 1.0f);. I then added an offset to the modelview translations to simulate a camera so that I could center on my action (so Matrix.translateM(modelview, 0, quadX, quadY, 0); was changed to Matrix.translateM(modelview, 0, quadX + camX, quadY + camY, 0);). Thanks for the help, all! (See the sketch after this entry for the working setup put together.)

    Read the article
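
    Below is a minimal sketch of the fix the poster describes (aspect-ratio ortho projection, scaled down, plus a simple 2D camera offset in the modelview). Only the android.opengl.Matrix and GLES20 calls and the 0.05 factor come from the post; the Renderer2D/Quad names and the shaderProgram handle are illustrative assumptions.

        import android.opengl.GLES20;
        import android.opengl.Matrix;

        // Hypothetical wrapper around the setup described above; the class layout is an assumption.
        public class Renderer2D {

            public interface Quad { void render(); }   // stand-in for the poster's quad class

            private final float[] projection = new float[16];
            private final float[] modelview  = new float[16];

            // Call once the surface size is known (e.g. from onSurfaceChanged).
            public void setupProjection(int shaderProgram, float windowWidth, float windowHeight) {
                GLES20.glViewport(0, 0, (int) windowWidth, (int) windowHeight);

                // Ortho volume spanning 0..aspect horizontally and 0..1 vertically,
                // i.e. the fix the poster arrived at, instead of 0..windowWidth/Height.
                float ratio = windowWidth / windowHeight;
                Matrix.orthoM(projection, 0, 0f, ratio, 0f, 1f, -1f, 1f);

                // Shrink world units so quads placed in larger coordinates stay in view
                // (0.05 is the factor quoted in the post).
                Matrix.scaleM(projection, 0, 0.05f, 0.05f, 1.0f);

                int loc = GLES20.glGetUniformLocation(shaderProgram, "Projection");
                GLES20.glUniformMatrix4fv(loc, 1, false, projection, 0);
            }

            // Call per quad, per frame; camX/camY act as a simple 2D camera offset.
            public void drawQuad(int shaderProgram, Quad quad,
                                 float quadX, float quadY, float camX, float camY) {
                Matrix.setIdentityM(modelview, 0);
                Matrix.translateM(modelview, 0, quadX + camX, quadY + camY, 0f);

                int loc = GLES20.glGetUniformLocation(shaderProgram, "ModelView");
                GLES20.glUniformMatrix4fv(loc, 1, false, modelview, 0);

                quad.render();   // issues glDrawArrays, as in the post
            }
        }

    The uniform names and the scale factor are taken from the post itself; everything else is an assumption about how the surrounding renderer is organized.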

  • A (Late) Meme Monday Post: On SQLFamily

    - by Argenis
    Yesterday a member of the SQL community whom I deeply admire sent me a DM on Twitter asking whether I had done a SQLFamily post for Thomas LaRock’s (blog|@SQLRockstar) Meme Monday for November. I replied that I had not, and I regretted not having done so. A subtle DM followed my response: “Get on it, you have all week”. And indeed I must. So here’s an attempt to express some of my feelings on a community that has catapulted my career like nothing else before I embraced it.

    Nanos gigantum humeris insidentes

    I stand on the shoulders of giants. My SQLFamily has given me support at all levels, professionally and personally. There is never a lack of will to help and provide advice to others in this community. And I do my best to help: on #SQLHelp on Twitter, via email, or even on the phone. I expect no retribution, because I know that when and if I do run into problems, my SQLFamily will be there for me. I have met some of the most humble, dedicated and professional people in the SQL community. And some of them have pretty big titles: MVPs, MCMs, Regional Mentors, leaders of PASS, SQLCAT members, and even PMs and Devs on the SQL Server team. All are welcome, and that includes YOU! I have also met some people who are rather reserved and don’t participate as much in the community, for whatever reason. Be that as it may, let it be known to all that we are a very welcoming community - heck, some of my closest friends and people I can count on in the community have completely opposite political views. We share one goal: to get better and help others get better. Even if you are a lurker, my hope is that one day you’ll decide to give back some of what you have learned.

    You have to take it to the next level

    In one of my previous jobs, as an IT supervisor, I used to tell my team all the time about the benefits of continuous education and self-driven learning. Shortly after I left that job, the company went bankrupt and some of my staff got laid off - some without any severance pay whatsoever. I eventually found out that some of them had a really hard time finding another job, because their skills were simply outdated. They had become stale professionals. Don’t be one of them. If you don’t take advantage of these learning resources, somebody else will - and that person has an advantage over you when applying for that awesome job position that just opened. There’s a severe shortage of good DBAs and DB devs out there. What’s your excuse for not being excellent? Even if your knowledge of SQL Server is at the beginner level, you really have no excuse not to get better. Just go to SQLUniversity and learn from there. Don’t get stale!

    Thank You

    To all of you in the SQL community who put so much time and energy into helping others, my deepest gratitude. I can’t wait to meet you all again at the next event and share our SQL stories over a pint of beer (or a shot of Jaeger). Cheers! -Argenis

    Read the article

  • Need Help finding an appropriate task assignment algorithm for a college project involving coordination

    - by Trif Mircea
    I am a long-time lurker here and have found over time many answers regarding jQuery and web development topics, so I decided to ask a question of my own. This time I have to create a C++ project for college which should help manage the workflow of a company providing all kinds of services through in-the-field teams. The ideas I have so far are:

    - a client-server application; the server is a dispatcher where all the orders from clients arrive, and the clients are mobile devices (PDAs), each team in the field having one
    - an order from a client is a task; each task is made up of a series of subtasks
    - you have a database with estimations of how long a task should take to complete
    - you also know what tasks or subtasks each team in the field can perform, based on what kind of specialists make up the team (not going to complicate the problem by adding needed materials; it is assumed that if a member of a team can perform a subtask, he has the stuff needed)

    Now, knowing these factors, what would a good task assignment algorithm be? The criteria are: how many tasks a team can do, and how many tasks they have in the queue; it could also be location (how far away they are from the place), but I don't think I can implement that. It needs to be efficient and also to adapt quickly if the human dispatcher manually assigns a task. Any help or leads would be really appreciated. Also, I'm not 100% sure about the idea, so if there is another way you would go about creating such an application, please share, even if it is just a quick outline. I have to write a theoretical part too, so even if the ideas are far more complex than what I outlined, that would be OK; I'd write those up and implement what I can. Edit: C++ is the only language I know, unfortunately. (See the sketch after this entry for one way to score teams when assigning a task.)

    Read the article
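
    One way to make the "who gets this task" rule concrete is the greedy scoring the question hints at: among teams whose skills cover the task, pick the one with the least pending work. A rough sketch follows, shown in Java for brevity even though the asker plans to use C++; the Team/Task/Dispatcher names and their fields are assumptions, not part of the question.

        import java.util.*;

        // Illustrative names only; the question does not define these classes.
        class Team {
            final String id;
            final Set<String> skills;                  // subtask types this team can perform
            final Deque<Task> queue = new ArrayDeque<>();

            Team(String id, Set<String> skills) { this.id = id; this.skills = skills; }

            int pendingTime() {                        // estimated work already queued
                return queue.stream().mapToInt(t -> t.estimatedTime).sum();
            }
        }

        class Task {
            final String id;
            final Set<String> requiredSkills;          // union of its subtasks' types
            final int estimatedTime;

            Task(String id, Set<String> requiredSkills, int estimatedTime) {
                this.id = id;
                this.requiredSkills = requiredSkills;
                this.estimatedTime = estimatedTime;
            }
        }

        class Dispatcher {
            // Greedy rule from the question: among capable teams, pick the least loaded.
            // Returns the chosen team, or null if no team is capable (manual dispatch needed).
            static Team assign(Task task, List<Team> teams) {
                return teams.stream()
                        .filter(t -> t.skills.containsAll(task.requiredSkills))
                        .min(Comparator.comparingInt(Team::pendingTime))
                        .map(t -> { t.queue.addLast(task); return t; })
                        .orElse(null);
            }
        }

    Location could be folded in by adding a distance term to pendingTime(), and a manual assignment by the human dispatcher simply bypasses assign() and pushes the task onto a chosen team's queue.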

  • ASP.NET MVC twitter/myspace style routing

    - by Astrofaes
    Hi guys, this is my first post after being a long-time lurker, so please be gentle :-) I have a website similar to Twitter, in that people can sign up and choose a 'friendly URL', so on my site they would have something like:

        mydomain.com/benjones

    I also have root-level static pages such as:

        mydomain.com/about

    and of course my homepage:

        mydomain.com/

    I'm new to ASP.NET MVC 2 (in fact I just started today) and I've set up the following routes to try and achieve the above:

        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
            routes.IgnoreRoute("content/{*pathInfo}");
            routes.IgnoreRoute("images/{*pathInfo}");

            routes.MapRoute("About", "about",
                new { controller = "Common", action = "About" });

            // User profile sits at root level so check for this before displaying the homepage
            routes.MapRoute("UserProfile", "{url}",
                new { controller = "User", action = "Profile", url = "" });

            routes.MapRoute("Home", "",
                new { controller = "Home", action = "Index", id = "" });
        }

    For the most part this works fine; however, my homepage is not being triggered! Essentially, when you browse to mydomain.com, it seems to trigger the UserProfile route with an empty {url} parameter, so the homepage is never reached. Any ideas on how I can show the homepage?

    Read the article

  • Understanding Basic Prototyping & Updating Key/Value pairs

    - by JordanD
    First-time poster, long-time lurker. I'm trying to learn some more advanced features of JavaScript, and have two objectives based on the pasted code below:

    - I would like to add methods to a parent class in a specific way (by invoking prototype).
    - I intend to update the declared key/value pairs each time I make an associated method call.

    execMTAction, as seen in TheSuper, will execute each function call regardless. This is by design. Here is the code:

        function TheSuper() {
            this.options = {
                componentType: "UITabBar",
                componentName: "Visual Browser",
                componentMethod: "select",
                componentValue: null
            };
            execMTAction(this.options.componentType, this.options.componentName,
                         this.options.componentMethod, this.options.componentValue);
        };

        TheSuper.prototype.tapUITextView = function(val1, val2) {
            this.options = {
                componentType: "UITextView",
                componentName: val1,
                componentMethod: "entertext",
                componentValue: val2
            };
        };

    I would like to execute something like this (very simple):

        theSuper.executeMTAction();
        theSuper.tapUITextView("a", "b");

    Unfortunately I am unable to overwrite the this.options in the parent, and the .tapUITextView method throws an error saying it cannot find executeMTAction. All I want to do, like I said, is to update the parameters in the parent, then have executeMTAction run each time I make any method call. That's it. Any thoughts? I understand this is basic, but I'm coming from a long-time procedural career, and JavaScript seems to have this weird confluence of OO/procedural that I'm having a bit of difficulty with. Thanks for any input!

    Read the article

  • FreeNAS on Dell PowerVault 745N: 2TB Limit?

    - by willoller
    I want to purchase 2 x 2TB drives and install FreeNAS on my Dell PowerVault 745N. People on the internets seem to be having trouble with the MD3000 firmware, and I want to make sure I can solve any issues before buying the drives. Before I invest, I have 3 questions:

    1. Is there a partition size limit determined by the RAID controller? That is, could I have a striped 4TB partition?
    2. The spec sheets make me wonder if the RAID controller needs all 4 drives in order to work. Is there any reason this will have to run in RAID5?
    3. If I buy 4 matching drives, would the controller support a RAID6 configuration?

    I'm basically new to all this RAID stuff - sorry for any noob questions.

    Read the article

  • Best Practices vs Reality

    - by RonHill
    On a scale depicting how closely best practices are followed, with "always" on one end and "never" on the other, my current company falls uncomfortably close to the latter. Just a couple of trivial examples:

    - We have no code review process.
    - There is very little documentation despite a very large code base (and some of it is blatantly incorrect/misleading).
    - Untested/buggy/uncompilable code is frequently checked in to source control.
    - It is comically complicated to create a debuggable build for some of our components because of their underlying architecture.
    - Unhandled exceptions are not uncommon in our releases.
    - Empty catch { } blocks are everywhere.

    Now, with the understanding that it's neither practical nor realistic to follow ALL best practices ALL the time, my question is this: how closely have commonly accepted best practices been followed at the companies you've worked for? I'm kind of a noob - this is only the second company I've worked for - so I'm not sure if I'm just more of an anal-retentive coder or if I've just ended up at mediocre companies. My guess (hope?) is the latter, but a coworker with way more experience than me says every company he's ever worked for is like this. Given the obvious benefits of following most best practices most of the time, I find it hard to believe it's like this everywhere. Am I wrong?

    Read the article

  • Ubuntu 13.04 running really slow and Hanging

    - by CAM
    Until recently I had been running 13.04 on my laptop very happily. This morning, however, I turned on my laptop to find it running really slowly. It takes 5 minutes to load a program, and even then the program freezes; I have had 3 system hangs this morning already. The Unity desktop appears to run OK, but programs do not.

    Things I have tried so far:

    - Checking for proprietary graphics drivers - none shown as available (I have Bumblebee running already).
    - Using the recovery boot options from GRUB to repair broken packages.

    Recent changes: updated the computer, installed some indicator applets which have worked fine for me before.

    System specs: Asus U36s, Intel Core i5-2450M 2.5GHz, 4GB RAM, Nvidia GeForce 610M 1GB, dual boot Win7 & Ubuntu 13.04.

    I'm a bit of a noob with Ubuntu but am happy enough running stuff in the terminal if you will advise me on what to run. I'm just a bit stuck on what to do to fix this without a reinstall. Thanks a lot for your help.

    Read the article

  • Server 2008 print server down / access denied

    - by johnnyb10
    I have two Server 2008 servers (both running as VMs in VMware). One is a Full installation, and the other is a Server Core installation. I just installed Print Services on both of them. In Print Management on the Full server, I added the Server Core print server (so now two print servers are listed in Print Management). However, the icon for my Server Core print server has a red, down-pointing arrow (indicating that it is down, I suppose). And when I right-click it and click Add Printer, I get a message saying that access is denied. Can someone tell me how to bring up, or at least check on the status of, the Server Core print server? Obviously, I'm somewhat of a noob with this stuff. Thanks in advance...

    Read the article

  • Huawei E170 on Linux?

    - by torbengb
    Related to this question, I need to know whether the specific combination of Ubuntu 9.10 + the Huawei E170 (HSDPA broadband modem USB stick) will work. Bonus points for a link to a webpage that describes exactly how it is done! Specifically, I'm in Austria and the telco is A1, but I hope that the setup would be the same regardless of location/provider. I have found these two pages, but they seem horribly complicated to a Linux noob. Is there a simpler way, or do I really need to dive into that? Your input is much appreciated! If I can get confirmation that it's supported, then I'd switch to Ubuntu Netbook Remix, because I'm already running Ubuntu on the main PC at home and I'd like to keep things simple.

    Read the article

  • How to deploy local project to Amazon

    - by Nai
    I have a small web app written in Python/Django which works fine on my local machine. I've been tinkering and setting up my server on the free tier of Amazon EC2 by following online tutorials. However, the tutorials I have found so far show you how to set up your instance and stop there. So my question is, how do I get my local web app onto my Amazon instance? FYI, I'm a sysadmin/web dev noob. Thanks.

    Read the article

  • Scheduling of jobs in the presence of constraints in Java

    - by Asgard
    I want to know how to implement a solution to this problem: a task is performed by several people running some basic jobs with known durations in time units (days, months, etc.). The execution of the jobs can be subject to time constraints: a job, for example, cannot start until another (or others) has finished, and so on. I want to design and build an application to check the correctness of the job activities and to propose a schedule of the jobs, if one exists, which respects the constraints. The input must provide the jobs and the associated constraints. The expected output is the scheduling of the jobs.

    The specification of an elementary job consists of the pair <job-id, duration>. A constraint is expressed by means of a quintuple of the type <S/E, id-job1, B/A, S/E, id-job2>: the beginning (S) or the end (E) of job id-job1 must take place before (B) / after (A) the beginning (S) / end (E) of job id-job2. If there are no dependencies between some jobs, then those jobs can be done in parallel.

    As a simple example, consider the input:

        jobs
          jobs(0, 3)
          jobs(1, 4)
          jobs(2, 5)
          jobs(3, 3)
          jobs(4, 3)

        constraints
          constraints(S, 1, A, E, 0)
          constraints(S, 4, A, E, 2)

    Possible output:

        t  0 1 2 3 4
        0  * - * * -
        1  * - * * -
        2  * - * * -
        3  * - * * -
        4  - * * - -
        5  - * * - -
        6  - * - - *
        7  - * - - *
        8  - * - - *
        9  - - - - *

    How can I code an efficient Java scheduler (avoiding intense backtracking, if possible) to manage jobs with these constraints, as described? I have seen a discussion in a forum thread where a user seems to have solved the problem easily, but he didn't give enough details for other users to build a working project (I'm a noob), and I'm interested in an effective implementation of the solution (without using external libraries). If someone helps me, I'll give them very good feedback ;) (See the sketch after this entry for one possible approach.)

    Read the article
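
    One possible approach (a sketch, not the only answer): treat every quintuple as a difference constraint between job start times and relax to a fixpoint, which yields earliest start times or detects contradictory constraints. It is shown in Java since that is what the question asks for; the class and method names are illustrative, and limits on how many jobs can run at once are deliberately ignored here.

        import java.util.*;

        // Sketch: convert each <S/E, job1, B/A, S/E, job2> constraint into a difference
        // constraint on start times and relax to a fixpoint (earliest-start schedule).
        // Job ids are assumed to be 0..n-1; durations are in abstract time units.
        public class JobScheduler {

            record Constraint(char ev1, int job1, char rel, char ev2, int job2) {}

            // durations[j] = duration of job j. Returns earliest start times,
            // or null if the constraints are cyclic (no feasible schedule).
            public static int[] earliestStarts(int[] durations, List<Constraint> constraints) {
                int n = durations.length;
                int[] start = new int[n];                       // all jobs start at t=0 initially

                // Normalize: each constraint becomes start[a] >= start[b] + delta.
                // Event offset: S -> 0, E -> duration of that job.
                // rel 'A': event1 after event2  -> a = job1, b = job2
                // rel 'B': event1 before event2 -> a = job2, b = job1 (swap sides)
                int[][] edges = new int[constraints.size()][3]; // {a, b, delta}
                for (int i = 0; i < constraints.size(); i++) {
                    Constraint c = constraints.get(i);
                    int off1 = c.ev1() == 'E' ? durations[c.job1()] : 0;
                    int off2 = c.ev2() == 'E' ? durations[c.job2()] : 0;
                    if (c.rel() == 'A') edges[i] = new int[]{c.job1(), c.job2(), off2 - off1};
                    else                edges[i] = new int[]{c.job2(), c.job1(), off1 - off2};
                }

                // Bellman-Ford style relaxation; still changing after n+1 passes => cycle.
                for (int pass = 0; pass <= n; pass++) {
                    boolean changed = false;
                    for (int[] e : edges) {
                        int candidate = start[e[1]] + e[2];
                        if (candidate > start[e[0]]) { start[e[0]] = candidate; changed = true; }
                    }
                    if (!changed) return start;
                }
                return null; // contradictory constraints
            }

            public static void main(String[] args) {
                // Example from the post: constraints(S,1,A,E,0) and constraints(S,4,A,E,2)
                int[] dur = {3, 4, 5, 3, 3};
                List<Constraint> cs = List.of(
                        new Constraint('S', 1, 'A', 'E', 0),
                        new Constraint('S', 4, 'A', 'E', 2));
                System.out.println(Arrays.toString(earliestStarts(dur, cs)));
            }
        }

    Running main() with the example input prints starts [0, 3, 0, 0, 5], i.e. job 1 begins once job 0 has finished and job 4 once job 2 has finished, the same ordering as the sample schedule above.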

  • Copying files SSH vs SFTP

    - by jackquack
    I'm a bit of a Unix noob, but this question seems super basic, yet I can't find an answer anywhere. Basically, to my knowledge, SFTP is just FTP over SSH. So why can't I drag and drop files from one folder to another on the server side, like I can over SSH? Why, when I want to unpack a .tar in a server folder, does it first want to copy it to my machine and then back? Why can't it just unpack it there, like it can when I'm using the command line? I know that when I use the command line it is using the resources of the remote machine, but why can't SFTP do that too? Is there a way to execute commands which I would normally run over SSH, but in a GUI? I've tried mapping the drive to my own machine, and I've tried so many SFTP clients that it's silly. Is there another class of program that I just don't know of? (See the sketch after this entry for how an SSH exec channel differs from an SFTP channel.)

    Read the article
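
    The gap the poster is running into is that SFTP is only the file-transfer subsystem of SSH; a command such as tar runs on the remote machine only over an exec (shell) channel. Below is a small sketch using the JSch library as an example (an assumption - any SSH library with exec and SFTP channels would do; the host, user, password and paths are placeholders).

        import com.jcraft.jsch.*;

        public class RemoteUntar {
            public static void main(String[] args) throws Exception {
                JSch jsch = new JSch();
                Session session = jsch.getSession("user", "example.com", 22);
                session.setConfig("StrictHostKeyChecking", "no"); // demo only
                session.setPassword("secret");                    // or use a key via jsch.addIdentity(...)
                session.connect();

                // SFTP channel: can only upload/download/rename files...
                ChannelSftp sftp = (ChannelSftp) session.openChannel("sftp");
                sftp.connect();
                sftp.put("local-archive.tar", "/home/user/archive.tar");
                sftp.disconnect();

                // ...whereas an exec channel runs the command on the server itself,
                // so the .tar is unpacked remotely without copying it back and forth.
                ChannelExec exec = (ChannelExec) session.openChannel("exec");
                exec.setCommand("tar -xf /home/user/archive.tar -C /home/user");
                exec.connect();
                while (!exec.isClosed()) Thread.sleep(100);       // wait for the command to finish
                exec.disconnect();

                session.disconnect();
            }
        }

    Any client that can open an SSH shell or custom command alongside its SFTP browser gives you that second capability; pure SFTP-only clients cannot.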

  • How to protect my VPS from winlogon RDP spam requests

    - by Valentin Kuzub
    I've got some hackers constantly hitting my RDP and generating thousands of audit failures in the event log. The password is pretty elaborate, so I don't think brute-forcing will get them anywhere. I am using a VPS and I am pretty much a noob in Windows Server security (I'm a programmer myself, and it's my web server for my site). What is a recommended approach to deal with this? I would rather block IPs after some number of failures, for example. Sorry if the question is not appropriate.

    Read the article

  • Back up Windows 2008 SBS to iSCSI disk

    - by Farseeker
    I've almost no experience with SBS 2008, so please excuse my noob question! SBS 2008 only has the most basic backup utility built in as far as I can tell (similar to Vista), and it will only back up to physical volumes. I've read that you can set up a batch task to backup to a network volume, but right now I just need to get something deployed ASAP. We have an iSCSI target with plenty of free space. Is it worth backing up to an iSCSI target? Or am I wasting my time? If I need to do a recovery from the iSCSI disk, how would I go about it?

    Read the article

  • file error /boot/grub/i386-pc/normal.mod trying to repair boot, live DVD install problem

    - by user179295
    I have seen that there are a lot of threads about this problem. I had Windows 8 installed on my Series 3 Samsung i5 computer and I tried to install Ubuntu 13.04. This is what I did: because of Secure Boot I couldn't install Ubuntu from the DVD, so I went into the BIOS, disabled Secure Boot and enabled "CSM". I left the BIOS and Windows 8 could no longer boot. So I followed a guide in this thread (Installing on a Pre-Installed Windows 8 System (UEFI Supported)), and from Ubuntu I tried to repair the boot by entering this in the terminal:

        sudo add-apt-repository ppa:yannubuntu/boot-repair
        sudo apt-get update
        sudo apt-get install boot-repair

    Then I ran Boot-Repair and followed all the steps. Then I rebooted the system and saw the black screen that says:

        error: file '/boot/grub/i386-pc/normal.mod' not found
        grub rescue>

    Now, I have seen a lot of guides about this problem, but I can't understand how to reinstall Ubuntu through the live DVD that I used to install it the first time... I put it in the computer but nothing appears, so what should I do now? I'm a noob on Ubuntu; I have read all the things about this GRUB 2 install and know where the problem comes from, but how do I get the DVD to start?

    Read the article

  • New XEN Server, Intel i7, Errors were encountered while processing: xen-linux-system-amd64

    - by Sheldon
    I have just got a new machine to run Xen VMs on. It has an Intel i7 processor: Intel Haswell Core i7-4790 3.6GHz 8MB LGA1150. I have set up the host with the current 6.2.0 release. I have set up a new Debian 7 64-bit VM, and any package I try to install fails with the following errors:

        Errors were encountered while processing:
         xen-utils-common
         xen-utils-4.1
         xen-system-amd64
         xen-linux-system-3.2.0-4-amd64
         xen-linux-system-amd64
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    Excuse my noob-ness, but should it even be running an AMD package? Any ideas on how to fix this? Thanks.

    Read the article

  • Ubuntu eats itself after I followed updater instruction

    - by Tony Martin
    The updater (I assume) put a no-entry-style alert icon on the panel which informed me that certain package dependencies were not up to snuff. Upgrades were thereafter only partial. The dialogue advised that I (and this is from noob memory) run sudo apt-get install -f. I did this, typed in the confirmation phrase, and watched apt-get systematically remove every component of Linux, both the stuff I installed and the core Ubuntu packages. I could only assume at this stage that this was for a fresh install, but of course I know better now. There's much complaint about Windows, but I've never met with advice from Microsoft tools to wipe out the operating system because of a couple of missing .dlls. So what gives? This was a 64-bit install of 12.04. All that is left is GRUB pointing to a couple of Windows recovery partitions on the hard drive. I'm tempted, but I have hopes of recovering the data that I had enough misguided faith to trust to the Linux ext4 partition. I've tried pen-driving back into it with a 32-bit ISO, but I'm simply informed that Ubuntu is running in low-graphics mode and get to watch the dots cycle indefinitely.

    EDIT: Thanks for the advice vis-à-vis the positive request. I've got onto the machine with a 64-bit stick and can see the file structure left behind by the installation. My first instinct was to run the installer from the stick, but it did not seem to offer a recovery option. My question then: is there a way to recover the current installation so that if I reinstall the packages I had, they will pick up the original settings? I'm particularly worried about losing email from Evolution - the rest I could probably lash back together. I would also be interested in how this disaster came about. I see people in the know recommending this same procedure in similar circumstances. Thanks for your attention, Tony Martin

    Read the article
